| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 5f121604f1060de50eb239906453408ed198385c3e6d5edfff0947bd79a643e0 | 2026-01-23T00:00:00-05:00 | CoNRec: Context-Discerning Negative Recommendation with LLMs | arXiv:2601.15721v1 Announce Type: new Abstract: Understanding what users like is relatively straightforward; understanding what users dislike, however, remains a challenging and underexplored problem. Research into users' negative preferences has gained increasing importance in modern recommendation systems. Numerous platforms have introduced explicit negative feedback mechanisms and leverage such signals to refine their recommendation models. Beyond traditional business metrics, user experience-driven metrics, such as negative feedback rates, have become critical indicators for evaluating system performance. However, most existing approaches primarily use negative feedback as an auxiliary signal to enhance positive recommendations, paying little attention to directly modeling negative interests, which can be highly valuable in offline applications. Moreover, due to the inherent sparsity of negative feedback data, models often suffer from context understanding biases induced by positive feedback dominance. To address these challenges, we propose the first large language model framework for negative feedback modeling with specially designed context-discerning modules. We use semantic ID representations to replace text-based item descriptions and introduce an item-level alignment task that enhances the LLM's understanding of the semantic context behind negative feedback. Furthermore, we design a Progressive GRPO training paradigm that enables the model to dynamically balance positive and negative behavioral context utilization. In addition, our investigation reveals a fundamental misalignment between the conventional next-negative-item prediction objective and users' true negative preferences, which is heavily influenced by the system's recommendation order. To mitigate this, we propose a novel reward function and evaluation metric grounded in multi-day future negative feedback and their collaborative signals. | https://arxiv.org/abs/2601.15721 | Academic Papers | svg |
| a6b47529884c1afb7517f5e43b8961ecc979c39698efb7e978798a072925a47d | 2026-01-23T00:00:00-05:00 | Communication-efficient Federated Graph Classification via Generative Diffusion Modeling | arXiv:2601.15722v1 Announce Type: new Abstract: Graph Neural Networks (GNNs) unlock new ways of learning from graph-structured data, proving highly effective in capturing complex relationships and patterns. Federated GNNs (FGNNs) have emerged as a prominent distributed learning paradigm for training GNNs over decentralized data. However, FGNNs face two significant challenges: high communication overhead from multiple rounds of parameter exchanges and non-IID data characteristics across clients. To address these issues, we introduce CeFGC, a novel FGNN paradigm that facilitates efficient GNN training over non-IID data by limiting communication between the server and clients to only three rounds. The core idea of CeFGC is to leverage generative diffusion models to minimize direct client-server communication. Each client trains a generative diffusion model that captures its local graph distribution and shares this model with the server, which then redistributes it back to all clients. Using these generative models, clients generate synthetic graphs that are combined with their local graphs to train local GNN models. Finally, clients upload their model weights to the server for aggregation into a global GNN model. We theoretically analyze the communication volume to show that CeFGC requires only a constant three communication rounds. Extensive experiments on several real graph datasets demonstrate the effectiveness and efficiency of CeFGC against state-of-the-art competitors, reflecting superior performance on non-IID graphs by aligning local and global model objectives and enriching the training set with diverse graphs. | https://arxiv.org/abs/2601.15722 | Academic Papers | svg |
| 83933f03767af23cf89e55a2988f1fbb90a1c32c62cc3493c72fa4cfb6c9ed21 | 2026-01-23T00:00:00-05:00 | Generalized Information Inequalities via Submodularity, and Two Combinatorial Problems | arXiv:2601.15723v1 Announce Type: new Abstract: It is well known that there is a strong connection between entropy inequalities and submodularity, since the entropy of a collection of random variables is a submodular function. Unifying frameworks for information inequalities arising from submodularity were developed by Madiman and Tetali (2010) and Sason (2022). Madiman and Tetali (2010) established strong and weak fractional inequalities that subsume classical results such as Han's inequality and Shearer's lemma. Sason (2022) introduced a convex-functional framework for generalizing Han's inequality, and derived unified inequalities for submodular and supermodular functions. In this work, we build on these frameworks and make three contributions. First, we establish convex-functional generalizations of the strong and weak Madiman and Tetali inequalities for submodular functions. Second, using a special case of the strong Madiman-Tetali inequality, we derive a new Loomis-Whitney-type projection inequality for finite point sets in $\mathbb{R}^d$, which improves upon the classical Loomis-Whitney bound by incorporating slice-level structural information. Finally, we study an extremal graph theory problem that recovers and extends the previously known results of Sason (2022) and Boucheron et al., employing Shearer's lemma in contrast to the use of Han's inequality in those works. | https://arxiv.org/abs/2601.15723 | Academic Papers | svg |
| 5f40e2e6c8174597a8eaaf82cc1a77531df042c67d4f11980cd423e98c07ec54 | 2026-01-23T00:00:00-05:00 | VideoThinker: Building Agentic VideoLLMs with LLM-Guided Tool Reasoning | arXiv:2601.15724v1 Announce Type: new Abstract: Long-form video understanding remains a fundamental challenge for current Video Large Language Models. Most existing models rely on static reasoning over uniformly sampled frames, which weakens temporal localization and leads to substantial information loss in long videos. Agentic tools such as temporal retrieval, spatial zoom, and temporal zoom offer a natural way to overcome these limitations by enabling adaptive exploration of key moments. However, constructing agentic video understanding data requires models that already possess strong long-form video comprehension, creating a circular dependency. We address this challenge with VideoThinker, an agentic Video Large Language Model trained entirely on synthetic tool interaction trajectories. Our key idea is to convert videos into rich captions and employ a powerful agentic language model to generate multi-step tool use sequences in caption space. These trajectories are subsequently grounded back to video by replacing captions with the corresponding frames, yielding a large-scale interleaved video and tool reasoning dataset without requiring any long-form understanding from the underlying model. Training on this synthetic agentic dataset equips VideoThinker with dynamic reasoning capabilities, adaptive temporal exploration, and multi-step tool use. Remarkably, VideoThinker significantly outperforms both caption-only language model agents and strong video model baselines across long-video benchmarks, demonstrating the effectiveness of tool augmented synthetic data and adaptive retrieval and zoom reasoning for long-form video understanding. | https://arxiv.org/abs/2601.15724 | Academic Papers | svg |
| 450c4427241dd13e0f4dd9df4dfa1739244110cf7efdc433323f75a41d97ed1f | 2026-01-23T00:00:00-05:00 | Profit Maximization for Viral Marketing in Online Social Networks using Two Phase Diffusion Approach | arXiv:2601.15726v1 Announce Type: new Abstract: Nowadays, Online Social Networks (OSNs) are extensively used by different commercial houses for viral marketing. The key problem that arises in this context is to choose a limited number of highly influential users as the initial adopters of a brand such that the influence regarding the brand in the network is maximized. Deviating from this standard setting, in this paper, we study the problem where every user of the network is associated with a selection cost and a benefit value. This benefit value can be earned from the user if (s)he is influenced by the brand. A fixed amount of budget is allocated for selecting the seed users. The goal is to choose a set of seed users within the budget such that the profit is maximized. We propose a two phase diffusion model for this problem, where the diffusion process is split into two phases and, accordingly, the budget is split into two halves. We first spend the first half of the budget to select seed users for the first phase, observe the diffusion for a few rounds, then deploy the seed users for the second phase and complete the diffusion process. We prove several properties of the two phase influence function. Three solution approaches are proposed for our problem with detailed analysis and illustrative examples. We conduct a number of experiments with three real-world social network datasets. From the experiments, we observe that the two phase diffusion approach leads to more profit than single-phase diffusion. In particular, for most instances, this improvement is greater than 18% and reaches as high as 40% with the proposed methodologies. | https://arxiv.org/abs/2601.15726 | Academic Papers | svg |
| 833303cd1af8faff89df8a1a540e22be0c83dff4d08805ece0090ea298360a90 | 2026-01-23T00:00:00-05:00 | Towards Automated Kernel Generation in the Era of LLMs | arXiv:2601.15727v1 Announce Type: new Abstract: The performance of modern AI systems is fundamentally constrained by the quality of their underlying kernels, which translate high-level algorithmic semantics into low-level hardware operations. Achieving near-optimal kernels requires expert-level understanding of hardware architectures and programming models, making kernel engineering a critical but notoriously time-consuming and non-scalable process. Recent advances in large language models (LLMs) and LLM-based agents have opened new possibilities for automating kernel generation and optimization. LLMs are well-suited to compress expert-level kernel knowledge that is difficult to formalize, while agentic systems further enable scalable optimization by casting kernel development as an iterative, feedback-driven loop. Rapid progress has been made in this area. However, the field remains fragmented, lacking a systematic perspective for LLM-driven kernel generation. This survey addresses this gap by providing a structured overview of existing approaches, spanning LLM-based approaches and agentic optimization workflows, and systematically compiling the datasets and benchmarks that underpin learning and evaluation in this domain. Moreover, key open challenges and future research directions are further outlined, aiming to establish a comprehensive reference for the next generation of automated kernel optimization. To keep track of this field, we maintain an open-source GitHub repository at https://github.com/flagos-ai/awesome-LLM-driven-kernel-generation. | https://arxiv.org/abs/2601.15727 | Academic Papers | svg |
812a3e987a24960ae2cf7e5dda006ca4cd0b3c780c834f78d9523026211a1d67
|
2026-01-23T00:00:00-05:00
|
Benchmarking Text-to-Python against Text-to-SQL: The Impact of Explicit Logic and Ambiguity
|
arXiv:2601.15728v1 Announce Type: new Abstract: While Text-to-SQL remains the dominant approach for database interaction, real-world analytics increasingly require the flexibility of general-purpose programming languages such as Python or Pandas to manage file-based data and complex analytical workflows. Despite this growing need, the reliability of Text-to-Python in core data retrieval remains underexplored relative to the mature SQL ecosystem. To address this gap, we introduce BIRD-Python, a benchmark designed for cross-paradigm evaluation. We systematically refined the original dataset to reduce annotation noise and align execution semantics, thereby establishing a consistent and standardized baseline for comparison. Our analysis reveals a fundamental paradigmatic divergence: whereas SQL leverages implicit DBMS behaviors through its declarative structure, Python requires explicit procedural logic, making it highly sensitive to underspecified user intent. To mitigate this challenge, we propose the Logic Completion Framework (LCF), which resolves ambiguity by incorporating latent domain knowledge into the generation process. Experimental results show that (1) performance differences primarily stem from missing domain context rather than inherent limitations in code generation, and (2) when these gaps are addressed, Text-to-Python achieves performance parity with Text-to-SQL. These findings establish Python as a viable foundation for analytical agents-provided that systems effectively ground ambiguous natural language inputs in executable logical specifications. Resources are available at https://anonymous.4open.science/r/Bird-Python-43B7/.
|
https://arxiv.org/abs/2601.15728
|
Academic Papers
|
svg
|
| 1de0950ef26482d463a50818e95ac17fc7b82fb17e4c45ea8a1e5c5d3b9dd9a8 | 2026-01-23T00:00:00-05:00 | DualShield: Safe Model Predictive Diffusion via Reachability Analysis for Interactive Autonomous Driving | arXiv:2601.15729v1 Announce Type: new Abstract: Diffusion models have emerged as a powerful approach for multimodal motion planning in autonomous driving. However, their practical deployment is typically hindered by the inherent difficulty in enforcing vehicle dynamics and a critical reliance on accurate predictions of other agents, making them prone to safety issues under uncertain interactions. To address these limitations, we introduce DualShield, a planning and control framework that leverages Hamilton-Jacobi (HJ) reachability value functions in a dual capacity. First, the value functions act as proactive guidance, steering the diffusion denoising process towards safe and dynamically feasible regions. Second, they form a reactive safety shield using control barrier-value functions (CBVFs) to modify the executed actions and ensure safety. This dual mechanism preserves the rich exploration capabilities of diffusion models while providing principled safety assurance under uncertain and even adversarial interactions. Simulations in challenging unprotected U-turn scenarios demonstrate that DualShield significantly improves both safety and task efficiency compared to leading methods from different planning paradigms under uncertainty. | https://arxiv.org/abs/2601.15729 | Academic Papers | svg |
| 36053b5270c8821910700de261e115c21a02e69aaa2524cec5007970c62a78e6 | 2026-01-23T00:00:00-05:00 | FAIR-ESI: Feature Adaptive Importance Refinement for Electrophysiological Source Imaging | arXiv:2601.15731v1 Announce Type: new Abstract: An essential technique for diagnosing brain disorders is electrophysiological source imaging (ESI). While model-based optimization and deep learning methods have achieved promising results in this field, the accurate selection and refinement of features remains a central challenge for precise ESI. This paper proposes FAIR-ESI, a novel framework that adaptively refines feature importance across different views, including FFT-based spectral feature refinement, weighted temporal feature refinement, and self-attention-based patch-wise feature refinement. Extensive experiments on two simulation datasets with diverse configurations and two real-world clinical datasets validate our framework's efficacy, highlighting its potential to advance brain disorder diagnosis and offer new insights into brain function. | https://arxiv.org/abs/2601.15731 | Academic Papers | svg |
| ac69b9defa308abe71affe98e5fab8a6c0cf806e25d784ae12beb3382e7b0904 | 2026-01-23T00:00:00-05:00 | Sub-Region-Aware Modality Fusion and Adaptive Prompting for Multi-Modal Brain Tumor Segmentation | arXiv:2601.15734v1 Announce Type: new Abstract: The successful adaptation of foundation models to multi-modal medical imaging is a critical yet unresolved challenge. Existing models often struggle to effectively fuse information from multiple sources and adapt to the heterogeneous nature of pathological tissues. To address this, we introduce a novel framework for adapting foundation models to multi-modal medical imaging, featuring two key technical innovations: sub-region-aware modality attention and adaptive prompt engineering. The attention mechanism enables the model to learn the optimal combination of modalities for each tumor sub-region, while the adaptive prompting strategy leverages the inherent capabilities of foundation models to refine segmentation accuracy. We validate our framework on the BraTS 2020 brain tumor segmentation dataset, demonstrating that our approach significantly outperforms baseline methods, particularly in the challenging necrotic core sub-region. Our work provides a principled and effective approach to multi-modal fusion and prompting, paving the way for more accurate and robust foundation model-based solutions in medical imaging. | https://arxiv.org/abs/2601.15734 | Academic Papers | svg |
| b3e00489c82b8e534c12fb149367f59b18db23092f451b12a8e6d06e1a3d2319 | 2026-01-23T00:00:00-05:00 | PhysProver: Advancing Automatic Theorem Proving for Physics | arXiv:2601.15737v1 Announce Type: new Abstract: The combination of verifiable languages and LLMs has significantly influenced both the mathematical and computer science communities because it provides a rigorous foundation for theorem proving. Recent advancements in the field provide foundation models and sophisticated agentic systems pushing the boundaries of formal mathematical reasoning to approach the natural language capability of LLMs. However, little attention has been given to formal physics reasoning, which also heavily relies on similar problem-solving and theorem-proving frameworks. To solve this problem, this paper presents, to the best of our knowledge, the first approach to enhance formal theorem proving in the physics domain. We compose a dedicated dataset, PhysLeanData, for the task. It is composed of theorems sampled from PhysLean and data generated by a conjecture-based formal data generation pipeline. In the training pipeline, we leverage DeepSeek-Prover-V2-7B, a strong open-source mathematical theorem prover, and apply Reinforcement Learning with Verifiable Rewards (RLVR) to train our model, PhysProver. Comprehensive experiments demonstrate that, using only $\sim$5K training samples, PhysProver achieves an overall 2.4\% improvement across multiple sub-domains. Furthermore, after formal physics training, we observe 1.3\% gains on the MiniF2F-Test benchmark, which indicates non-trivial generalization beyond physics domains and enhanced formal math capability as well. The results highlight the effectiveness and efficiency of our approach, which provides a paradigm for extending formal provers outside mathematical domains. To foster further research, we will release both our dataset and model to the community. | https://arxiv.org/abs/2601.15737 | Academic Papers | svg |
| cb209bbe565351db844c4a3dfa7a90d54c127b79b4eccca504e9430def6d5981 | 2026-01-23T00:00:00-05:00 | LLM-Assisted Automatic Dispatching Rule Design for Dynamic Flexible Assembly Flow Shop Scheduling | arXiv:2601.15738v1 Announce Type: new Abstract: Dynamic multi-product delivery environments demand rapid coordination of part completion and product-level kitting within hybrid processing and assembly systems to satisfy strict hierarchical supply constraints. The flexible assembly flow shop scheduling problem formally defines dependencies for multi-stage kitting, yet dynamic variants make designing integrated scheduling rules under multi-level time coupling highly challenging. Existing automated heuristic design methods, particularly genetic programming constrained to fixed terminal symbol sets, struggle to capture and leverage dynamic uncertainties and hierarchical dependency information under transient decision states. This study develops an LLM-assisted Dynamic Rule Design framework (LLM4DRD) that automatically evolves integrated online scheduling rules adapted to scheduling features. First, multi-stage processing and assembly supply decisions are transformed into feasible directed edge orderings based on a heterogeneous graph. Then, an elite-knowledge-guided initialization embeds advanced design expertise into initial rules to enhance initial quality. Additionally, a dual-expert mechanism is introduced in which LLM-A evolves code to generate candidate rules and LLM-S conducts scheduling evaluation, while dynamic feature-fitting rule evolution combined with hybrid evaluation enables continuous improvement and extracts adaptive rules with strong generalization capability. A series of experiments is conducted to validate the effectiveness of the method. The average tardiness of LLM4DRD is 3.17-12.39% lower than that of state-of-the-art methods on 20 practical instances used for training and testing. In 24 scenarios with different resource configurations, order loads, and disturbance levels, totaling 480 instances, it achieves 11.10% higher performance than the second best competitor, exhibiting excellent robustness. | https://arxiv.org/abs/2601.15738 | Academic Papers | svg |
| 414e478367c0d7552251b9013bf842eaebb35e6e3e6ee5969617a984ca017ad2 | 2026-01-23T00:00:00-05:00 | Breaking the Resolution Barrier: Arbitrary-resolution Deep Image Steganography Framework | arXiv:2601.15739v1 Announce Type: new Abstract: Deep image steganography (DIS) has achieved significant results in capacity and invisibility. However, current paradigms force the secret image to maintain the same resolution as the cover image during hiding and revealing. This leads to two challenges: secret images with inconsistent resolutions must undergo resampling beforehand, which results in detail loss during recovery, and the secret image cannot be recovered to its original resolution when the resolution value is unknown. To address these, we propose ARDIS, the first Arbitrary Resolution DIS framework, which shifts the paradigm from discrete mapping to reference-guided continuous signal reconstruction. Specifically, to minimize the detail loss caused by resolution mismatch, we first design a Frequency Decoupling Architecture in the hiding stage. It disentangles the secret into a resolution-aligned global basis and a resolution-agnostic high-frequency latent to hide in a fixed-resolution cover. Second, for recovery, we propose a Latent-Guided Implicit Reconstructor to perform deterministic restoration. The recovered detail latent code modulates a continuous implicit function to accurately query and render high-frequency residuals onto the recovered global basis, ensuring faithful restoration of original details. Furthermore, to achieve blind recovery, we introduce an Implicit Resolution Coding strategy. By transforming discrete resolution values into dense feature maps and hiding them in the redundant space of the feature domain, the reconstructor can correctly decode the secret's resolution directly from the steganographic representation. Experimental results demonstrate that ARDIS significantly outperforms state-of-the-art methods in both invisibility and cross-resolution recovery fidelity. | https://arxiv.org/abs/2601.15739 | Academic Papers | svg |
| bbcc12ff6bf6c8a2f3f10f079795074499d13f5de321f548dd8e27cf82405086 | 2026-01-23T00:00:00-05:00 | Hallucination Mitigating for Medical Report Generation | arXiv:2601.15745v1 Announce Type: new Abstract: In the realm of medical report generation (MRG), the integration of natural language processing has emerged as a vital tool to alleviate the workload of radiologists. Despite the impressive capabilities demonstrated by large vision language models (LVLMs) in understanding natural language, their susceptibility to generating plausible yet inaccurate claims, known as ``hallucinations'', raises concerns, especially in the nuanced and critical field of medicine. In this work, we introduce a framework, \textbf{K}nowledge-\textbf{E}nhanced with Fine-Grained \textbf{R}einforced Rewards \textbf{M}edical Report Generation (KERM), to tackle the issue. Our approach refines the input to the LVLM by first utilizing MedCLIP for knowledge retrieval, incorporating relevant lesion fact sentences from a curated knowledge corpus. We then introduce a novel purification module to ensure the retrieved knowledge is contextually relevant to the patient's clinical context. Subsequently, we employ fine-grained rewards to guide these models in generating highly supportive and clinically relevant descriptions, ensuring the alignment of the model's outputs with desired behaviors. Experimental results on the IU-Xray and MIMIC-CXR datasets validate the effectiveness of our approach in mitigating hallucinations and enhancing report quality. | https://arxiv.org/abs/2601.15745 | Academic Papers | svg |
| 00fc12932a84fd9942f9a327ddf32b8506d8dc276d3182d99c83d2264ab44ba5 | 2026-01-23T00:00:00-05:00 | Tabular Incremental Inference | arXiv:2601.15751v1 Announce Type: new Abstract: Tabular data is a fundamental form of data structure. The evolution of table analysis tools reflects humanity's continuous progress in data acquisition, management, and processing. Dynamic changes in table columns arise from technological advancements, changing needs, data integration, etc. However, the standard process of training AI models on tables with fixed columns and then performing inference is not suitable for handling dynamically changing tables. Therefore, new methods are needed for efficiently handling such tables in an unsupervised manner. In this paper, we introduce a new task, Tabular Incremental Inference (TabII), which aims to enable trained models to incorporate new columns during the inference stage, enhancing the practicality of AI models in scenarios where tables change dynamically. Furthermore, we demonstrate that this new task can be framed as an optimization problem based on information bottleneck theory, which emphasizes that the key to an ideal tabular incremental inference approach lies in minimizing the mutual information between the tabular data and the representation while maximizing that between the representation and the task labels. Under this guidance, we design a TabII method with Large Language Model placeholders and a Pretrained TabAdapter to provide external knowledge, and Incremental Sample Condensation blocks to condense the task-relevant information given by incremental column attributes. Experimental results across eight public datasets show that TabII effectively utilizes incremental attributes, achieving state-of-the-art performance. | https://arxiv.org/abs/2601.15751 | Academic Papers | svg |
| 4923f282c2375b949aa3c671fa335e399c0113c2bb2614a15a4e43402bfbe5e9 | 2026-01-23T00:00:00-05:00 | CAFE-GB: Scalable and Stable Feature Selection for Malware Detection via Chunk-wise Aggregated Gradient Boosting | arXiv:2601.15754v1 Announce Type: new Abstract: High-dimensional malware datasets often exhibit feature redundancy, instability, and scalability limitations, which hinder the effectiveness and interpretability of machine learning-based malware detection systems. Although feature selection is commonly employed to mitigate these issues, many existing approaches lack robustness when applied to large-scale and heterogeneous malware data. To address this gap, this paper proposes CAFE-GB (Chunk-wise Aggregated Feature Estimation using Gradient Boosting), a scalable feature selection framework designed to produce stable and globally consistent feature rankings for high-dimensional malware detection. CAFE-GB partitions training data into overlapping chunks, estimates local feature importance using gradient boosting models, and aggregates these estimates to derive a robust global ranking. Feature budget selection is performed separately through a systematic k-selection and stability analysis to balance detection performance and robustness. The proposed framework is evaluated on two large-scale malware datasets, BODMAS and CIC-AndMal2020, representing large and diverse malware feature spaces. Experimental results show that classifiers trained on CAFE-GB-selected features achieve performance parity with full-feature baselines across multiple metrics, including Accuracy, F1-score, MCC, ROC-AUC, and PR-AUC, while reducing feature dimensionality by more than 95\%. Paired Wilcoxon signed-rank tests confirm that this reduction does not introduce statistically significant performance degradation. Additional analyses demonstrate low inter-feature redundancy and improved interpretability through SHAP-based explanations. Runtime and memory profiling further indicate reduced downstream classification overhead. Overall, CAFE-GB provides a stable, interpretable, and scalable feature selection strategy for large-scale malware detection. | https://arxiv.org/abs/2601.15754 | Academic Papers | svg |
| 9670539357e76e32a04103eda8a42438c4fcb13043de49252b12f1c9b67013dd | 2026-01-23T00:00:00-05:00 | Beyond Marginal Distributions: A Framework to Evaluate the Representativeness of Demographic-Aligned LLMs | arXiv:2601.15755v1 Announce Type: new Abstract: Large language models are increasingly used to represent human opinions, values, or beliefs, and their steerability towards these ideals is an active area of research. Existing work focuses predominantly on aligning marginal response distributions, treating each survey item independently. While essential, this may overlook deeper latent structures that characterise real populations and underpin cultural values theories. We propose a framework for evaluating the representativeness of aligned models through multivariate correlation patterns in addition to marginal distributions. We show the value of our evaluation scheme by comparing two model steering techniques (persona prompting and demographic fine-tuning) and evaluating them against human responses from the World Values Survey. While the demographically fine-tuned model better approximates marginal response distributions than persona prompting, both techniques fail to fully capture the gold standard correlation patterns. We conclude that representativeness is a distinct aspect of value alignment and an evaluation focused on marginals can mask structural failures, leading to overly optimistic conclusions about model capabilities. | https://arxiv.org/abs/2601.15755 | Academic Papers | svg |
| 7112ede4a95dbe6f3529275d308ce9af38a9f17d5d33b2c3b3c19cc24def2818 | 2026-01-23T00:00:00-05:00 | CTL* Model Checking on Infinite Families of Finite-State Labeled Transition Systems (Technical Report) | arXiv:2601.15756v1 Announce Type: new Abstract: We study model checking algorithms for infinite families of finite-state labeled transition systems against temporal properties written in CTL*. Such families arise, for example, as models of highly configurable systems or software product lines. We model families using context-free graph grammars. We then develop a state labeling algorithm that works compositionally on the grammar's production rules with limited information about the context in which the rule is applied. The result is a graph grammar modeling the same family but with extended labels. We leverage this grammar to decide whether all, some, or (in)finitely many members of a family satisfy a given temporal property. We have implemented our algorithms and present early experiments. | https://arxiv.org/abs/2601.15756 | Academic Papers | svg |
9a084787a9c5af4d7fca012bf173ebb7c2dbfc8b113766de71137021671d357c
|
2026-01-23T00:00:00-05:00
|
White-Box mHC: Electromagnetic Spectrum-Aware and Interpretable Stream Interactions for Hyperspectral Image Classification
|
arXiv:2601.15757v1 Announce Type: new Abstract: In hyperspectral image classification (HSIC), most deep learning models rely on opaque spectral-spatial feature mixing, limiting their interpretability and hindering understanding of internal decision mechanisms. We present a physical-spectrum-aware white-box mHC, named ES-mHC, a hyper-connection framework that explicitly models interactions among different electromagnetic spectrum groupings (the residual streams in mHC) using structured, directional matrices. By separating feature representation from interaction structure, ES-mHC promotes electromagnetic spectrum grouping specialization, reduces redundancy, and exposes internal information flow that can be directly visualized and spatially analyzed. Using hyperspectral image classification as a representative testbed, we demonstrate that the learned hyper-connection matrices exhibit coherent spatial patterns and asymmetric interaction behaviors, providing mechanistic insight into the model's internal dynamics. Furthermore, we find that increasing the expansion rate accelerates the emergence of structured interaction patterns. These results suggest that ES-mHC transforms HSIC from a purely black-box prediction task into a structurally transparent, partially white-box learning process.
|
https://arxiv.org/abs/2601.15757
|
Academic Papers
|
svg
|
611d00f7e4991156ffc470e084ec92bc1b063879be72cd1db44595615f147c6d
|
2026-01-23T00:00:00-05:00
|
NL4ST: A Natural Language Query Tool for Spatio-Temporal Databases
|
arXiv:2601.15758v1 Announce Type: new Abstract: The advancement of mobile computing devices and positioning technologies has led to an explosive growth of spatio-temporal data managed in databases. Representative queries over such data include range queries, nearest neighbor queries, and join queries. However, formulating those queries usually requires domain-specific expertise and familiarity with executable query languages, which is a challenging task for non-expert users. This leads to a great demand for well-supported natural language queries (NLQs) over spatio-temporal databases. To bridge the gap between non-experts and query plans in databases, we present NL4ST, an interactive tool that allows users to query spatio-temporal databases in natural language. NL4ST features a three-layer architecture: (i) knowledge base and corpus for knowledge preparation, (ii) natural language understanding for entity linking, and (iii) physical plan generation. Our demonstration will showcase how NL4ST provides effective spatio-temporal physical plans, verified using four real and synthetic datasets. We make NL4ST available online and provide a demo video at https://youtu.be/-J1R7R5WoqQ.
|
https://arxiv.org/abs/2601.15758
|
Academic Papers
|
svg
|
cfceee272cbd4f96fcc624c3854c5e4151a90bca4d21dd0506d24a582a128c1c
|
2026-01-23T00:00:00-05:00
|
Atlas-Assisted Segment Anything Model for Fetal Brain MRI (FeTal-SAM)
|
arXiv:2601.15759v1 Announce Type: new Abstract: This paper presents FeTal-SAM, a novel adaptation of the Segment Anything Model (SAM) tailored for fetal brain MRI segmentation. Traditional deep learning methods often require large annotated datasets for a fixed set of labels, making them inflexible when clinical or research needs change. By integrating atlas-based prompts and foundation-model principles, FeTal-SAM addresses two key limitations in fetal brain MRI segmentation: (1) the need to retrain models for varying label definitions, and (2) the lack of insight into whether segmentations are driven by genuine image contrast or by learned spatial priors. We leverage multi-atlas registration to generate spatially aligned label templates that serve as dense prompts, alongside a bounding-box prompt, for SAM's segmentation decoder. This strategy enables binary segmentation on a per-structure basis, which is subsequently fused to reconstruct the full 3D segmentation volumes. Evaluations on two datasets, the dHCP dataset and an in-house dataset, demonstrate FeTal-SAM's robust performance across gestational ages. Notably, it achieves Dice scores comparable to state-of-the-art baselines, which were trained for each dataset and label definition, on well-contrasted structures like the cortical plate and cerebellum, while maintaining the flexibility to segment any user-specified anatomy. Although slightly lower accuracy is observed for subtle, low-contrast structures (e.g., hippocampus, amygdala), our results highlight FeTal-SAM's potential to serve as a general-purpose segmentation model without exhaustive retraining. This method thus constitutes a promising step toward clinically adaptable fetal brain MRI analysis tools.
|
https://arxiv.org/abs/2601.15759
|
Academic Papers
|
svg
|
7d46931a95bb03edc853003033582b03d8b52b313f968ad955aeb02f8f705e8a
|
2026-01-23T00:00:00-05:00
|
Off-Policy Actor-Critic with Sigmoid-Bounded Entropy for Real-World Robot Learning
|
arXiv:2601.15761v1 Announce Type: new Abstract: Deploying reinforcement learning in the real world remains challenging due to sample inefficiency, sparse rewards, and noisy visual observations. Prior work leverages demonstrations and human feedback to improve learning efficiency and robustness. However, offline-to-online methods need large datasets and can be unstable, while VLA-assisted RL relies on large-scale pretraining and fine-tuning. As a result, a low-cost real-world RL method with minimal data requirements has yet to emerge. We introduce \textbf{SigEnt-SAC}, an off-policy actor-critic method that learns from scratch using a single expert trajectory. Our key design is a sigmoid-bounded entropy term that prevents negative-entropy-driven optimization toward out-of-distribution actions and reduces Q-function oscillations. We benchmark SigEnt-SAC on D4RL tasks against representative baselines. Experiments show that SigEnt-SAC substantially alleviates Q-function oscillations and reaches a 100\% success rate faster than prior methods. Finally, we validate SigEnt-SAC on four real-world robotic tasks across multiple embodiments, where agents learn from raw images and sparse rewards; results demonstrate that SigEnt-SAC can learn successful policies with only a small number of real-world interactions, suggesting a low-cost and practical pathway for real-world RL deployment.
|
https://arxiv.org/abs/2601.15761
|
Academic Papers
|
svg
|
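The SigEnt-SAC abstract above hinges on a sigmoid-bounded entropy term that keeps the actor's entropy bonus from turning negative and driving optimization toward out-of-distribution actions. A minimal sketch of how such a bound could behave is below; the scale `alpha` and the exact functional form are illustrative assumptions, not the paper's definition:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def bounded_entropy_bonus(entropy: float, alpha: float = 1.0) -> float:
    """Map a raw (possibly negative) policy entropy into the bounded
    interval (0, alpha), so the actor objective never receives a large
    negative-entropy gradient signal."""
    return alpha * sigmoid(entropy)

# A very negative raw entropy no longer yields a large negative bonus:
raw = [-5.0, 0.0, 5.0]
bonuses = [bounded_entropy_bonus(h) for h in raw]
```

Because the sigmoid saturates, the bonus stays in (0, alpha) regardless of how extreme the raw entropy becomes, which is the stabilizing property the abstract attributes to the method.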
f3cf9a8bbe78c2176f937bde688f72bace70b1f99f0a7a6dacc700a802eee393
|
2026-01-23T00:00:00-05:00
|
NMRGym: A Comprehensive Benchmark for Nuclear Magnetic Resonance Based Molecular Structure Elucidation
|
arXiv:2601.15763v1 Announce Type: new Abstract: Nuclear Magnetic Resonance (NMR) spectroscopy is the cornerstone of small-molecule structure elucidation. While deep learning has demonstrated significant potential in automating structure elucidation and spectral simulation, current progress is severely impeded by the reliance on synthetic datasets, which introduces significant domain shifts when applied to real-world experimental spectra. Furthermore, the lack of standardized evaluation protocols and rigorous data splitting strategies frequently leads to unfair comparisons and data leakage. To address these challenges, we introduce \textbf{NMRGym}, the largest and most comprehensive standardized dataset and benchmark derived from high-quality experimental NMR data to date. Comprising \textbf{269,999} unique molecules paired with high-fidelity $^1$H and $^{13}$C spectra, NMRGym bridges the critical gap between synthetic approximations and real-world diversity. We implement a strict quality control pipeline and unify data formats to ensure fair comparison. To strictly prevent data leakage, we enforce a scaffold-based split. Additionally, we provide fine-grained peak-atom level annotations to support future use. Leveraging this resource, we establish a comprehensive evaluation suite covering diverse downstream tasks, including structure elucidation, functional group prediction from NMR, toxicity prediction from NMR, and spectral simulation, benchmarking representative state-of-the-art methodologies. Finally, we release an open-source, automated leaderboard to foster community collaboration and standardize future research. The dataset, benchmark and leaderboard are publicly available at \textcolor{blue}{https://AIMS-Lab-HKUSTGZ.github.io/NMRGym/}.
|
https://arxiv.org/abs/2601.15763
|
Academic Papers
|
svg
|
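NMRGym's leakage prevention rests on a scaffold-based split: whole scaffold groups, never individual molecules, are assigned to one side of the split. A minimal sketch of that idea, assuming a precomputed scaffold key per molecule (real pipelines typically derive Bemis-Murcko scaffolds, e.g. with RDKit, and the assignment heuristic below is an assumption, not NMRGym's exact procedure):

```python
from collections import defaultdict

def scaffold_split(mol_ids, scaffolds, test_frac=0.2):
    """Group molecules by scaffold key, then assign whole scaffold
    groups (smallest first, a common heuristic) to the test set until
    it reaches roughly test_frac of the data. This guarantees no
    scaffold appears in both train and test."""
    groups = defaultdict(list)
    for mid, scaf in zip(mol_ids, scaffolds):
        groups[scaf].append(mid)
    ordered = sorted(groups.values(), key=len)  # smallest groups first
    n_test = int(round(test_frac * len(mol_ids)))
    train, test = [], []
    for grp in ordered:
        (test if len(test) < n_test else train).extend(grp)
    return train, test

# Toy example: three scaffolds A, B, C with two molecules each.
mols = ["m1", "m2", "m3", "m4", "m5", "m6"]
scafs = ["A", "A", "B", "B", "C", "C"]
train, test = scaffold_split(mols, scafs, test_frac=0.34)
```

Because assignment happens at the group level, a scaffold seen at test time is guaranteed to be unseen during training, which is exactly the property a random per-molecule split fails to provide.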
665bfc0552f7c95bd07271962ca7180558b04c8fa7e7494cf66fc87f374a5db5
|
2026-01-23T00:00:00-05:00
|
LL-GaussianMap: Zero-shot Low-Light Image Enhancement via 2D Gaussian Splatting Guided Gain Maps
|
arXiv:2601.15766v1 Announce Type: new Abstract: Significant progress has been made in low-light image enhancement with respect to visual quality. However, most existing methods primarily operate in the pixel domain or rely on implicit feature representations. As a result, the intrinsic geometric structural priors of images are often neglected. 2D Gaussian Splatting (2DGS) has emerged as a prominent explicit scene representation technique characterized by superior structural fitting capabilities and high rendering efficiency. Despite these advantages, the utilization of 2DGS in low-level vision tasks remains unexplored. To bridge this gap, LL-GaussianMap is proposed as the first unsupervised framework incorporating 2DGS into low-light image enhancement. Distinct from conventional methodologies, the enhancement task is formulated as a gain map generation process guided by 2DGS primitives. The proposed method comprises two primary stages. First, high-fidelity structural reconstruction is executed utilizing 2DGS. Then, data-driven enhancement dictionary coefficients are rendered via the rasterization mechanism of Gaussian splatting through an innovative unified enhancement module. This design effectively incorporates the structural perception capabilities of 2DGS into gain map generation, thereby preserving edges and suppressing artifacts during enhancement. Additionally, the reliance on paired data is circumvented through unsupervised learning. Experimental results demonstrate that LL-GaussianMap achieves superior enhancement performance with an extremely low storage footprint, highlighting the effectiveness of explicit Gaussian representations for image enhancement.
|
https://arxiv.org/abs/2601.15766
|
Academic Papers
|
svg
|
1d44163d1949f10ba15cdf3b26e1816c30a82d92134f9a3d1cf5f98dc25681ac
|
2026-01-23T00:00:00-05:00
|
Recursive Flow: A Generative Framework for MIMO Channel Estimation
|
arXiv:2601.15767v1 Announce Type: new Abstract: Channel estimation is a fundamental challenge in massive multiple-input multiple-output systems, where estimation accuracy governs the spectral efficiency and link reliability. In this work, we introduce Recursive Flow (RC-Flow), a novel solver that leverages pre-trained flow matching priors to robustly recover channel state information from noisy, under-determined measurements. Different from conventional open-loop generative models, our approach establishes a closed-loop refinement framework via a serial restart mechanism and anchored trajectory rectification. By synergizing flow-consistent prior directions with data-fidelity proximal projections, the proposed RC-Flow achieves robust channel reconstruction and delivers state-of-the-art performance across diverse noise levels, particularly in noise-dominated scenarios. The framework is further augmented by an adaptive dual-scheduling strategy, offering flexible management of the trade-off between convergence speed and reconstruction accuracy. Theoretically, we analyze the Jacobian spectral radius of the recursive operator to prove its global asymptotic stability. Numerical results demonstrate that RC-Flow reduces inference latency by two orders of magnitude while achieving a 2.7 dB performance gain in low signal-to-noise ratio regimes compared to the score-based baseline.
|
https://arxiv.org/abs/2601.15767
|
Academic Papers
|
svg
|
453a65fa04183c26993d5012d4b7774a866cbf593e8aa52d55a204af0e138537
|
2026-01-23T00:00:00-05:00
|
Rethinking Drug-Drug Interaction Modeling as Generalizable Relation Learning
|
arXiv:2601.15771v1 Announce Type: new Abstract: Drug-drug interaction (DDI) prediction is central to drug discovery and clinical development, particularly in the context of increasingly prevalent polypharmacy. Although existing computational methods achieve strong performance on standard benchmarks, they often fail to generalize to realistic deployment scenarios, where most candidate drug pairs involve previously unseen drugs and validated interactions are scarce. We demonstrate that proximity in the embedding spaces of prevailing molecule-centric DDI models does not reliably correspond to interaction labels, and that simply scaling up model capacity therefore fails to improve generalization. To address these limitations, we propose GenRel-DDI, a generalizable relation learning framework that reformulates DDI prediction as a relation-centric learning problem, in which interaction representations are learned independently of drug identities. This relation-level abstraction enables the capture of transferable interaction patterns that generalize to unseen drugs and novel drug pairs. Extensive experiments across multiple benchmarks demonstrate that GenRel-DDI consistently and significantly outperforms state-of-the-art methods, with particularly large gains on strict entity-disjoint evaluations, highlighting the effectiveness and practical utility of relation learning for robust DDI prediction. The code is available at https://github.com/SZU-ADDG/GenRel-DDI.
|
https://arxiv.org/abs/2601.15771
|
Academic Papers
|
svg
|
13b5ad1612d5fdbd80012b8c8a6aebb4be2ee4f49d8a7954d5a2ded17347347a
|
2026-01-23T00:00:00-05:00
|
LL-GaussianImage: Efficient Image Representation for Zero-shot Low-Light Enhancement with 2D Gaussian Splatting
|
arXiv:2601.15772v1 Announce Type: new Abstract: 2D Gaussian Splatting (2DGS) is an emerging explicit scene representation method with significant potential for image compression due to high fidelity and high compression ratios. However, existing low-light enhancement algorithms operate predominantly within the pixel domain. Processing 2DGS-compressed images necessitates a cumbersome decompression-enhancement-recompression pipeline, which compromises efficiency and introduces secondary degradation. To address these limitations, we propose LL-GaussianImage, the first zero-shot unsupervised framework designed for low-light enhancement directly within the 2DGS compressed representation domain. Three primary advantages are offered by this framework. First, a semantic-guided Mixture-of-Experts enhancement framework is designed. Dynamic adaptive transformations are applied to the sparse attribute space of 2DGS using rendered images as guidance to enable compression-as-enhancement without full decompression to a pixel grid. Second, a multi-objective collaborative loss function system is established to strictly constrain smoothness and fidelity during enhancement, suppressing artifacts while improving visual quality. Third, a two-stage optimization process is utilized to achieve reconstruction-as-enhancement. The accuracy of the base representation is ensured through single-scale reconstruction and network robustness is enhanced. High-quality enhancement of low-light images is achieved while high compression ratios are maintained. The feasibility and superiority of the paradigm for direct processing within the compressed representation domain are validated through experimental results.
|
https://arxiv.org/abs/2601.15772
|
Academic Papers
|
svg
|
1556a7ac1ef54909b3765bee46aef0563b139bf9a7ab90c822be531e63f9bbc1
|
2026-01-23T00:00:00-05:00
|
Next Generation Active Learning: Mixture of LLMs in the Loop
|
arXiv:2601.15773v1 Announce Type: new Abstract: With the rapid advancement and strong generalization capabilities of large language models (LLMs), they have been increasingly incorporated into active learning pipelines as annotators to reduce annotation costs. However, in terms of annotation quality, labels generated by LLMs often fall short of real-world applicability. To address this, we propose a novel active learning framework, Mixture of LLMs in the Loop Active Learning, which replaces human annotators with labels generated by a Mixture-of-LLMs-based annotation model, aiming to enhance LLM-based annotation robustness by aggregating the strengths of multiple LLMs. To further mitigate the impact of noisy labels, we introduce annotation discrepancy and negative learning to identify unreliable annotations and enhance learning effectiveness. Extensive experiments demonstrate that our framework achieves performance comparable to human annotation and consistently outperforms single-LLM baselines and other LLM-ensemble-based approaches. Moreover, our framework is built on lightweight LLMs, enabling it to operate fully on local machines in real-world applications.
|
https://arxiv.org/abs/2601.15773
|
Academic Papers
|
svg
|
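The Mixture-of-LLMs abstract above aggregates labels from several LLM annotators and uses annotation discrepancy to single out unreliable labels. A minimal sketch of that aggregation step, assuming simple majority voting with a disagreement threshold (the paper's actual annotation model and discrepancy measure are learned, not a plain vote):

```python
from collections import Counter

def aggregate_annotations(votes, min_agreement=0.5):
    """Majority-vote a list of per-annotator labels; mark the result
    unreliable when the winning label's share is too low, so it can be
    routed to e.g. negative learning instead of standard training."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    reliable = n / len(votes) > min_agreement
    return label, reliable

label, reliable = aggregate_annotations(["pos", "pos", "neg"])        # clear majority
label2, reliable2 = aggregate_annotations(["pos", "neg", "neutral"])  # high discrepancy
```

The second call illustrates the discrepancy signal: when annotators disagree three ways, no label clears the agreement threshold and the sample is flagged rather than trusted.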
c4115373fe5f18f790c24c9d1c9196f161d436ef52902295c5f71599a2b46961
|
2026-01-23T00:00:00-05:00
|
FirmReBugger: A Benchmark Framework for Monolithic Firmware Fuzzers
|
arXiv:2601.15774v1 Announce Type: new Abstract: Monolithic firmware is widespread. Unsurprisingly, fuzz testing firmware is an active research field with new advances addressing the unique challenges in the domain. However, understanding and evaluating improvements by deriving metrics such as code coverage and unique crashes is problematic, leading to a desire for a reliable bug-based benchmark. To address this need, we design and build FirmReBugger, a holistic framework for fairly assessing monolithic firmware fuzzers with a realistic, diverse, bug-based benchmark. FirmReBugger proposes using bug oracles--C-syntax expressions of bug descriptors--with an interpreter to automate analysis and accurately report on bugs discovered, discriminating between the detected, triggered, reached, and not-reached states. Importantly, our benchmarking approach does not modify the target binary and simply replays fuzzing seeds to isolate the benchmark implementation from the fuzzer, while providing a simple means of extension with new bug oracles. Further, analyzing fuzzing roadblocks, we created FirmBench, a set of diverse, real-world binary targets with 313 software bug oracles. Incorporating our analysis of the roadblocks challenging monolithic firmware fuzzing, the bench provides for rapid evaluation of future advances. We implement FirmReBugger as a FuzzBench-for-Firmware-style service and use FirmBench to evaluate nine state-of-the-art monolithic firmware fuzzers in the style of a reproducibility study, using a 10 CPU-year effort, to report our findings.
|
https://arxiv.org/abs/2601.15774
|
Academic Papers
|
svg
|
5e882293d280e63e4cd158a35d6e0d2f97e5dc08bd36b34f3abf4db87f92e160
|
2026-01-23T00:00:00-05:00
|
Glove2UAV: A Wearable IMU-Based Glove for Intuitive Control of UAV
|
arXiv:2601.15775v1 Announce Type: new Abstract: This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimates are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibrotactile warning cues.
|
https://arxiv.org/abs/2601.15775
|
Academic Papers
|
svg
|
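The Glove2UAV pipeline above combines median-based outlier suppression with Madgwick orientation estimation. A minimal sketch of the median step on a 1-D IMU stream is below (the window size is an assumption, and the Madgwick filter itself is omitted):

```python
def median_filter(samples, window=3):
    """Replace each sample with the median of a centered window,
    suppressing single-sample spikes typical of raw IMU data while
    leaving smooth motion largely untouched."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sorted(samples[lo:hi])[(hi - lo) // 2])
    return out

# One spurious gyro spike (9.9) amid slow motion:
gyro = [0.1, 0.1, 9.9, 0.1, 0.2]
```

Applied to `gyro`, the spike is absorbed by its neighbors before the orientation filter ever sees it, which is why such a stage typically precedes Madgwick-style fusion.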
a6149878fc62c74e0618bd37146557069f7537e99a81663f8b90be774791bfba
|
2026-01-23T00:00:00-05:00
|
UXCascade: Scalable Usability Testing with Simulated User Agents
|
arXiv:2601.15777v1 Announce Type: new Abstract: Simulated user agents are increasingly used in usability testing to support fast, iterative UX workflows, as they generate rich data such as action logs and think-aloud reasoning, but the unstructured nature of this output often obscures actionable insights. We present UXCascade, an interactive tool for extracting, aggregating, and presenting agent-generated usability feedback at scale. Our core contribution is a multi-level analysis workflow that (1) highlights patterns across persona traits, goals, and outcomes, (2) links agent reasoning to specific issues, and (3) supports actionable design improvements. UXCascade operationalizes this approach by listing agent goals, traits, and issues in a structured overview. Practitioners can explore detailed reasoning traces and annotated views, propose interface edits, and assess their impact across personas. This enables a top-down, exploration-driven analysis from patterns to concrete UX interventions. A user study with eight UX professionals demonstrates that UXCascade integrates into existing workflows, enabling iterative feedback during early-stage interface development.
|
https://arxiv.org/abs/2601.15777
|
Academic Papers
|
svg
|
d5832bae371bae527b31c892df5ec33ce4788e3efae030a7f07ea2bc5cd45461
|
2026-01-23T00:00:00-05:00
|
Agentic Confidence Calibration
|
arXiv:2601.15778v1 Announce Type: new Abstract: AI agents are rapidly advancing from passive language models to autonomous systems executing complex, multi-step tasks. Yet their overconfidence in failure remains a fundamental barrier to deployment in high-stakes settings. Existing calibration methods, built for static single-turn outputs, cannot address the unique challenges of agentic systems, such as compounding errors along trajectories, uncertainty from external tools, and opaque failure modes. To address these challenges, we introduce, for the first time, the problem of Agentic Confidence Calibration and propose Holistic Trajectory Calibration (HTC), a novel diagnostic framework that extracts rich process-level features ranging from macro dynamics to micro stability across an agent's entire trajectory. Powered by a simple, interpretable model, HTC consistently surpasses strong baselines in both calibration and discrimination, across eight benchmarks, multiple LLMs, and diverse agent frameworks. Beyond performance, HTC delivers three essential advances: it provides interpretability by revealing the signals behind failure, enables transferability by applying across domains without retraining, and achieves generalization through a General Agent Calibrator (GAC) that achieves the best calibration (lowest ECE) on the out-of-domain GAIA benchmark. Together, these contributions establish a new process-centric paradigm for confidence calibration, providing a framework for diagnosing and enhancing the reliability of AI agents.
|
https://arxiv.org/abs/2601.15778
|
Academic Papers
|
svg
|
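The agentic-calibration abstract above reports calibration via ECE (expected calibration error). A minimal sketch of the standard binned ECE estimator, for reference (the bin count and the toy data are assumptions, not the paper's setup):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the per-bin gap
    between mean confidence and empirical accuracy, weighted by bin
    size -- the standard ECE estimator."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece, n = 0.0, len(confidences)
    for b in bins:
        if b:
            mean_conf = sum(c for c, _ in b) / len(b)
            acc = sum(1 for _, ok in b if ok) / len(b)
            ece += len(b) / n * abs(mean_conf - acc)
    return ece

# Perfectly calibrated toy case: 80% confidence, 80% empirical accuracy.
confs = [0.8] * 5
hits = [True, True, True, True, False]
```

On the toy case the gap in the 0.8 bin is zero, so ECE is zero; an agent that claims 90% confidence while succeeding 25% of the time would instead score ECE of about 0.65, the kind of overconfidence-in-failure the paper targets.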
a275c4a4e2bed08d4a292a98c6501d3c92991f434a7a1e17266dbeb2587408a4
|
2026-01-23T00:00:00-05:00
|
Diffusion Model-Based Data Augmentation for Enhanced Neuron Segmentation
|
arXiv:2601.15779v1 Announce Type: new Abstract: Neuron segmentation in electron microscopy (EM) aims to reconstruct the complete neuronal connectome; however, current deep learning-based methods are limited by their reliance on large-scale training data and extensive, time-consuming manual annotations. Traditional methods augment the training set through geometric and photometric transformations; however, the generated samples remain highly correlated with the original images and lack structural diversity. To address this limitation, we propose a diffusion-based data augmentation framework capable of generating diverse and structurally plausible image-label pairs for neuron segmentation. Specifically, the framework employs a resolution-aware conditional diffusion model with multi-scale conditioning and EM resolution priors to enable voxel-level image synthesis from 3D masks. It further incorporates a biology-guided mask remodeling module that produces augmented masks with enhanced structural realism. Together, these components effectively enrich the training set and improve segmentation performance. On the AC3 and AC4 datasets under low-annotation regimes, our method improves the ARAND metric by 32.1% and 30.7%, respectively, when combined with two different post-processing methods. Our code is available at https://github.com/HeadLiuYun/NeuroDiff.
|
https://arxiv.org/abs/2601.15779
|
Academic Papers
|
svg
|
d3b1b3355f5e7564c4e2067d8dcbb2bfb4900d72cbef8b042081b76f2a5791a8
|
2026-01-23T00:00:00-05:00
|
Assessing Situational and Spatial Awareness of VLMs with Synthetically Generated Video
|
arXiv:2601.15780v1 Announce Type: new Abstract: Spatial reasoning in vision language models (VLMs) remains fragile when semantics hinge on subtle temporal or geometric cues. We introduce a synthetic benchmark that probes two complementary skills: situational awareness (recognizing whether an interaction is harmful or benign) and spatial awareness (tracking who does what to whom, and reasoning about relative positions and motion). Through minimal video pairs, we test three challenges: distinguishing violence from benign activity, binding assailant roles across viewpoints, and judging fine-grained trajectory alignment. While we evaluate recent VLMs in a training-free setting, the benchmark is applicable to any video classification model. Results show performance only slightly above chance across tasks. A simple aid, stable color cues, partly reduces assailant role confusions but does not resolve the underlying weakness. By releasing data and code, we aim to provide reproducible diagnostics and seed exploration of lightweight spatial priors to complement large-scale pretraining.
|
https://arxiv.org/abs/2601.15780
|
Academic Papers
|
svg
|
b6d78ed170839614d443a001c0e628ba7440e444ad0a6094fea06b4df219466e
|
2026-01-23T00:00:00-05:00
|
Endowing Molecular Language with Geometry Perception via Modality Compensation for High-Throughput Quantum Hamiltonian Prediction
|
arXiv:2601.15786v1 Announce Type: new Abstract: The quantum Hamiltonian is a fundamental property that governs a molecule's electronic structure and behavior, and its calculation and prediction are paramount in computational chemistry and materials science. Accurate prediction is highly reliant on extensive training data, including precise molecular geometries and the Hamiltonian matrices, which are expensive to acquire via either experimental or computational methods. Towards a fast yet accurate method for Hamiltonian prediction, we first introduce a geometry information-aware molecular language model to bypass the use of expensive molecular geometries by only using the readily available molecular language -- simplified molecular input line entry system (SMILES). Our method employs multimodal alignment to bridge the relationship between SMILES strings and their corresponding molecular geometries. Recognizing that the molecular language inherently lacks explicit geometric information, we propose a geometry modality compensation strategy to imbue molecular language representations with essential geometric features, thereby enabling accurate predictions using SMILES. In addition, given the high cost of acquiring Hamiltonian data, we devise a weakly supervised strategy to fine-tune the molecular language model, thus improving the data efficiency. Theoretically, we prove that the prediction generalization error without explicit molecular geometry can be bounded through our modality compensation scheme. Empirically, our method achieves superior computational efficiency, providing up to 100x speedup over conventional quantum mechanical methods while maintaining comparable prediction accuracy. We further demonstrate the practical case study of our approach in the screening of electrolyte formulations.
|
https://arxiv.org/abs/2601.15786
|
Academic Papers
|
svg
|
ed0caaa991e756b9cdbad8f57d4e5946be80716d2daa25c7cf9513f3ded073dc
|
2026-01-23T00:00:00-05:00
|
Efficient Numerical Reconstruction of Wave Equation Sources via Droplet-Induced Asymptotics
|
arXiv:2601.15787v1 Announce Type: new Abstract: In this paper, we develop and numerically implement a novel approach for solving the inverse source problem of the acoustic wave equation in three dimensions. By injecting a small high-contrast droplet into the medium, we exploit the resulting wave field perturbation measured at a single external point over time. The method enables stable source reconstructions where conventional approaches fail due to ill-posedness, with potential applications in medical imaging and non-destructive testing. Key contributions include: 1. Implementation of a theoretically justified asymptotic expansion, from [33], using the eigensystem of the Newtonian operator, with error analysis for the spectral truncation. 2. Novel numerical schemes for solving the time-domain Lippmann-Schwinger equation and reconstructing the source via Riesz basis expansions and mollification-based numerical differentiations. 3. Reconstruction requiring only single-point measurements, overcoming traditional spatial data limitations. 4. 3D numerical experiments demonstrating accurate source recovery under noise (SNR of the order $1/a$), with error analysis for the droplet size (of the order $a$) and the number of spectral modes $N$.
|
https://arxiv.org/abs/2601.15787
|
Academic Papers
|
svg
|
9c06157ab06848c4fa4a42baa042472f748cb65c4c9f737af337964ab2daac3d
|
2026-01-23T00:00:00-05:00
|
HumanLLM: Towards Personalized Understanding and Simulation of Human Nature
|
arXiv:2601.15793v1 Announce Type: new Abstract: Motivated by the remarkable progress of large language models (LLMs) in objective tasks like mathematics and coding, there is growing interest in their potential to simulate human behavior--a capability with profound implications for transforming social science research and customer-centric business insights. However, LLMs often lack a nuanced understanding of human cognition and behavior, limiting their effectiveness in social simulation and personalized applications. We posit that this limitation stems from a fundamental misalignment: standard LLM pretraining on vast, uncontextualized web data does not capture the continuous, situated context of an individual's decisions, thoughts, and behaviors over time. To bridge this gap, we introduce HumanLLM, a foundation model designed for personalized understanding and simulation of individuals. We first construct the Cognitive Genome Dataset, a large-scale corpus curated from real-world user data on platforms like Reddit, Twitter, Blogger, and Amazon. Through a rigorous, multi-stage pipeline involving data filtering, synthesis, and quality control, we automatically extract over 5.5 million user logs to distill rich profiles, behaviors, and thinking patterns. We then formulate diverse learning tasks and perform supervised fine-tuning to empower the model to predict a wide range of individualized human behaviors, thoughts, and experiences. Comprehensive evaluations demonstrate that HumanLLM achieves superior performance in predicting user actions and inner thoughts, more accurately mimics user writing styles and preferences, and generates more authentic user profiles compared to base models. Furthermore, HumanLLM shows significant gains on out-of-domain social intelligence benchmarks, indicating enhanced generalization.
|
https://arxiv.org/abs/2601.15793
|
Academic Papers
|
svg
|
ae8f889529a1ca4ddd9655538c9ee61fe80357e1c69b8fa697dcb601e5fc8687
|
2026-01-23T00:00:00-05:00
|
Creativity in the Age of AI: Rethinking the Role of Intentional Agency
|
arXiv:2601.15797v1 Announce Type: new Abstract: Many theorists of creativity maintain that intentional agency is a necessary condition of creativity. We argue that this requirement, which we call the Intentional Agency Condition (IAC), should be rejected as a general condition of creativity, while retaining its relevance in specific contexts. We show that recent advances in generative AI have rendered the IAC increasingly problematic, both descriptively and functionally. We offer two reasons for abandoning it at the general level. First, we present corpus evidence indicating that authors and journalists are increasingly comfortable ascribing creativity to generative AI, despite its lack of intentional agency. This development places pressure on the linguistic intuitions that have traditionally been taken to support the IAC. Second, drawing on the method of conceptual engineering, we argue that the IAC no longer fulfils its core social function. Rather than facilitating the identification and encouragement of reliable sources of novel and valuable products, it now feeds into biases that distort our assessments of AI-generated outputs. We therefore propose replacing the IAC with a consistency requirement, according to which creativity tracks the reliable generation of novel and valuable products. Nonetheless, we explain why the IAC should be retained in specific local domains.
|
https://arxiv.org/abs/2601.15797
|
Academic Papers
|
svg
|
372689a806161decfcfe15bc4bb9c06e0752d8393581d235133bb217bb4d3ce2
|
2026-01-23T00:00:00-05:00
|
VitalDiagnosis: AI-Driven Ecosystem for 24/7 Vital Monitoring and Chronic Disease Management
|
arXiv:2601.15798v1 Announce Type: new Abstract: Chronic diseases have become the leading cause of death worldwide, a challenge intensified by strained medical resources and an aging population. Individually, patients often struggle to interpret early signs of deterioration or maintain adherence to care plans. In this paper, we introduce VitalDiagnosis, an LLM-driven ecosystem designed to shift chronic disease management from passive monitoring to proactive, interactive engagement. By integrating continuous data from wearable devices with the reasoning capabilities of LLMs, the system addresses both acute health anomalies and routine adherence. It analyzes triggers through context-aware inquiries, produces provisional insights within a collaborative patient-clinician workflow, and offers personalized guidance. This approach aims to promote a more proactive and cooperative care paradigm, with the potential to enhance patient self-management and reduce avoidable clinical workload.
|
https://arxiv.org/abs/2601.15798
|
Academic Papers
|
svg
|
a57f9f22423cb46dd2bc12e07e23da5910ac7dacf6f953bb3c0392f5e408bcac
|
2026-01-23T00:00:00-05:00
|
Attributing and Exploiting Safety Vectors through Global Optimization in Large Language Models
|
arXiv:2601.15801v1 Announce Type: new Abstract: While Large Language Models (LLMs) are aligned to mitigate risks, their safety guardrails remain fragile against jailbreak attacks. This reveals limited understanding of components governing safety. Existing methods rely on local, greedy attribution that assumes independent component contributions. However, they overlook the cooperative interactions between different components in LLMs, such as attention heads, which jointly contribute to safety mechanisms. We propose Global Optimization for Safety Vector Extraction (GOSV), a framework that identifies safety-critical attention heads through global optimization over all heads simultaneously. We employ two complementary activation repatching strategies: Harmful Patching and Zero Ablation. These strategies identify two spatially distinct sets of safety vectors with consistently low overlap, termed Malicious Injection Vectors and Safety Suppression Vectors, demonstrating that aligned LLMs maintain separate functional pathways for safety purposes. Through systematic analyses, we find that complete safety breakdown occurs when approximately 30% of total heads are repatched across all models. Building on these insights, we develop a novel inference-time white-box jailbreak method that exploits the identified safety vectors through activation repatching. Our attack substantially outperforms existing white-box attacks across all test models, providing strong evidence for the effectiveness of the proposed GOSV framework on LLM safety interpretability.
|
https://arxiv.org/abs/2601.15801
|
Academic Papers
|
svg
|
f614f6de16d1f8e4dc4a6f9b209e0dc377f777b07b85febccb207d69be4681bb
|
2026-01-23T00:00:00-05:00
|
A Beacon Based Solution for Autonomous UUVs GNSS-Denied Stealthy Navigation
|
arXiv:2601.15802v1 Announce Type: new Abstract: Autonomous Unmanned Underwater Vehicles (UUVs) enable military and civilian covert operations in coastal areas without relying on support vessels or Global Navigation Satellite Systems (GNSS). Such operations are critical when surface access is not possible and stealthy navigation is required in restricted environments such as protected zones or dangerous areas under access bans. GNSS-denied navigation is then essential to maintaining concealment, as surfacing could expose UUVs to detection. To ensure precise fleet positioning, a constellation of beacons deployed by aerial or surface drones establishes a synthetic landmark network that guides the fleet of UUVs along an optimized path from the continental shelf to the goal on the shore. These beacons, either submerged or floating, emit acoustic signals for UUV localisation and navigation. A hierarchical planner generates an adaptive route for the drones executing primitive actions while continuously monitoring and replanning as needed to maintain trajectory accuracy.
|
https://arxiv.org/abs/2601.15802
|
Academic Papers
|
svg
|
edfc7193fe25cbbd516ffdec6aee3400d7c93acf9ccbf99bd09a3ef2d76a56db
|
2026-01-23T00:00:00-05:00
|
Entangled Life and Code: A Computational Design Taxonomy for Synergistic Bio-Digital Systems
|
arXiv:2601.15804v1 Announce Type: new Abstract: Bio-digital systems that merge microbial life with technology promise new modes of computation, combining biological adaptability with digital precision. Yet realizing this potential symbiotically -- where biological and digital agents co-adapt and co-process -- remains elusive, largely due to the absence of a shared vocabulary bridging biology and computing. Consequently, microbes are often constrained to uni-directional roles, functioning as sensors or actuators rather than as active, computational partners in bio-digital systems. In response, we propose a taxonomy and pathways that articulate and expand the roles of biological and digital entities for synergetic bio-digital computation. Using this taxonomy, we analysed 70 systems across HCI, design, and engineering, identifying how biological mechanisms can be mapped onto computational abstractions. We argue that such mappings enable computationally actionable directions that foster richer and reciprocal relationships in bio-digital systems, supporting regenerative ecologies across time and scale while inspiring new paradigms for computation in HCI.
|
https://arxiv.org/abs/2601.15804
|
Academic Papers
|
svg
|
9176619881da157ed99c54ebd3254e21c1ce7e5082cec7abe7a6a8b135f6fff6
|
2026-01-23T00:00:00-05:00
|
Inference-Time Scaling of Verification: Self-Evolving Deep Research Agents via Test-Time Rubric-Guided Verification
|
arXiv:2601.15808v1 Announce Type: new Abstract: Recent advances in Deep Research Agents (DRAs) are transforming automated knowledge discovery and problem-solving. While the majority of existing efforts focus on enhancing policy capabilities via post-training, we propose an alternative paradigm: self-evolving the agent's ability by iteratively verifying the policy model's outputs, guided by meticulously crafted rubrics. This approach gives rise to the inference-time scaling of verification, wherein an agent self-improves by evaluating its generated answers to produce iterative feedback and refinements. We derive the rubrics based on an automatically constructed DRA Failure Taxonomy, which systematically classifies agent failures into five major categories and thirteen sub-categories. We present DeepVerifier, a rubrics-based outcome reward verifier that leverages the asymmetry of verification and outperforms vanilla agent-as-judge and LLM judge baselines by 12%-48% in meta-evaluation F1 score. To enable practical self-evolution, DeepVerifier integrates as a plug-and-play module during test-time inference. The verifier produces detailed rubric-based feedback, which is fed back to the agent for iterative bootstrapping, refining responses without additional training. This test-time scaling delivers 8%-11% accuracy gains on challenging subsets of GAIA and XBench-DeepResearch when powered by capable closed-source LLMs. Finally, to support open-source advancement, we release DeepVerifier-4K, a curated supervised fine-tuning dataset of 4,646 high-quality agent steps focused on DRA verification. These examples emphasize reflection and self-critique, enabling open models to develop robust verification capabilities.
|
https://arxiv.org/abs/2601.15808
|
Academic Papers
|
svg
|
6a51d966a77e7a372acb9508f07c650ea0ec195141f11103452f4b2558f5aabc
|
2026-01-23T00:00:00-05:00
|
SteerEval: Inference-time Interventions Strengthen Multilingual Generalization in Neural Summarization Metrics
|
arXiv:2601.15809v1 Announce Type: new Abstract: An increasing body of work has leveraged multilingual language models for Natural Language Generation tasks such as summarization. A major empirical bottleneck in this area is the shortage of accurate and robust evaluation metrics for many languages, which hinders progress. Recent studies suggest that multilingual language models often use English as an internal pivot language, and that misalignment with this pivot can lead to degraded downstream performance. Motivated by the hypothesis that this mismatch could also apply to multilingual neural metrics, we ask whether steering their activations toward an English pivot can improve correlation with human judgments. We experiment with encoder- and decoder-based metrics and find that test-time intervention methods are effective across the board, increasing metric effectiveness for diverse languages.
|
https://arxiv.org/abs/2601.15809
|
Academic Papers
|
svg
|
c350b7342f433fcf0e29cc7810ff63304634914ca4cb51946229b34c8e4f5b4a
|
2026-01-23T00:00:00-05:00
|
A Mobile Application for Flower Recognition System Based on Convolutional Neural Networks
|
arXiv:2601.15810v1 Announce Type: new Abstract: A convolutional neural network (CNN) is a deep learning algorithm that has been specifically designed for computer vision applications. The CNNs proved successful in handling the increasing amount of data in many computer vision problems, where classical machine learning algorithms were insufficient. Flowers have many uses in our daily lives, from decorating to making medicines to detoxifying the environment. Identifying flower types requires expert knowledge. However, accessing experts at any time and in any location may not always be feasible. In this study, a mobile application based on CNNs was developed to recognize different types of flowers to provide non-specialists with quick and easy access to information about flower types. The study employed three distinct CNN models, namely MobileNet, DenseNet121, and Xception, to determine the most suitable model for the mobile application. The classification performances of the models were evaluated by training them with seven different optimization algorithms. The DenseNet121 architecture, which uses the stochastic gradient descent (SGD) optimization algorithm, was the most successful, achieving 95.84% accuracy and 96.00% precision, recall, and F1-score. This result shows that CNNs can be used for flower classification in mobile applications.
|
https://arxiv.org/abs/2601.15810
|
Academic Papers
|
svg
|
3a188347fef5c9ec802a3719e83a5de9ab0fef5505fa1a9e624cbb7778d0b010
|
2026-01-23T00:00:00-05:00
|
Contractions of quasi relation algebras and applications to representability
|
arXiv:2601.15811v1 Announce Type: new Abstract: Quasi relation algebras (qRAs) were first described by Galatos and Jipsen in 2013. They are generalisations of relation algebras and can also be viewed as certain residuated lattice expansions. We identify positive symmetric idempotent elements in qRAs and show that they can be used to construct new qRAs, so-called contractions of the original algebra. We then show that the contraction of a distributive qRA will be representable when the original algebra is representable. Further, we identify a class of distributive qRAs that are not finitely representable.
|
https://arxiv.org/abs/2601.15811
|
Academic Papers
|
svg
|
b1868937603bc59a9db2a08807b48a7d57afdb2522302713113b056ec78bac54
|
2026-01-23T00:00:00-05:00
|
ErrorMap and ErrorAtlas: Charting the Failure Landscape of Large Language Models
|
arXiv:2601.15812v1 Announce Type: new Abstract: Large Language Model (LLM) benchmarks tell us when models fail, but not why they fail. A wrong answer on a reasoning dataset may stem from formatting issues, calculation errors, or dataset noise rather than weak reasoning. Without disentangling such causes, benchmarks remain incomplete and cannot reliably guide model improvement. We introduce ErrorMap, the first method to chart the sources of LLM failure. It extracts a model's unique "failure signature", clarifies what benchmarks measure, and broadens error identification to reduce blind spots. This helps developers debug models, aligns benchmark goals with outcomes, and supports informed model selection. ErrorMap works on any model or dataset with the same logic. Applying our method to 35 datasets and 83 models we generate ErrorAtlas, a taxonomy of model errors, revealing recurring failure patterns. ErrorAtlas highlights error types that are currently underexplored in LLM research, such as omissions of required details in the output and question misinterpretation. By shifting focus from where models succeed to why they fail, ErrorMap and ErrorAtlas enable advanced evaluation - one that exposes hidden weaknesses and directs progress. Unlike success, typically measured by task-level metrics, our approach introduces a deeper evaluation layer that can be applied globally across models and tasks, offering richer insights into model behavior and limitations. We make the taxonomy and code publicly available with plans to periodically update ErrorAtlas as new benchmarks and models emerge.
|
https://arxiv.org/abs/2601.15812
|
Academic Papers
|
svg
|
77021fa0fa48d0e5cbccf52e45949cfba3f5a3b21b99dab44a303238381224ce
|
2026-01-23T00:00:00-05:00
|
Beyond Off-the-Shelf Models: A Lightweight and Accessible Machine Learning Pipeline for Ecologists Working with Image Data
|
arXiv:2601.15813v1 Announce Type: new Abstract: We introduce a lightweight experimentation pipeline designed to lower the barrier for applying machine learning (ML) methods for classifying images in ecological research. We enable ecologists to experiment with ML models independently, so that they can move beyond off-the-shelf models and generate insights tailored to local datasets and specific classification tasks and target variables. Our tool combines a simple command-line interface for preprocessing, training, and evaluation with a graphical interface for annotation, error analysis, and model comparison. This design enables ecologists to build and iterate on compact, task-specific classifiers without requiring advanced ML expertise. As a proof of concept, we apply the pipeline to classify red deer (Cervus elaphus) by age and sex from 3392 camera trap images collected in the Veldenstein Forest, Germany. Using 4352 cropped images containing individual deer labeled by experts, we trained and evaluated multiple backbone architectures with a wide variety of parameters and data augmentation strategies. Our best-performing models achieved 90.77% accuracy for age classification and 96.15% for sex classification. These results demonstrate that reliable demographic classification is feasible even with limited data to address narrow, well-defined ecological questions. More broadly, the framework provides ecologists with an accessible tool for developing ML models tailored to specific research questions, paving the way for broader adoption of ML in wildlife monitoring and demographic analysis.
|
https://arxiv.org/abs/2601.15813
|
Academic Papers
|
svg
|
42099162248231eb7e88db5baf85f9b1e5b0dba0a0942fa60d4ef1174b570d3d
|
2026-01-23T00:00:00-05:00
|
Improved Approximation Ratios for the Shortest Common Superstring Problem with Reverse Complements
|
arXiv:2601.15814v1 Announce Type: new Abstract: The Shortest Common Superstring (SCS) problem asks for the shortest string that contains each of a given set of strings as a substring. Its reverse-complement variant, the Shortest Common Superstring problem with Reverse Complements (SCS-RC), naturally arises in bioinformatics applications, where for each input string, either the string itself or its reverse complement must appear as a substring of the superstring. The well-known MGREEDY algorithm for the standard SCS constructs a superstring by first computing an optimal cycle cover on the overlap graph and then concatenating the strings corresponding to the cycles, while its refined variant, TGREEDY, further improves the approximation ratio. Although the original 4- and 3-approximation bounds of these algorithms have been successively improved for the standard SCS, no such progress has been made for the reverse-complement setting. A previous study extended MGREEDY to SCS-RC with a 4-approximation guarantee and briefly suggested that extending TGREEDY to the reverse-complement setting could achieve a 3-approximation. In this work, we strengthen these results by proving that the extensions of MGREEDY and TGREEDY to the reverse-complement setting achieve 3.75- and 2.875-approximation ratios, respectively. Our analysis extends the classical proofs for the standard SCS to handle the bidirectional overlaps introduced by reverse complements. These results provide the first formal improvement of approximation guarantees for SCS-RC, with the 2.875-approximate algorithm currently representing the best known bound for this problem.
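For intuition, the classic greedy merge that MGREEDY and TGREEDY refine can be sketched as follows. This is a minimal illustration of plain greedy superstring construction, not the paper's cycle-cover-based algorithms or the reverse-complement variant; the helper names `overlap` and `greedy_superstring` are assumptions for this sketch.

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings: list[str]) -> str:
    """Repeatedly merge the pair with maximum overlap until one string remains."""
    # Drop strings that are substrings of others; they add nothing.
    strings = [s for s in strings
               if not any(s != t and s in t for t in strings)]
    while len(strings) > 1:
        best = (-1, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]
```

On inputs like ["abc", "bcd", "cde"] this merges maximal overlaps of length 2 to yield "abcde"; the approximation analysis in the paper concerns how far such greedy choices (and their cycle-cover refinements) can stray from the optimum when overlaps are bidirectional.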
|
https://arxiv.org/abs/2601.15814
|
Academic Papers
|
svg
|
28ebe10cb35256e61df8c9dc35996236a5516cd040603923afb814b5e7dd1b52
|
2026-01-23T00:00:00-05:00
|
Virtual Traffic Police: Large Language Model-Augmented Traffic Signal Control for Unforeseen Incidents
|
arXiv:2601.15816v1 Announce Type: new Abstract: Adaptive traffic signal control (TSC) has demonstrated strong effectiveness in managing dynamic traffic flows. However, conventional methods often struggle when unforeseen traffic incidents occur (e.g., accidents and road maintenance), which typically require labor-intensive and inefficient manual interventions by traffic police officers. Large Language Models (LLMs) appear to be a promising solution thanks to their remarkable reasoning and generalization capabilities. Nevertheless, existing works often propose to replace existing TSC systems with LLM-based systems, which can be (i) unreliable due to the inherent hallucinations of LLMs and (ii) costly due to the need for system replacement. To address the issues of existing works, we propose a hierarchical framework that augments existing TSC systems with LLMs, whereby a virtual traffic police agent at the upper level dynamically fine-tunes selected parameters of signal controllers at the lower level in response to real-time traffic incidents. To enhance domain-specific reliability in response to unforeseen traffic incidents, we devise a self-refined traffic language retrieval system (TLRS), whereby retrieval-augmented generation is employed to draw knowledge from a tailored traffic language database that encompasses traffic conditions and controller operation principles. Moreover, we devise an LLM-based verifier to update the TLRS continuously over the reasoning process. Our results show that LLMs can serve as trustworthy virtual traffic police officers that can adapt conventional TSC methods to unforeseen traffic incidents with significantly improved operational efficiency and reliability.
|
https://arxiv.org/abs/2601.15816
|
Academic Papers
|
svg
|
37f012ef00754f59ebb62f04781da1fdb77d1352447bd8895690937efe72e0c8
|
2026-01-23T00:00:00-05:00
|
ExDR: Explanation-driven Dynamic Retrieval Enhancement for Multimodal Fake News Detection
|
arXiv:2601.15820v1 Announce Type: new Abstract: The rapid spread of multimodal fake news poses a serious societal threat, as its evolving nature and reliance on timely factual details challenge existing detection methods. Dynamic Retrieval-Augmented Generation provides a promising solution by triggering keyword-based retrieval and incorporating external knowledge, thus enabling both efficient and accurate evidence selection. However, it still faces challenges in addressing issues such as redundant retrieval, coarse similarity, and irrelevant evidence when applied to deceptive content. In this paper, we propose ExDR, an Explanation-driven Dynamic Retrieval-Augmented Generation framework for Multimodal Fake News Detection. Our framework systematically leverages model-generated explanations in both the retrieval triggering and evidence retrieval modules. It assesses triggering confidence from three complementary dimensions, constructs entity-aware indices by fusing deceptive entities, and retrieves contrastive evidence based on deception-specific features to challenge the initial claim and enhance the final prediction. Experiments on two benchmark datasets, AMG and MR2, demonstrate that ExDR consistently outperforms previous methods in retrieval triggering accuracy, retrieval quality, and overall detection performance, highlighting its effectiveness and generalization capability.
|
https://arxiv.org/abs/2601.15820
|
Academic Papers
|
svg
|
0f31823c2cf92ff4b1cc877f8e0da3cdc1c10aa88758979c099397275a80037c
|
2026-01-23T00:00:00-05:00
|
Introducing the Generative Application Firewall (GAF)
|
arXiv:2601.15824v1 Announce Type: new Abstract: This paper introduces the Generative Application Firewall (GAF), a new architectural layer for securing LLM applications. Existing defenses -- prompt filters, guardrails, and data-masking -- remain fragmented; GAF unifies them into a single enforcement point, much like a WAF coordinates defenses for web traffic, while also covering autonomous agents and their tool interactions.
|
https://arxiv.org/abs/2601.15824
|
Academic Papers
|
svg
|
d3ba0bcd8e8458d499821da2103ce732e43307155a6cb0b8a0008ba3bd4e927c
|
2026-01-23T00:00:00-05:00
|
Can professional translators identify machine-generated text?
|
arXiv:2601.15828v1 Announce Type: new Abstract: This study investigates whether professional translators can reliably identify short stories generated in Italian by artificial intelligence (AI) without prior specialized training. Sixty-nine translators took part in an in-person experiment, where they assessed three anonymized short stories - two written by ChatGPT-4o and one by a human author. For each story, participants rated the likelihood of AI authorship and provided justifications for their choices. While average results were inconclusive, a statistically significant subset (16.2%) successfully distinguished the synthetic texts from the human text, suggesting that their judgements were informed by analytical skill rather than chance. However, a nearly equal number misclassified the texts in the opposite direction, often relying on subjective impressions rather than objective markers, possibly reflecting a reader preference for AI-generated texts. Low burstiness and narrative contradiction emerged as the most reliable indicators of synthetic authorship, with unexpected calques, semantic loans and syntactic transfer from English also reported. In contrast, features such as grammatical accuracy and emotional tone frequently led to misclassification. These findings raise questions about the role and scope of synthetic-text editing in professional contexts.
|
https://arxiv.org/abs/2601.15828
|
Academic Papers
|
svg
|
54fa11e402dad71d6a8693a02ee627bdd6611a4c9ff8e97d96bc1c91c70dd6c8
|
2026-01-23T00:00:00-05:00
|
Towards Realistic Remote Sensing Dataset Distillation with Discriminative Prototype-guided Diffusion
|
arXiv:2601.15829v1 Announce Type: new Abstract: Recent years have witnessed the remarkable success of deep learning in remote sensing image interpretation, driven by the availability of large-scale benchmark datasets. However, this reliance on massive training data also brings two major challenges: (1) high storage and computational costs, and (2) the risk of data leakage, especially when sensitive categories are involved. To address these challenges, this study introduces the concept of dataset distillation into the field of remote sensing image interpretation for the first time. Specifically, we train a text-to-image diffusion model to condense a large-scale remote sensing dataset into a compact and representative distilled dataset. To improve the discriminative quality of the synthesized samples, we propose a classifier-driven guidance by injecting a classification consistency loss from a pre-trained model into the diffusion training process. Besides, considering the rich semantic complexity of remote sensing imagery, we further perform latent space clustering on training samples to select representative and diverse prototypes as visual style guidance, while using a visual language model to provide aggregated text descriptions. Experiments on three high-resolution remote sensing scene classification benchmarks show that the proposed method can distill realistic and diverse samples for downstream model training. Code and pre-trained models are available online (https://github.com/YonghaoXu/DPD).
|
https://arxiv.org/abs/2601.15829
|
Academic Papers
|
svg
|
ea1095ffe200f8372a34afc92b7372f3b778457d1c9bbeedcc047b60fa2579b5
|
2026-01-23T00:00:00-05:00
|
An IoT-Based Smart Plant Monitoring and Irrigation System with Real-Time Environmental Sensing, Automated Alerts, and Cloud Analytics
|
arXiv:2601.15830v1 Announce Type: new Abstract: The increasing global demand for sustainable agriculture necessitates intelligent monitoring systems that optimize resource utilization and plant health management. Traditional farming methods rely on manual observation and periodic watering, often leading to water wastage, inconsistent plant growth, and delayed response to environmental changes. This paper presents a comprehensive IoT-based smart plant monitoring system that integrates multiple environmental sensors with automated irrigation and cloud analytics. The proposed system utilizes an ESP32 microcontroller to collect real-time data from DHT22 (temperature/humidity), HC-SR04 (water level), and soil moisture sensors, with visual feedback through an OLED display and auditory alerts via a buzzer. All sensor data is wirelessly transmitted to the ThingSpeak cloud platform for remote monitoring, historical analysis, and automated alert generation. Experimental results demonstrate the system's effectiveness in maintaining optimal soil moisture levels (with 92% accuracy), providing real-time environmental monitoring, and reducing water consumption by approximately 40% compared to conventional irrigation methods. The integrated web dashboard offers comprehensive visualization of plant health parameters, making it suitable for both small-scale gardening and commercial agriculture applications. With a total implementation cost of $45.20, this system provides an affordable, scalable solution for precision agriculture and smart farming.
|
https://arxiv.org/abs/2601.15830
|
Academic Papers
|
svg
|
cf35ddef55aba133250ea2dae0ab5521b22843619bd941914ee1dfc71afecb5a
|
2026-01-23T00:00:00-05:00
|
RF Intelligence for Health: Classification of SmartBAN Signals in overcrowded ISM band
|
arXiv:2601.15836v1 Announce Type: new Abstract: Accurate classification of Radio-Frequency (RF) signals is essential for reliable wearable health-monitoring systems, providing awareness of the interference conditions in which medical protocols operate. In the overcrowded 2.4 GHz ISM band, however, identifying low-power transmissions from medical sensors is challenging due to strong co-channel interference and substantial power asymmetry with coexisting technologies. This work introduces the first open-source framework for automatic recognition of SmartBAN signals in Body Area Networks (BANs). The framework combines a synthetic dataset of simulated signals with real RF acquisitions obtained through Software-Defined Radios (SDRs), enabling both controlled and realistic evaluation. Deep convolutional neural networks based on ResNet encoders and U-Net decoders with attention mechanisms are trained and assessed across diverse propagation conditions. The proposed approach achieves over 90% accuracy on synthetic datasets and demonstrates consistent performance on real over-the-air spectrograms. By enabling reliable SmartBAN signal recognition in dense spectral environments, this framework supports interference-aware coexistence strategies and improves the dependability of wearable healthcare systems.
|
https://arxiv.org/abs/2601.15836
|
Academic Papers
|
svg
|
d00c3b3c76e60efc755d247045ff2d8e83a3c3065f6b77fbe58ebd2dc4cc4ed2
|
2026-01-23T00:00:00-05:00
|
TinySense: Effective CSI Compression for Scalable and Accurate Wi-Fi Sensing
|
arXiv:2601.15838v1 Announce Type: new Abstract: With the growing demand for device-free and privacy-preserving sensing solutions, Wi-Fi sensing has emerged as a promising approach for human pose estimation (HPE). However, existing methods often process vast amounts of channel state information (CSI) data directly, ultimately straining networking resources. This paper introduces TinySense, an efficient compression framework that enhances the scalability of Wi-Fi-based human sensing. Our approach is based on a new vector quantization-based generative adversarial network (VQGAN). Specifically, by leveraging a VQGAN-learned codebook, TinySense significantly reduces CSI data while maintaining the accuracy required for reliable HPE. To optimize compression, we employ the K-means algorithm to cluster a large-scale pre-trained codebook into smaller subsets, dynamically adjusting compression bitrates. Furthermore, a Transformer model is incorporated to mitigate bitrate loss, enhancing robustness in unreliable networking conditions. We prototype TinySense on an experimental testbed using Jetson Nano and Raspberry Pi to measure latency and network resource use. Extensive results demonstrate that TinySense significantly outperforms state-of-the-art compression schemes, achieving up to 1.5x higher HPE accuracy score (PCK20) under the same compression rate. It also reduces latency and networking overhead, respectively, by up to 5x and 2.5x. The code repository is available online.
|
https://arxiv.org/abs/2601.15838
|
Academic Papers
|
svg
|
ee40298fad499d9ac6ccf61ef94ff9448b17b943318b41ae8e8aa8a3f6257d19
|
2026-01-23T00:00:00-05:00
|
Determinants of Training Corpus Size for Clinical Text Classification
|
arXiv:2601.15846v1 Announce Type: new Abstract: Introduction: Clinical text classification using natural language processing (NLP) models requires adequate training data to achieve optimal performance. In practice, 200-500 documents are typically annotated; this number is constrained by time and cost rather than justified by sample-size requirements or their relationship to text vocabulary properties. Methods: Using the publicly available MIMIC-III dataset containing hospital discharge notes with ICD-9 diagnoses as labels, we employed pre-trained BERT embeddings followed by Random Forest classifiers to identify 10 randomly selected diagnoses, varying training corpus sizes from 100 to 10,000 documents, and analyzed vocabulary properties by identifying strong and noisy predictive words through Lasso logistic regression on bag-of-words embeddings. Results: Learning curves varied significantly across the 10 classification tasks despite identical preprocessing and algorithms, with 600 documents sufficient to achieve 95% of the performance attainable with 10,000 documents for all tasks. Vocabulary analysis revealed that more strong predictors and fewer noisy predictors were associated with steeper learning curves, where every 100 additional noisy words decreased accuracy by approximately 0.02 while 100 additional strong predictors increased maximum accuracy by approximately 0.04.
|
https://arxiv.org/abs/2601.15846
|
Academic Papers
|
svg
|
677b1f56431e853963fba89892cb6e16393a09a6aa4bfd62e917f2d8a3960a72
|
2026-01-23T00:00:00-05:00
|
CGPT: Cluster-Guided Partial Tables with LLM-Generated Supervision for Table Retrieval
|
arXiv:2601.15849v1 Announce Type: new Abstract: General-purpose embedding models have demonstrated strong performance in text retrieval but remain suboptimal for table retrieval, where highly structured content leads to semantic compression and query-table mismatch. Recent LLM-based retrieval augmentation methods mitigate this issue by generating synthetic queries, yet they often rely on heuristic partial-table selection and seldom leverage these synthetic queries as supervision to improve the embedding model. We introduce CGPT, a training framework that enhances table retrieval through LLM-generated supervision. CGPT constructs semantically diverse partial tables by clustering table instances using K-means and sampling across clusters to broaden semantic coverage. An LLM then generates synthetic queries for these partial tables, which are used in hard-negative contrastive fine-tuning to refine the embedding model. Experiments across four public benchmarks (MimoTable, OTTQA, FetaQA, and E2E-WTQ) show that CGPT consistently outperforms retrieval baselines, including QGpT, with an average R@1 improvement of 16.54 percent. In a unified multi-domain corpus setting, CGPT further demonstrates strong cross-domain generalization and remains effective even when using smaller LLMs for synthetic query generation. These results indicate that semantically guided partial-table construction, combined with contrastive training from LLM-generated supervision, provides an effective and scalable paradigm for large-scale table retrieval. Our code is available at https://github.com/yumeow0122/CGPT.
|
https://arxiv.org/abs/2601.15849
|
Academic Papers
|
svg
|
8e34499b8aa40978a6c1ed2f4ea765f89dcf39d15f98602ad5ccf68b74c8b962
|
2026-01-23T00:00:00-05:00
|
Practical applications of Set Shaping Theory to Non-Uniform Sequences
|
arXiv:2601.15853v1 Announce Type: new Abstract: Set Shaping Theory (SST) moves beyond the classical fixed-space model by constructing bijective mappings of the original sequence set into structured regions of a larger sequence space. These shaped subsets are characterized by a reduced average information content, measured by the product of the empirical entropy and the length, yielding $(N+k)H_0(f(s)) < NH_0(s)$, which represents the universal coding limit when the source distribution is unknown. The principal experimental difficulty in applying Set Shaping Theory to non-uniform sequences arises from the need to order the sequences of both the original and transformed sets according to their information content. An exact ordering of these sets entails exponential complexity, rendering a direct implementation impractical. In this article, we show that this obstacle can be overcome by performing an approximate but informative ordering that preserves the structural requirements of SST while achieving the shaping gain predicted by the theory. This result extends previous experimental findings obtained for uniformly distributed sequences and demonstrates that the shaping advantage of SST persists for non-uniform sequences. Finally, to ensure full reproducibility, the software implementing the proposed method has been made publicly available on GitHub, enabling independent verification of the results reported in this work.
|
https://arxiv.org/abs/2601.15853
|
Academic Papers
|
svg
|
f19a7f858f78e419d8ca7180c1c051c384036fe05132c0087e4ca41c3700cd81
|
2026-01-23T00:00:00-05:00
|
How to Tamper with a Parliament: Strategic Campaigns in Apportionment Elections
|
arXiv:2601.15855v1 Announce Type: new Abstract: In parliamentary elections, parties compete for a limited, typically fixed number of seats. Most parliaments are assembled using apportionment methods that distribute the seats based on the parties' vote counts. Common apportionment methods include divisor sequence methods (like D'Hondt or Sainte-Lagu\"e), the largest-remainder method, and first-past-the-post. In many countries, an electoral threshold is implemented to prevent very small parties from entering the parliament. Further, several countries have apportionment systems that incorporate multiple districts. We study how computationally hard it is to change the election outcome (i.e., to increase or limit the influence of a distinguished party) by convincing a limited number of voters to change their vote. We refer to these bribery-style attacks as \emph{strategic campaigns} and study the corresponding problems in terms of their computational (both classical and parameterized) complexity. We also run extensive experiments on real-world election data and study the effectiveness of optimal campaigns, in particular as opposed to using heuristic bribing strategies and with respect to the influence of the threshold and the influence of the number of districts. For apportionment elections with threshold, finally, we propose -- as an alternative to the standard top-choice mode -- the second-chance mode where voters of parties below the threshold receive a second chance to vote for another party, and we establish computational complexity results also in this setting.
|
https://arxiv.org/abs/2601.15855
|
Academic Papers
|
svg
|
0284a3d47f1faeb43efc4b7ceaa9a394a5e7c67bc27f51d965b8d0e61d902571
|
2026-01-23T00:00:00-05:00
|
Uncertainty-guided Generation of Dark-field Radiographs
|
arXiv:2601.15859v1 Announce Type: new Abstract: X-ray dark-field radiography provides complementary diagnostic information to conventional attenuation imaging by visualizing microstructural tissue changes through small-angle scattering. However, the limited availability of such data poses challenges for developing robust deep learning models. In this work, we present the first framework for generating dark-field images directly from standard attenuation chest X-rays using an Uncertainty-Guided Progressive Generative Adversarial Network. The model incorporates both aleatoric and epistemic uncertainty to improve interpretability and reliability. Experiments demonstrate high structural fidelity of the generated images, with consistent improvement of quantitative metrics across stages. Furthermore, out-of-distribution evaluation confirms that the proposed model generalizes well. Our results indicate that uncertainty-guided generative modeling enables realistic dark-field image synthesis and provides a reliable foundation for future clinical applications.
|
https://arxiv.org/abs/2601.15859
|
Academic Papers
|
svg
|
63ee8c51a507bf80c5104ca996e357c9bd479f0aa0921c78246040f942ba179a
|
2026-01-23T00:00:00-05:00
|
STAR: Semantic Table Representation with Header-Aware Clustering and Adaptive Weighted Fusion
|
arXiv:2601.15860v1 Announce Type: new Abstract: Table retrieval is the task of retrieving the most relevant tables from large-scale corpora given natural language queries. However, structural and semantic discrepancies between unstructured text and structured tables make embedding alignment particularly challenging. Recent methods such as QGpT attempt to enrich table semantics by generating synthetic queries, yet they still rely on coarse partial-table sampling and simple fusion strategies, which limit semantic diversity and hinder effective query-table alignment. We propose STAR (Semantic Table Representation), a lightweight framework that improves semantic table representation through semantic clustering and weighted fusion. STAR first applies header-aware K-means clustering to group semantically similar rows and selects representative centroid instances to construct a diverse partial table. It then generates cluster-specific synthetic queries to comprehensively cover the table's semantic space. Finally, STAR employs weighted fusion strategies to integrate table and query embeddings, enabling fine-grained semantic alignment. This design enables STAR to capture complementary information from structured and textual sources, improving the expressiveness of table representations. Experiments on five benchmarks show that STAR achieves consistently higher Recall than QGpT on all datasets, demonstrating the effectiveness of semantic clustering and adaptive weighted fusion for robust table representation. Our code is available at https://github.com/adsl135789/STAR.
|
https://arxiv.org/abs/2601.15860
|
Academic Papers
|
svg
|
4288c91af4fa115b9fb88869531bd0ca7b1d4a0cd8a92867ef189d717996e542
|
2026-01-23T00:00:00-05:00
|
Finding large sparse induced subgraphs in graphs of small (but not very small) tree-independence number
|
arXiv:2601.15861v1 Announce Type: new Abstract: The independence number of a tree decomposition is the size of a largest independent set contained in a single bag. The tree-independence number of a graph $G$ is the minimum independence number of a tree decomposition of $G$. As shown recently by Lima et al. [ESA~2024], a large family of optimization problems asking for a maximum-weight induced subgraph of bounded treewidth, satisfying a given \textsf{CMSO}$_2$ property, can be solved in polynomial time in graphs whose tree-independence number is bounded by some constant~$k$. However, the complexity of the algorithm of Lima et al. grows rapidly with $k$, making it useless if the tree-independence number is superconstant. In this paper we present a refined version of the algorithm. We show that the same family of problems can be solved in time~$n^{\mathcal{O}(k)}$, where $n$ is the number of vertices of the instance, $k$ is the tree-independence number, and the $\mathcal{O}(\cdot)$-notation hides factors depending on the treewidth bound of the solution and the considered \textsf{CMSO}$_2$ property. This running time is quasipolynomial for classes of graphs with polylogarithmic tree-independence number; several such classes were recently discovered. Furthermore, the running time is subexponential for many natural classes of geometric intersection graphs -- namely, ones that admit balanced clique-based separators of sublinear size.
|
https://arxiv.org/abs/2601.15861
|
Academic Papers
|
svg
|
38c8e8d9ed39fbd2287aff5f651750c898b617196fb6d596a4689eb6f785dbc8
|
2026-01-23T00:00:00-05:00
|
Minimum Envy Graphical House Allocation Beyond Identical Valuations
|
arXiv:2601.15864v1 Announce Type: new Abstract: House allocation is an extremely well-studied problem in the field of fair allocation, where the goal is to assign $n$ houses to $n$ agents while satisfying a certain fairness criterion, e.g., envy-freeness. To model social interactions, the Graphical House Allocation framework introduces a social graph $G$, in which each vertex corresponds to an agent, and an edge $(u, v)$ corresponds to the potential of agent $u$ to envy agent $v$, based on their allocations and valuations. In undirected social graphs, the potential for envy is in both directions. In the Minimum Envy Graphical House Allocation (ME-GHA) problem, given a set of $n$ agents, $n$ houses, a social graph, and agents' valuation functions, the goal is to find an allocation that minimizes the total envy summed up over all the edges of $G$. Recent work [Hosseini et al., AAMAS 2023, AAMAS 2024] studied ME-GHA in the regime of polynomial-time algorithms, and designed exact and approximation algorithms for certain graph classes under identical agent valuations. We initiate the study of ME-GHA with non-identical valuations, a setting that has so far remained unexplored. We investigate the multivariate (parameterized) complexity of ME-GHA by identifying structural restrictions on the social graph and valuation functions that yield tractability. We also design moderately exponential-time algorithms for several graph classes, and a polynomial-time algorithm for binary valuations that returns an allocation with envy at most one when the social graph has maximum degree at most one.
|
https://arxiv.org/abs/2601.15864
|
Academic Papers
|
svg
|
6af99e9580894b8434cc07cd44a70d28550bfb9852260831d4c94986f5b347c2
|
2026-01-23T00:00:00-05:00
|
A Lightweight Brain-Inspired Machine Learning Framework for Coronary Angiography: Hybrid Neural Representation and Robust Learning Strategies
|
arXiv:2601.15865v1 Announce Type: new Abstract: Background: Coronary angiography (CAG) is a cornerstone imaging modality for assessing coronary artery disease and guiding interventional treatment decisions. However, in real-world clinical settings, angiographic images are often characterized by complex lesion morphology, severe class imbalance, label uncertainty, and limited computational resources, posing substantial challenges to conventional deep learning approaches in terms of robustness and generalization. Methods: The proposed framework is built upon a pretrained convolutional neural network to construct a lightweight hybrid neural representation. A selective neural plasticity training strategy is introduced to enable efficient parameter adaptation. Furthermore, a brain-inspired attention-modulated loss function, combining Focal Loss with label smoothing, is employed to enhance sensitivity to hard samples and uncertain annotations. Class-imbalance-aware sampling and cosine annealing with warm restarts are adopted to mimic rhythmic regulation and attention allocation mechanisms observed in biological neural systems. Results: Experimental results demonstrate that the proposed lightweight brain-inspired model achieves strong and stable performance in binary coronary angiography classification, yielding competitive accuracy, recall, F1-score, and AUC metrics while maintaining high computational efficiency. Conclusion: This study validates the effectiveness of brain-inspired learning mechanisms in lightweight medical image analysis and provides a biologically plausible and deployable solution for intelligent clinical decision support under limited computational resources.
|
https://arxiv.org/abs/2601.15865
|
Academic Papers
|
svg
|
cb7de58379cec7328b5040745aa18571733d5652efec22cc81ad7e56fac7e7f0
|
2026-01-23T00:00:00-05:00
|
Out-of-Distribution Detection Based on Total Variation Estimation
|
arXiv:2601.15867v1 Announce Type: new Abstract: This paper introduces a novel approach to securing machine learning model deployments against potential distribution shifts in practical applications, the Total Variation Out-of-Distribution (TV-OOD) detection method. Existing methods have produced satisfactory results, but TV-OOD improves upon these by leveraging the Total Variation Network Estimator to calculate each input's contribution to the overall total variation. By defining this as the total variation score, TV-OOD discriminates between in- and out-of-distribution data. The method's efficacy was tested across a range of models and datasets, consistently yielding results in image classification tasks that were either comparable or superior to those achieved by leading-edge out-of-distribution detection techniques across all evaluation metrics.
|
https://arxiv.org/abs/2601.15867
|
Academic Papers
|
svg
|
f6d14e96ac51c378cd449b4a82fc8f91fc00f1f5e360bdc6db05c68439f52ae2
|
2026-01-23T00:00:00-05:00
|
Artificial Rigidities vs. Biological Noise: A Comparative Analysis of Multisensory Integration in AV-HuBERT and Human Observers
|
arXiv:2601.15869v1 Announce Type: new Abstract: This study evaluates AV-HuBERT's perceptual bio-fidelity by benchmarking its response to incongruent audiovisual stimuli (McGurk effect) against human observers (N=44). Results reveal a striking quantitative isomorphism: AI and humans exhibited nearly identical auditory dominance rates (32.0% vs. 31.8%), suggesting the model captures biological thresholds for auditory resistance. However, AV-HuBERT showed a deterministic bias toward phonetic fusion (68.0%), significantly exceeding human rates (47.7%). While humans displayed perceptual stochasticity and diverse error profiles, the model remained strictly categorical. Findings suggest that current self-supervised architectures mimic multisensory outcomes but lack the neural variability inherent to human speech perception.
|
https://arxiv.org/abs/2601.15869
|
Academic Papers
|
svg
|
95ddab98200c2d306c47ededbe1c09187249922292f0a7946f7e237bbeafbc28
|
2026-01-23T00:00:00-05:00
|
Why Inference in Large Models Becomes Decomposable After Training
|
arXiv:2601.15871v1 Announce Type: new Abstract: Inference in large-scale AI models is typically performed on dense parameter matrices, leading to inference cost and system complexity that scale unsustainably with model size. This limitation does not arise from insufficient model capacity, but from treating post-training inference systems as monolithic operators while ignoring internal structures formed during learning. We show that gradient update events in large models are highly localized and selective, leaving many parameter dependencies statistically indistinguishable from their initialization distribution after training. As a result, post-training inference systems are structurally non-uniform and inherently decomposable. Based on this observation, we introduce a post-training statistical criterion and a structural annealing procedure that removes unsupported dependencies and reveals stable, independent substructures. This work establishes a post-training, model-agnostic structural view of inference systems and enables structured, parallel inference without modifying model functionality or interfaces.
|
https://arxiv.org/abs/2601.15871
|
Academic Papers
|
svg
|
7f06f1c9b4ead26e022e3588b4a18f75e09a5b1918d289fc4104273e8a535eff
|
2026-01-23T00:00:00-05:00
|
PF-D2M: A Pose-free Diffusion Model for Universal Dance-to-Music Generation
|
arXiv:2601.15872v1 Announce Type: new Abstract: Dance-to-music generation aims to generate music that is aligned with dance movements. Existing approaches typically rely on body motion features extracted from a single human dancer and limited dance-to-music datasets, which restrict their performance and applicability to real-world scenarios involving multiple dancers and non-human dancers. In this paper, we propose PF-D2M, a universal diffusion-based dance-to-music generation model that incorporates visual features extracted from dance videos. PF-D2M is trained with a progressive training strategy that effectively addresses data scarcity and generalization challenges. Both objective and subjective evaluations show that PF-D2M achieves state-of-the-art performance in dance-music alignment and music quality.
|
https://arxiv.org/abs/2601.15872
|
Academic Papers
|
svg
|
85ffb0e82ecfed7d069079b81ed5b7b441d8be35b757c0d9eebaf21ed5e2e363
|
2026-01-23T00:00:00-05:00
|
SoK: Challenges in Tabular Membership Inference Attacks
|
arXiv:2601.15874v1 Announce Type: new Abstract: Membership Inference Attacks (MIAs) are currently a dominant approach for evaluating privacy in machine learning applications. Despite their significance in identifying records belonging to the training dataset, several concerns remain unexplored, particularly with regard to tabular data. In this paper, first, we provide an extensive review and analysis of MIAs considering two main learning paradigms: centralized and federated learning. We extend and refine the taxonomy for both. Second, we demonstrate the efficacy of MIAs in tabular data using several attack strategies, also including defenses. Furthermore, in a federated learning scenario, we consider the threat posed by an outsider adversary, which is often neglected. Third, we demonstrate the high vulnerability of single-outs (records with a unique signature) to MIAs. Lastly, we explore how MIAs transfer across model architectures. Our results point towards a general poor performance of these attacks in tabular data which contrasts with previous state-of-the-art. Notably, even attacks with limited attack performance can still successfully expose a large portion of single-outs. Moreover, our findings suggest that using different surrogate models makes MIAs more effective.
|
https://arxiv.org/abs/2601.15874
|
Academic Papers
|
svg
|
9d0d578a5b2aba17ba707e8ab3cdb521ed581efb8de6aef5af91fad99e7f460f
|
2026-01-23T00:00:00-05:00
|
EvoCUA: Evolving Computer Use Agents via Learning from Scalable Synthetic Experience
|
arXiv:2601.15876v1 Announce Type: new Abstract: The development of native computer-use agents (CUA) represents a significant leap in multimodal AI. However, their potential is currently bottlenecked by the constraints of static data scaling. Existing paradigms relying primarily on passive imitation of static datasets struggle to capture the intricate causal dynamics inherent in long-horizon computer tasks. In this work, we introduce EvoCUA, a native computer use agentic model. Unlike static imitation, EvoCUA integrates data generation and policy optimization into a self-sustaining evolutionary cycle. To mitigate data scarcity, we develop a verifiable synthesis engine that autonomously generates diverse tasks coupled with executable validators. To enable large-scale experience acquisition, we design a scalable infrastructure orchestrating tens of thousands of asynchronous sandbox rollouts. Building on these massive trajectories, we propose an iterative evolving learning strategy to efficiently internalize this experience. This mechanism dynamically regulates policy updates by identifying capability boundaries -- reinforcing successful routines while transforming failure trajectories into rich supervision through error analysis and self-correction. Empirical evaluations on the OSWorld benchmark demonstrate that EvoCUA achieves a success rate of 56.7%, establishing a new open-source state-of-the-art. Notably, EvoCUA significantly outperforms the previous best open-source model, OpenCUA-72B (45.0%), and surpasses leading closed-weights models such as UI-TARS-2 (53.1%). Crucially, our results underscore the generalizability of this approach: the evolving paradigm driven by learning from experience yields consistent performance gains across foundation models of varying scales, establishing a robust and scalable path for advancing native agent capabilities.
|
https://arxiv.org/abs/2601.15876
|
Academic Papers
|
svg
|
ac6b500be86b38bfeb0d7f849a3b5b8ae20688b59b0ebb3ad5f0682edcd85adf
|
2026-01-23T00:00:00-05:00
|
Evaluating and Achieving Controllable Code Completion in Code LLM
|
arXiv:2601.15879v1 Announce Type: new Abstract: Code completion has become a central task, gaining significant attention with the rise of large language model (LLM)-based tools in software engineering. Although recent advances have greatly improved LLMs' code completion abilities, evaluation methods have not advanced equally. Most current benchmarks focus solely on functional correctness of code completions based on given context, overlooking models' ability to follow user instructions during completion, a common scenario in LLM-assisted programming. To address this limitation, we present the first instruction-guided code completion benchmark, Controllable Code Completion Benchmark (C3-Bench), comprising 2,195 carefully designed completion tasks. Through comprehensive evaluation of over 40 mainstream LLMs across C3-Bench and conventional benchmarks, we reveal substantial gaps in instruction-following capabilities between open-source and advanced proprietary models during code completion tasks. Moreover, we develop a straightforward data synthesis pipeline that leverages Qwen2.5-Coder to generate high-quality instruction-completion pairs for supervised fine-tuning (SFT). The resulting model, Qwen2.5-Coder-C3, achieves state-of-the-art performance on C3-Bench. Our findings provide valuable insights for enhancing LLMs' code completion and instruction-following capabilities, establishing new directions for future research in code LLMs. To facilitate reproducibility and foster further research in code LLMs, we open-source all code, datasets, and models.
|
https://arxiv.org/abs/2601.15879
|
Academic Papers
|
svg
|
0586413d6bd0f43c2e81d7df7fd9a20d1b4544e5bddb1766014442636787267f
|
2026-01-23T00:00:00-05:00
|
PMPBench: A Paired Multi-Modal Pan-Cancer Benchmark for Medical Image Synthesis
|
arXiv:2601.15884v1 Announce Type: new Abstract: Contrast medium plays a pivotal role in radiological imaging, as it amplifies lesion conspicuity and improves detection for the diagnosis of tumor-related diseases. However, depending on the patient's health condition or the medical resources available, the use of contrast medium is not always feasible. Recent work has explored AI-based image translation to synthesize contrast-enhanced images directly from non-contrast scans, aiming to reduce side effects and streamline clinical workflows. Progress in this direction has been constrained by data limitations: (1) existing public datasets focus almost exclusively on brain-related paired MR modalities; (2) other collections include partially paired data but suffer from missing modalities/timestamps and imperfect spatial alignment; (3) explicit labeling of CT vs. CTC or DCE phases is often absent; (4) substantial resources remain private. To bridge this gap, we introduce the first public, fully paired, pan-cancer medical imaging dataset spanning 11 human organs. The MR data include complete dynamic contrast-enhanced (DCE) sequences covering all three phases (DCE1-DCE3), while the CT data provide paired non-contrast and contrast-enhanced acquisitions (CTC). The dataset is curated for anatomical correspondence, enabling rigorous evaluation of 1-to-1, N-to-1, and N-to-N translation settings (e.g., predicting DCE phases from non-contrast inputs). Built upon this resource, we establish a comprehensive benchmark. We report results from representative baselines of contemporary image-to-image translation. We release the dataset and benchmark to catalyze research on safe, effective contrast synthesis, with direct relevance to multi-organ oncology imaging workflows. Our code and dataset are publicly available at https://github.com/YifanChen02/PMPBench.
|
https://arxiv.org/abs/2601.15884
|
Academic Papers
|
svg
|
c6ff692652a1642b8f4e760fc5c90d4ded7200c44b07e4838f30aade56ddd702
|
2026-01-23T00:00:00-05:00
|
Understanding the Transfer Limits of Vision Foundation Models
|
arXiv:2601.15888v1 Announce Type: new Abstract: Foundation models leverage large-scale pretraining to capture extensive knowledge, demonstrating generalization in a wide range of language tasks. By comparison, vision foundation models (VFMs) often exhibit uneven improvements across downstream tasks, despite substantial computational investment. We postulate that this limitation arises from a mismatch between pretraining objectives and the demands of downstream vision-and-imaging tasks. Pretraining strategies like masked image reconstruction or contrastive learning shape representations for tasks such as recovery of generic visual patterns or global semantic structures, which may not align with the task-specific requirements of downstream applications including segmentation, classification, or image synthesis. To investigate this in a concrete real-world clinical area, we assess two VFMs, a reconstruction-focused MAE-based model (ProFound) and a contrastive-learning-based model (ProViCNet), on five prostate multiparametric MR imaging tasks, examining how such task alignment influences transfer performance, i.e., from pretraining to fine-tuning. Our findings indicate that better alignment between pretraining and downstream tasks, measured by simple divergence metrics such as maximum-mean-discrepancy (MMD) between the same features before and after fine-tuning, correlates with greater performance improvements and faster convergence, emphasizing the importance of designing and analyzing pretraining objectives with downstream applicability in mind.
|
https://arxiv.org/abs/2601.15888
|
Academic Papers
|
svg
|
d17098f63e015d6c1cc2e42754afb49474ff48d85382bdc609c0db890692f4b5
|
2026-01-23T00:00:00-05:00
|
Existential Positive Transductions of Sparse Graphs
|
arXiv:2601.15890v1 Announce Type: new Abstract: Monadic stability generalizes many tameness notions from structural graph theory such as planarity, bounded degree, bounded tree-width, and nowhere density. The sparsification conjecture predicts that the (possibly dense) monadically stable graph classes are exactly those that can be logically encoded by first-order (FO) transductions in the (always sparse) nowhere dense classes. So far this conjecture has been verified for several special cases, such as for classes of bounded shrub-depth, and for the monadically stable fragments of bounded (linear) clique-width, twin-width, and merge-width. In this work we propose the existential positive sparsification conjecture, predicting that the more restricted co-matching-free, monadically stable classes are exactly those that can be transduced from nowhere dense classes using only existential positive FO formulas. While the general conjecture remains open, we verify its truth for all known special cases of the original conjecture. Even stronger, we find the sparse preimages as subgraphs of the dense input graphs. As a key ingredient, we introduce a new combinatorial operation, called subflip, that arises as the natural co-matching-free analog of the flip operation, which is a central tool in the characterization of monadic stability. Using subflips, we characterize the co-matching-free fragment of monadic stability by appropriate strengthenings of the known flip-flatness and flipper game characterizations for monadic stability. In an attempt to generalize our results to the more expressive MSO logic, we discover (rediscover?) that on relational structures (existential) positive MSO has the same expressive power as (existential) positive FO.
|
https://arxiv.org/abs/2601.15890
|
Academic Papers
|
svg
|
7c4780c52d9b21e22af51557d1c6d1522824d1efd1b844aef71b21a4ebfba3ab
|
2026-01-23T00:00:00-05:00
|
RadJEPA: Radiology Encoder for Chest X-Rays via Joint Embedding Predictive Architecture
|
arXiv:2601.15891v1 Announce Type: new Abstract: Recent advances in medical vision language models guide the learning of visual representations; however, this form of supervision is constrained by the availability of paired image text data, raising the question of whether robust radiology encoders can be learned without relying on language supervision. In this work, we introduce RadJEPA, a self-supervised framework built on a Joint Embedding Predictive Architecture that learns without language supervision. Pre-trained solely on unlabeled chest X-ray images, the model learns to predict latent representations of masked image regions. This predictive objective differs fundamentally from both image text pre-training and DINO-style self-distillation: rather than aligning global representations across views or modalities, RadJEPA explicitly models latent-space prediction. We evaluate the learned encoder on disease classification, semantic segmentation, and report generation tasks. Across benchmarks, RadJEPA achieves performance exceeding state-of-the-art approaches, including Rad-DINO.
|
https://arxiv.org/abs/2601.15891
|
Academic Papers
|
svg
|
c3898579ae5f04f3a75e8d4ff6318faa5fe13b96cc992c2918f9e7aee85f1116
|
2026-01-23T00:00:00-05:00
|
Stable-DiffCoder: Pushing the Frontier of Code Diffusion Large Language Model
|
arXiv:2601.15892v1 Announce Type: new Abstract: Diffusion-based language models (DLLMs) offer non-sequential, block-wise generation and richer data reuse compared to autoregressive (AR) models, but existing code DLLMs still lag behind strong AR baselines under comparable budgets. We revisit this setting in a controlled study and introduce Stable-DiffCoder, a block diffusion code model that reuses the Seed-Coder architecture, data, and training pipeline. To enable efficient knowledge learning and stable training, we incorporate a block diffusion continual pretraining (CPT) stage enhanced by a tailored warmup and block-wise clipped noise schedule. Under the same data and architecture, Stable-DiffCoder overall outperforms its AR counterpart on a broad suite of code benchmarks. Moreover, relying only on the CPT and supervised fine-tuning stages, Stable-DiffCoder achieves stronger performance than a wide range of ~8B ARs and DLLMs, demonstrating that diffusion-based training can improve code modeling quality beyond AR training alone. Finally, diffusion-based any-order modeling improves structured code modeling for editing and reasoning and, through data augmentation, benefits low-resource coding languages.
|
https://arxiv.org/abs/2601.15892
|
Academic Papers
|
svg
|
cd0e292a9842e108f501c3f3f98afdc583ab34fd95797e9fb117a49aebd95ffb
|
2026-01-23T00:00:00-05:00
|
Iterative Amortized Hierarchical VAE
|
arXiv:2601.15894v1 Announce Type: new Abstract: In this paper we propose the Iterative Amortized Hierarchical Variational Autoencoder (IA-HVAE), which expands on amortized inference with a hybrid scheme containing an initial amortized guess and iterative refinement with decoder gradients. We achieve this by creating a linearly separable decoder in a transform domain (e.g. Fourier space), enabling real-time applications with very high model depths. The architectural change leads to a 35x speed-up for iterative inference with respect to the traditional HVAE. We show that our hybrid approach outperforms fully amortized and fully iterative equivalents in accuracy and speed respectively. Moreover, the IA-HVAE shows improved reconstruction quality over a vanilla HVAE in inverse problems such as deblurring and denoising.
|
https://arxiv.org/abs/2601.15894
|
Academic Papers
|
svg
|
98722913c179121a6e1eb59f70a6232d4a563f0e19e4590dc3c7a12abdde3972
|
2026-01-23T00:00:00-05:00
|
Co-Constructing Alignment: A Participatory Approach to Situate AI Values
|
arXiv:2601.15895v1 Announce Type: new Abstract: As AI systems become embedded in everyday practice, value misalignment has emerged as a pressing concern. Yet, dominant alignment approaches remain model centric, treating users as passive recipients of prespecified values rather than as epistemic agents who encounter and respond to misalignment during interactions. Drawing on situated perspectives, we frame alignment as an interactional practice co-constructed during human-AI interaction. We investigate how users understand and wish to contribute to this process through a participatory workshop that combines misalignment diaries with generative design activities. We surface how misalignments materialise in practice and how users envision acting on them, grounded in the context of researchers using Large Language Models as research assistants. Our findings show that misalignments are experienced less as abstract ethical violations than as unexpected responses, and task or social breakdowns. Participants articulated roles ranging from adjusting and interpreting model behaviour to deliberate non-engagement as an alignment strategy. We conclude with implications for designing systems that support alignment as an ongoing, situated, and shared practice.
|
https://arxiv.org/abs/2601.15895
|
Academic Papers
|
svg
|
b9f56ce39071f98f7e459eec677cb019ef51b45d0a86e565605841e4a0c57955
|
2026-01-23T00:00:00-05:00
|
ThermoSplat: Cross-Modal 3D Gaussian Splatting with Feature Modulation and Geometry Decoupling
|
arXiv:2601.15897v1 Announce Type: new Abstract: Multi-modal scene reconstruction integrating RGB and thermal infrared data is essential for robust environmental perception across diverse lighting and weather conditions. However, extending 3D Gaussian Splatting (3DGS) to multi-spectral scenarios remains challenging. Current approaches often struggle to fully leverage the complementary information of multi-modal data, typically relying on mechanisms that either tend to neglect cross-modal correlations or leverage shared representations that fail to adaptively handle the complex structural correlations and physical discrepancies between spectra. To address these limitations, we propose ThermoSplat, a novel framework that enables deep spectral-aware reconstruction through active feature modulation and adaptive geometry decoupling. First, we introduce a Cross-Modal FiLM Modulation mechanism that dynamically conditions shared latent features on thermal structural priors, effectively guiding visible texture synthesis with reliable cross-modal geometric cues. Second, to accommodate modality-specific geometric inconsistencies, we propose a Modality-Adaptive Geometric Decoupling scheme that learns independent opacity offsets and executes an independent rasterization pass for the thermal branch. Additionally, a hybrid rendering pipeline is employed to integrate explicit Spherical Harmonics with implicit neural decoding, ensuring both semantic consistency and high-frequency detail preservation. Extensive experiments on the RGBT-Scenes dataset demonstrate that ThermoSplat achieves state-of-the-art rendering quality across both visible and thermal spectra.
|
https://arxiv.org/abs/2601.15897
|
Academic Papers
|
svg
|
a2dd4d21dfa0eaf0110bb8c6f783b826255d35cd94dd2ddef9a3e58c9624b30e
|
2026-01-23T00:00:00-05:00
|
Blind Identification of Channel Codes: A Subspace-Coding Approach
|
arXiv:2601.15903v1 Announce Type: new Abstract: The problem of blind identification of channel codes at a receiver involves identifying a code chosen by a transmitter from a known code-family, by observing the transmitted codewords through the channel. Most existing approaches for code-identification are contingent upon the codes in the family having some special structure, and are often computationally expensive otherwise. Further, rigorous analytical guarantees on the performance of these existing techniques are largely absent. This work presents a new method for code-identification on the binary symmetric channel (BSC), inspired by the framework of subspace codes for operator channels, carefully combining principles of Hamming-metric and subspace-metric decoding. We refer to this method as the minimum denoised subspace discrepancy decoder. We present theoretical guarantees for code-identification using this decoder, for bounded-weight errors, and also present a bound on the probability of error when used on the BSC. Simulations demonstrate the improved performance of our decoder for random linear codes beyond existing general-purpose techniques, across most channel conditions and even with a limited number of received vectors.
|
https://arxiv.org/abs/2601.15903
|
Academic Papers
|
svg
|
a9e9f7947d394736ef4aedeaf7406c74facd9cf6cc2d643a35815c0fdbdaeb9d
|
2026-01-23T00:00:00-05:00
|
Dynamic Server Allocation Under Stochastic Switchover on Time-Varying Links
|
arXiv:2601.15904v1 Announce Type: new Abstract: Dynamic resource allocation to parallel queues is a cornerstone of network scheduling, yet classical solutions often fail when accounting for the overhead of switching delays to queues with superior link conditions. In particular, system performance is further degraded when switching delays are stochastic and inhomogeneous. In this domain, the myopic, Max-Weight policy struggles, as it is agnostic to switching delays. This paper introduces ACI, a non-myopic, frame-based scheduling framework that directly amortizes these switching delays. We first use a Lyapunov drift analysis to prove that backlog-driven ACI is throughput-optimal with respect to a scaled capacity region; then validate ACI's effectiveness on multi-UAV networks with an FSO backhaul. Finally, we demonstrate how adapting its core urgency metric provides the flexibility to navigate the throughput-latency trade-off.
|
https://arxiv.org/abs/2601.15904
|
Academic Papers
|
svg
|
8babf70b43b9158a54791d752a8df8acea452db5a7c12d81e66bc73f16eb0f4b
|
2026-01-23T00:00:00-05:00
|
Pregroup representable expansions of residuated lattices
|
arXiv:2601.15905v1 Announce Type: new Abstract: Group representable relation algebras play an important role in the study of representable relation algebras. The class of distributive involutive FL-algebras (DInFL-algebras) generalises relation algebras, as well as Sugihara monoids and MV-algebras. We construct DInFL-algebras from pregroups and show that they can be represented as algebras of binary relations. Even for finite pregroups we obtain relational representations of DInFL-algebras with non-Boolean lattice reducts. If the pregroup is enriched with a particular unary order-reversing operation, then our construction yields representation results for distributive quasi relation algebras.
|
https://arxiv.org/abs/2601.15905
|
Academic Papers
|
svg
|
f9eebbb8c69104e4562cea02d2bbc5f2645478ff51d9911985b6ecefdc4026bf
|
2026-01-23T00:00:00-05:00
|
Opening the Black Box: Preliminary Insights into Affective Modeling in Multimodal Foundation Models
|
arXiv:2601.15906v1 Announce Type: new Abstract: Understanding where and how emotions are represented in large-scale foundation models remains an open problem, particularly in multimodal affective settings. Despite the strong empirical performance of recent affective models, the internal architectural mechanisms that support affective understanding and generation are still poorly understood. In this work, we present a systematic mechanistic study of affective modeling in multimodal foundation models. Across multiple architectures, training strategies, and affective tasks, we analyze how emotion-oriented supervision reshapes internal model parameters. Our results consistently reveal a clear and robust pattern: affective adaptation does not primarily focus on the attention module, but instead localizes to the feed-forward gating projection (gate_proj). Through controlled module transfer, targeted single-module adaptation, and destructive ablation, we further demonstrate that gate_proj is sufficient, efficient, and necessary for affective understanding and generation. Notably, by tuning only approximately 24.5% of the parameters tuned by AffectGPT, our approach achieves 96.6% of its average performance across eight affective tasks, highlighting substantial parameter efficiency. Together, these findings provide empirical evidence that affective capabilities in foundation models are structurally mediated by feed-forward gating mechanisms and identify gate_proj as a central architectural locus of affective modeling.
|
https://arxiv.org/abs/2601.15906
|
Academic Papers
|
svg
|
e9f5b37d2bb506ce78d8abe01013ceb168c50138a6a4eac7d1cfd4abed11094e
|
2026-01-23T00:00:00-05:00
|
Transfer Learning from ImageNet for MEG-Based Decoding of Imagined Speech
|
arXiv:2601.15909v1 Announce Type: new Abstract: Non-invasive decoding of imagined speech remains challenging due to weak, distributed signals and limited labeled data. Our paper introduces an image-based approach that transforms magnetoencephalography (MEG) signals into time-frequency representations compatible with pretrained vision models. MEG data from 21 participants performing imagined speech tasks were projected into three spatial scalogram mixtures via a learnable sensor-space convolution, producing compact image-like inputs for ImageNet-pretrained vision architectures. These models outperformed classical and non-pretrained models, achieving up to 90.4% balanced accuracy for imagery vs. silence, 81.0% vs. silent reading, and 60.6% for vowel decoding. Cross-subject evaluation confirmed that pretrained models capture shared neural representations, and temporal analyses localized discriminative information to imagery-locked intervals. These findings show that pretrained vision models applied to image-based MEG representations can effectively capture the structure of imagined speech in non-invasive neural signals.
|
https://arxiv.org/abs/2601.15909
|
Academic Papers
|
svg
|
69e875c70e0030c8dd68ee0ba29d52219c11231f00a6dac5b4cba93426d73ad3
|
2026-01-23T00:00:00-05:00
|
A fully diagonalized spectral method on the unit ball
|
arXiv:2601.15911v1 Announce Type: new Abstract: Our main objective in this work is to show how Sobolev orthogonal polynomials emerge as a useful tool within the framework of spectral methods for boundary-value problems. The solution of a boundary-value problem for a stationary Schrödinger equation on the unit ball can be studied from a variational perspective. In this variational formulation, a Sobolev inner product naturally arises. As test functions, we consider the linear space of the polynomials satisfying the boundary conditions on the sphere, and a basis of mutually orthogonal polynomials with respect to the Sobolev inner product is provided. The basis of the proposed method is given in terms of spherical harmonics and univariate Sobolev orthogonal polynomials. The connection formula between these Sobolev orthogonal polynomials and the classical orthogonal polynomials on the ball is established. Consequently, the Sobolev Fourier coefficients of a function satisfying the boundary value problem are recursively derived. Finally, one numerical experiment is presented.
|
https://arxiv.org/abs/2601.15911
|
Academic Papers
|
svg
|
d3a11046d6fe5593e3b18825e9f94573b4f1aa352e75d6e9d74f722bce17ed39
|
2026-01-23T00:00:00-05:00
|
TeNet: Text-to-Network for Compact Policy Synthesis
|
arXiv:2601.15912v1 Announce Type: new Abstract: Robots that follow natural-language instructions often either plan at a high level using hand-designed interfaces or rely on large end-to-end models that are difficult to deploy for real-time control. We propose TeNet (Text-to-Network), a framework for instantiating compact, task-specific robot policies directly from natural language descriptions. TeNet conditions a hypernetwork on text embeddings produced by a pretrained large language model (LLM) to generate a fully executable policy, which then operates solely on low-dimensional state inputs at high control frequencies. By using the language only once at the policy instantiation time, TeNet inherits the general knowledge and paraphrasing robustness of pretrained LLMs while remaining lightweight and efficient at execution time. To improve generalization, we optionally ground language in behavior during training by aligning text embeddings with demonstrated actions, while requiring no demonstrations at inference time. Experiments on MuJoCo and Meta-World benchmarks show that TeNet produces policies that are orders of magnitude smaller than sequence-based baselines, while achieving strong performance in both multi-task and meta-learning settings and supporting high-frequency control. These results show that text-conditioned hypernetworks offer a practical way to build compact, language-driven controllers for resource-constrained robot control tasks with real-time requirements.
|
https://arxiv.org/abs/2601.15912
|
Academic Papers
|
svg
|
6d6a775a7de83d85ab920edfc809bcfa232f54dfe565346368b0149083ba4321
|
2026-01-23T00:00:00-05:00
|
The Latency Wall: Benchmarking Off-the-Shelf Emotion Recognition for Real-Time Virtual Avatars
|
arXiv:2601.15914v1 Announce Type: new Abstract: In the realm of Virtual Reality (VR) and Human-Computer Interaction (HCI), real-time emotion recognition shows promise for supporting individuals with Autism Spectrum Disorder (ASD) in improving social skills. This task requires a strict latency-accuracy trade-off, with motion-to-photon (MTP) latency kept below 140 ms to maintain contingency. However, most off-the-shelf Deep Learning models prioritize accuracy over the strict timing constraints of commodity hardware. As a first step toward accessible VR therapy, we benchmark State-of-the-Art (SOTA) models for Zero-Shot Facial Expression Recognition (FER) on virtual characters using the UIBVFED dataset. We evaluate Medium and Nano variants of YOLO (v8, v11, and v12) for face detection, alongside general-purpose Vision Transformers including CLIP, SigLIP, and ViT-FER. Our results on CPU-only inference demonstrate that while face detection on stylized avatars is robust (100% accuracy), a "Latency Wall" exists in the classification stage. The YOLOv11n architecture offers the optimal balance for detection (~54 ms). However, general-purpose Transformers like CLIP and SigLIP fail to achieve a viable latency-accuracy trade-off (>150 ms) for real-time loops. This study highlights the necessity for lightweight, domain-specific architectures to enable accessible, real-time AI in therapeutic settings.
|
https://arxiv.org/abs/2601.15914
|
Academic Papers
|
svg
|
83ab6e27d6f71f56bf0088b0774380ac0e553fab85cdfc11ebdafeb82ae6a241
|
2026-01-23T00:00:00-05:00
|
A Multi-View Pipeline and Benchmark Dataset for 3D Hand Pose Estimation in Surgery
|
arXiv:2601.15918v1 Announce Type: new Abstract: Purpose: Accurate 3D hand pose estimation supports surgical applications such as skill assessment, robot-assisted interventions, and geometry-aware workflow analysis. However, surgical environments pose severe challenges, including intense and localized lighting, frequent occlusions by instruments or staff, and uniform hand appearance due to gloves, combined with a scarcity of annotated datasets for reliable model training. Method: We propose a robust multi-view pipeline for 3D hand pose estimation in surgical contexts that requires no domain-specific fine-tuning and relies solely on off-the-shelf pretrained models. The pipeline integrates reliable person detection, whole-body pose estimation, and state-of-the-art 2D hand keypoint prediction on tracked hand crops, followed by a constrained 3D optimization. In addition, we introduce a novel surgical benchmark dataset comprising over 68,000 frames and 3,000 manually annotated 2D hand poses with triangulated 3D ground truth, recorded in a replica operating room under varying levels of scene complexity. Results: Quantitative experiments demonstrate that our method consistently outperforms baselines, achieving a 31% reduction in 2D mean joint error and a 76% reduction in 3D mean per-joint position error. Conclusion: Our work establishes a strong baseline for 3D hand pose estimation in surgery, providing both a training-free pipeline and a comprehensive annotated dataset to facilitate future research in surgical computer vision.
|
https://arxiv.org/abs/2601.15918
|
Academic Papers
|
svg
|
3a787210556a53f2de4ffd7a99fe1757d8dd02ddbf2bad821338a0eb1e1025d4
|
2026-01-23T00:00:00-05:00
|
Class Confidence Aware Reweighting for Long Tailed Learning
|
arXiv:2601.15924v1 Announce Type: new Abstract: Deep neural network models degrade significantly on long-tailed data distributions, where the training data are dominated by a small set of head classes while the tail classes receive far fewer training examples. To address this class imbalance, the related literature has focused mainly on adjustments in the decision space, such as logit-level corrections that compensate for class-prior bias, while paying far less attention to the optimization process shaped by differences in prediction confidence across samples. In the current study, we present a class- and confidence-aware re-weighting scheme for long-tailed learning. The scheme operates purely at the loss level and is complementary to existing logit-adjustment methods. In its practical implementation, we use a modulation function Ω(p_t, f_c) that scales each sample's contribution to training according to the confidence of the prediction and the relative frequency of the corresponding class. Extensive experiments on the CIFAR-100-LT, ImageNet-LT, and iNaturalist2018 datasets under various imbalance factors corroborate our theoretical discussion.
|
https://arxiv.org/abs/2601.15924
|
Academic Papers
|
svg
|
4f37ecb677665fb3f38e3f6df7bc186eca82ca56850e505d076a61608a538b54
|
2026-01-23T00:00:00-05:00
|
A Remark on Downlink Massive Random Access
|
arXiv:2601.15928v1 Announce Type: new Abstract: In downlink massive random access (DMRA), a base station transmits messages to a typically small subset of active users, selected randomly from a massive number of total users. Explicitly encoding the identities of active users would incur a significant overhead scaling logarithmically with the number of total users. Recently, via a random coding argument, Song, Attiah and Yu have shown that the overhead can be reduced to within some upper bound irrespective of the number of total users. In this remark, recognizing that the code design for DMRA is an instance of covering arrays in combinatorics, we show that there exists a deterministic construction of variable-length codes that incur an overhead no greater than $1 + \log_2 e$ bits.
|
https://arxiv.org/abs/2601.15928
|
Academic Papers
|
svg
|
fe7beb6a32a77cd2b8f1340c318edf8576c4dffc27416662e866324ccf379663
|
2026-01-23T00:00:00-05:00
|
NeuroMamba: Multi-Perspective Feature Interaction with Visual Mamba for Neuron Segmentation
|
arXiv:2601.15929v1 Announce Type: new Abstract: Neuron segmentation is the cornerstone of reconstructing comprehensive neuronal connectomes, which is essential for deciphering the functional organization of the brain. The irregular morphology and densely intertwined structures of neurons make this task particularly challenging. Prevailing CNN-based methods often fail to resolve ambiguous boundaries due to the lack of long-range context, whereas Transformer-based methods suffer from boundary imprecision caused by the loss of voxel-level details during patch partitioning. To address these limitations, we propose NeuroMamba, a multi-perspective framework that exploits the linear complexity of Mamba to enable patch-free global modeling and synergizes this with complementary local feature modeling, thereby efficiently capturing long-range dependencies while meticulously preserving fine-grained voxel details. Specifically, we design a channel-gated Boundary Discriminative Feature Extractor (BDFE) to enhance local morphological cues. Complementing this, we introduce the Spatial Continuous Feature Extractor (SCFE), which integrates a resolution-aware scanning mechanism into the Visual Mamba architecture to adaptively model global dependencies across varying data resolutions. Finally, a cross-modulation mechanism synergistically fuses these multi-perspective features. Our method demonstrates state-of-the-art performance across four public EM datasets, validating its exceptional adaptability to both anisotropic and isotropic resolutions. The source code will be made publicly available.
|
https://arxiv.org/abs/2601.15929
|
Academic Papers
|
svg
|
b2f31563b0f764114560d6d318c56ed8b4748b75418ecab3e824f6f4347cfd04
|
2026-01-23T00:00:00-05:00
|
MMGRid: Navigating Temporal-aware and Cross-domain Generative Recommendation via Model Merging
|
arXiv:2601.15930v1 Announce Type: new Abstract: Model merging (MM) offers an efficient mechanism for integrating multiple specialized models without access to original training data or costly retraining. While MM has demonstrated success in domains like computer vision, its role in recommender systems (RSs) remains largely unexplored. Recently, Generative Recommendation (GR) has emerged as a new paradigm in RSs, characterized by rapidly growing model scales and substantial computational costs, making MM particularly appealing for cost-sensitive deployment scenarios. In this work, we present the first systematic study of MM in GR through a contextual lens. We focus on a fundamental yet underexplored challenge in real-world deployment: how to merge generative recommenders specialized to different real-world contexts, arising from temporally evolving user behaviors and heterogeneous application domains. To this end, we propose a unified framework MMGRid, a structured contextual grid of GR checkpoints that organizes models trained under diverse contexts induced by temporal evolution and domain diversity. All checkpoints are derived from a shared base LLM but fine-tuned on context-specific data, forming a realistic and controlled model space for systematically analyzing MM across GR paradigms and merging algorithms. Our investigation reveals several key insights. First, training GR models from LLMs can introduce parameter conflicts during merging due to token distribution shifts and objective disparities; such conflicts can be alleviated by disentangling task-aware and context-specific parameter changes via base model replacement. Second, incremental training across contexts induces recency bias, which can be effectively balanced through weighted contextual merging. Notably, we observe that optimal merging weights correlate with context-dependent interaction characteristics, offering practical guidance for weight selection in real-world deployments.
|
https://arxiv.org/abs/2601.15930
|
Academic Papers
|
svg
|
883c5ddbd7515e7410a6861361f7332d7b12c1e0431cdc9d08860b5936e64f8e
|
2026-01-23T00:00:00-05:00
|
ICON: Invariant Counterfactual Optimization with Neuro-Symbolic Priors for Text-Based Person Search
|
arXiv:2601.15931v1 Announce Type: new Abstract: Text-Based Person Search (TBPS) holds unique value in real-world surveillance bridging visual perception and language understanding, yet current paradigms utilizing pre-training models often fail to transfer effectively to complex open-world scenarios. The reliance on "Passive Observation" leads to multifaceted spurious correlations and spatial semantic misalignment, causing a lack of robustness against distribution shifts. To fundamentally resolve these defects, this paper proposes ICON (Invariant Counterfactual Optimization with Neuro-symbolic priors), a framework integrating causal and topological priors. First, we introduce Rule-Guided Spatial Intervention to strictly penalize sensitivity to bounding box noise, forcibly severing location shortcuts to achieve geometric invariance. Second, Counterfactual Context Disentanglement is implemented via semantic-driven background transplantation, compelling the model to ignore background interference for environmental independence. Then, we employ Saliency-Driven Semantic Regularization with adaptive masking to resolve local saliency bias and guarantee holistic completeness. Finally, Neuro-Symbolic Topological Alignment utilizes neuro-symbolic priors to constrain feature matching, ensuring activated regions are topologically consistent with human structural logic. Experimental results demonstrate that ICON not only maintains leading performance on standard benchmarks but also exhibits exceptional robustness against occlusion, background interference, and localization noise. This approach effectively advances the field by shifting from fitting statistical co-occurrences to learning causal invariance.
|
https://arxiv.org/abs/2601.15931
|
Academic Papers
|
svg
|
e979b2fd13f4d195496b5532ca9c7efe95a98b5e4ec30ae067f41aed0438729e
|
2026-01-23T00:00:00-05:00
|
Layered automata: A canonical model for automata over infinite words
|
arXiv:2601.15940v1 Announce Type: new Abstract: We introduce layered automata, a subclass of alternating parity automata that generalises deterministic automata. Assuming a consistency property, these automata are history deterministic and 0-1 probabilistic. We show that every omega-regular language is recognised by a unique minimal consistent layered automaton, and that this canonical form can be computed in polynomial time from every layered or deterministic automaton. We further establish that for layered automata both consistency checking and inclusion testing can be performed in polynomial time. Much like deterministic finite automata, minimal consistent layered automata admit a characterisation based on congruences.
|
https://arxiv.org/abs/2601.15940
|
Academic Papers
|
svg
|
3459878f5876310897902e403fae8df594aa2bd7ecf92f637c4c4b6618b3667c
|
2026-01-23T00:00:00-05:00
|
Accurate Calibration and Robust LiDAR-Inertial Odometry for Spinning Actuated LiDAR Systems
|
arXiv:2601.15946v1 Announce Type: new Abstract: Accurate calibration and robust localization are fundamental for downstream tasks in spinning actuated LiDAR applications. Existing methods, however, require parameterizing extrinsic parameters based on different mounting configurations, limiting their generalizability. Additionally, spinning actuated LiDAR inevitably scans featureless regions, which complicates the balance between scanning coverage and localization robustness. To address these challenges, this letter presents a targetless LiDAR-motor calibration (LM-Calibr) on the basis of the Denavit-Hartenberg convention and an environment-adaptive LiDAR-inertial odometry (EVA-LIO). LM-Calibr supports calibration of LiDAR-motor systems with various mounting configurations. Extensive experiments demonstrate its accuracy and convergence across different scenarios, mounting angles, and initial values. Additionally, EVA-LIO adaptively selects downsample rates and map resolutions according to spatial scale. This adaptivity enables the actuator to operate at maximum speed, thereby enhancing scanning completeness while ensuring robust localization, even when LiDAR briefly scans featureless areas. The source code and hardware design are available on GitHub: github.com/zijiechenrobotics/lm_calibr. The video is available at youtu.be/cZyyrkmeoSk.
|
https://arxiv.org/abs/2601.15946
|
Academic Papers
|
svg
|
bafe79ecc85d613fe3e12c742e4fa19c8dd24bf2caac525e0abf8f5d546fd7a9
|
2026-01-23T00:00:00-05:00
|
Natural Language-Driven Global Mapping of Martian Landforms
|
arXiv:2601.15949v1 Announce Type: new Abstract: Planetary surfaces are typically analyzed using high-level semantic concepts in natural language, yet vast orbital image archives remain organized at the pixel level. This mismatch limits scalable, open-ended exploration of planetary surfaces. Here we present MarScope, a planetary-scale vision-language framework enabling natural language-driven, label-free mapping of Martian landforms. MarScope aligns planetary images and text in a shared semantic space, trained on over 200,000 curated image-text pairs. This framework transforms global geomorphic mapping on Mars by replacing pre-defined classifications with flexible semantic retrieval, enabling arbitrary user queries across the entire planet in 5 seconds with F1 scores up to 0.978. Applications further show that it extends beyond morphological classification to facilitate process-oriented analysis and similarity-based geomorphological mapping at a planetary scale. MarScope establishes a new paradigm where natural language serves as a direct interface for scientific discovery over massive geospatial datasets.
|
https://arxiv.org/abs/2601.15949
|
Academic Papers
|
svg
|
e86f00c1ba67e72de883bbe1af05cacd90a0aa34acfe92d1514fc9d42f6de6da
|
2026-01-23T00:00:00-05:00
|
EVolSplat4D: Efficient Volume-based Gaussian Splatting for 4D Urban Scene Synthesis
|
arXiv:2601.15951v1 Announce Type: new Abstract: Novel view synthesis (NVS) of static and dynamic urban scenes is essential for autonomous driving simulation, yet existing methods often struggle to balance reconstruction time with quality. While state-of-the-art neural radiance fields and 3D Gaussian Splatting approaches achieve photorealism, they often rely on time-consuming per-scene optimization. Conversely, emerging feed-forward methods frequently adopt per-pixel Gaussian representations, which lead to 3D inconsistencies when aggregating multi-view predictions in complex, dynamic environments. We propose EvolSplat4D, a feed-forward framework that moves beyond existing per-pixel paradigms by unifying volume-based and pixel-based Gaussian prediction across three specialized branches. For close-range static regions, we predict consistent geometry of 3D Gaussians over multiple frames directly from a 3D feature volume, complemented by a semantically-enhanced image-based rendering module for predicting their appearance. For dynamic actors, we utilize object-centric canonical spaces and a motion-adjusted rendering module to aggregate temporal features, ensuring stable 4D reconstruction despite noisy motion priors. Far-field scenery is handled by an efficient per-pixel Gaussian branch to ensure full-scene coverage. Experimental results on the KITTI-360, KITTI, Waymo, and PandaSet datasets show that EvolSplat4D reconstructs both static and dynamic environments with superior accuracy and consistency, outperforming both per-scene optimization and state-of-the-art feed-forward baselines.
|
https://arxiv.org/abs/2601.15951
|
Academic Papers
|
svg
|
3292f83c68481c9fdce9dbeaa32c5d581640c651529d96d47a1bc77cfd561890
|
2026-01-23T00:00:00-05:00
|
Decoupling Return-to-Go for Efficient Decision Transformer
|
arXiv:2601.15953v1 Announce Type: new Abstract: The Decision Transformer (DT) has established a powerful sequence modeling approach to offline reinforcement learning. It conditions its action predictions on Return-to-Go (RTG), using it both to distinguish trajectory quality during training and to guide action generation at inference. In this work, we identify a critical redundancy in this design: feeding the entire sequence of RTGs into the Transformer is theoretically unnecessary, as only the most recent RTG affects action prediction. We show that this redundancy can impair DT's performance through experiments. To resolve this, we propose the Decoupled DT (DDT). DDT simplifies the architecture by processing only observation and action sequences through the Transformer, using the latest RTG to guide the action prediction. This streamlined approach not only improves performance but also reduces computational cost. Our experiments show that DDT significantly outperforms DT and establishes competitive performance against state-of-the-art DT variants across multiple offline RL tasks.
|
https://arxiv.org/abs/2601.15953
|
Academic Papers
|
svg
|