2026-01-23T00:00:00-05:00
LLM-based Multimodal Feedback Produces Equivalent Learning and Better Student Perceptions than Educator Feedback
arXiv:2601.15280v1 Announce Type: cross Abstract: Providing timely, targeted, and multimodal feedback helps students quickly correct errors, build deep understanding, and stay motivated, yet delivering it at scale remains a challenge. This study introduces a real-time AI-facilitated multimodal feedback system that integrates structured textual explanations with dynamic multimedia resources, including retrieved references to the most relevant slide pages and streaming AI audio narration. In an online crowdsourcing experiment, we compared this system against fixed business-as-usual educator feedback across three dimensions: (1) learning effectiveness, (2) learner engagement, and (3) perceived feedback quality and value. Results showed that AI multimodal feedback achieved learning gains equivalent to the original educator feedback while significantly outperforming it on perceived clarity, specificity, conciseness, motivation, and satisfaction, and in reducing cognitive load, with comparable correctness, trust, and acceptance. Process logs revealed distinct engagement patterns: for multiple-choice questions, educator feedback encouraged more submissions; for open-ended questions, AI-facilitated targeted suggestions lowered revision barriers and promoted iterative improvement. These findings highlight the potential of AI multimodal feedback to provide scalable, real-time, and context-aware support that both reduces instructor workload and enhances student experience.
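The slide-reference retrieval the system performs can be illustrated as a nearest-neighbor lookup over slide embeddings. The cosine-similarity choice and the toy two-dimensional vectors below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def retrieve_top_slides(query_vec, slide_vecs, k=2):
    """Return indices of the k slide embeddings most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    S = slide_vecs / np.linalg.norm(slide_vecs, axis=1, keepdims=True)
    sims = S @ q                      # cosine similarity per slide
    return np.argsort(-sims)[:k]

# Toy embeddings: slide 0 points almost the same way as the query.
slides = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.05])
print(retrieve_top_slides(query, slides, k=1))
```

In a real system the embeddings would come from a text or multimodal encoder applied to slide content and the student's question.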
https://arxiv.org/abs/2601.15280
Academic Papers
2026-01-23T00:00:00-05:00
Ecosystem Competition and Cross-Market Subsidization: A Dynamic Theory of Platform Pricing
arXiv:2601.15303v1 Announce Type: cross Abstract: Platform giants in China have operated with persistently compressed margins in highly concentrated markets for much of the past decade, despite market shares exceeding 60% in core segments. Standard theory predicts otherwise: either the weaker firm exits, or survivors raise prices to monopoly levels. We argue the puzzle dissolves once firms are viewed as ecosystem optimizers rather than single-market profit maximizers. We develop a dynamic game in which a firm's willingness to subsidize depends on the spillover value its users generate in adjacent markets, which we call ecosystem complementarity. When this complementarity is strong enough, perpetual below-cost pricing emerges as the unique stable equilibrium. The result is not predation in the classical sense; there is no recoupment phase. It is a permanent state of subsidized competition, rational for each firm individually but potentially inefficient in aggregate. We characterize the equilibrium, establish its dynamic stability, and show that welfare losses compound over time as capital flows into subsidy wars rather than innovation. The model's predictions are consistent with observed patterns in Chinese platform markets and suggest that effective antitrust intervention should target cross-market capital flows rather than prices.
https://arxiv.org/abs/2601.15303
Academic Papers
2026-01-23T00:00:00-05:00
An Explainable Market Integrity Monitoring System with Multi-Source Attention Signals and Transparent Scoring
arXiv:2601.15304v1 Announce Type: cross Abstract: Market integrity monitoring is difficult because suspicious price/volume behavior can arise from many benign mechanisms, while modern detection systems often rely on opaque models that are hard to audit and communicate. We present AIMM-X, an explainable monitoring pipeline that combines market microstructure-style signals derived from OHLCV time series with multi-source public attention signals (e.g., news and online discussion proxies) to surface time windows that merit analyst review. The system detects candidate anomalous windows using transparent thresholding and aggregation, then assigns an interpretable integrity score decomposed into a small set of additive components, allowing practitioners to trace why a window was flagged and which factors drove the score. We provide an end-to-end, reproducible implementation that downloads data, constructs attention features, builds unified panels, detects windows, computes component signals, and generates summary figures/tables. Our goal is not to label manipulation, but to provide a practical, auditable screening tool that supports downstream investigation by compliance teams, exchanges, or researchers.
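The transparent thresholding and additive score decomposition described above can be sketched as z-scored components summed under a fixed threshold, so an analyst can see which factor drove a flag. The component names, baselines, weights, and threshold below are illustrative assumptions, not AIMM-X's actual signals.

```python
import numpy as np

def integrity_score(window, baseline, weights=None):
    """Score a time window as a transparent sum of named components.

    Each component is a z-score of a window statistic against its baseline
    (mean, std); the total is additive, so the contribution of each factor
    is directly readable. Names and weights here are illustrative.
    """
    weights = weights or {"volume": 1.0, "return": 1.0, "attention": 1.0}
    comps = {}
    for name in weights:
        mu, sd = baseline[name]
        comps[name] = weights[name] * (window[name] - mu) / sd
    return sum(comps.values()), comps

baseline = {"volume": (1e6, 2e5), "return": (0.0, 0.01), "attention": (50, 10)}
window = {"volume": 1.9e6, "return": 0.035, "attention": 95}
total, comps = integrity_score(window, baseline)
flagged = total > 6.0   # transparent, auditable threshold
```

Because the score is a plain sum, "why was this window flagged" reduces to sorting `comps` by magnitude.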
https://arxiv.org/abs/2601.15304
Academic Papers
2026-01-23T00:00:00-05:00
Mind the Gap: Why Neural Memory Fails Under Semantic Density
arXiv:2601.15313v1 Announce Type: cross Abstract: The brain solves a problem that current AI architectures struggle to manage: storing specific episodic facts without corrupting general semantic knowledge. Neuroscience explains this through Complementary Learning Systems theory - a fast hippocampal system for episodic storage using pattern-separated representations, and a slow neocortical system for extracting statistical regularities. Current AI systems lack this separation, attempting both functions through neural weights alone. We identify the 'Stability Gap' in online neural memory: fast-weight mechanisms that write facts into shared continuous parameters collapse to near-random accuracy within tens of semantically related facts. Through semantic density (rho), we show collapse occurs with as few as N=5 facts at high density (rho > 0.6) or N ~ 20-75 at moderate density - a phenomenon we formalise as the Orthogonality Constraint. This failure persists even with perfect attention and unlimited context, arising from write-time interference when storage and retrieval share the same substrate. We also identify schema drift and version ambiguity as primary failure modes in production systems, observing 40-70% schema consistency and 0-100% clean correction rates. Context-based memory incurs 30-300% cost premium over selective retrieval. We propose Knowledge Objects (KOs): discrete, typed memory units with controlled vocabularies and explicit version chains. Paired with neural weights, KOs enable a true complementary learning architecture, suggesting reliable AI memory may require this bicameral design.
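The write-time interference behind the Stability Gap can be reproduced in a few lines with a linear fast-weight (outer-product) memory, a simplified stand-in for the mechanisms the paper analyzes: recall is exact when keys are orthogonal, but collapses once keys become semantically similar, even though nothing about the read-out changed.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

def write_and_recall(keys, values):
    """Write facts into one shared fast-weight matrix, then read them back."""
    A = np.zeros((d, d))
    for k, v in zip(keys, values):
        A += np.outer(v, k)                     # write step: shared parameters
    errs = [np.linalg.norm(A @ k - v) / np.linalg.norm(v)
            for k, v in zip(keys, values)]
    return float(np.mean(errs))                 # mean relative recall error

values = [rng.standard_normal(d) for _ in range(5)]

ortho = list(np.eye(d)[:5])                     # orthogonal keys: low density
dense = []                                      # similar keys: high density
base = rng.standard_normal(d)
base /= np.linalg.norm(base)
for _ in range(5):
    k = base + 0.1 * rng.standard_normal(d)     # keys clustered around `base`
    dense.append(k / np.linalg.norm(k))

print(write_and_recall(ortho, values))          # exact recall
print(write_and_recall(dense, values))          # large error from interference
```

With only five stored facts, the correlated-key recall error already exceeds the signal, which is the orthogonality constraint in miniature.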
https://arxiv.org/abs/2601.15313
Academic Papers
2026-01-23T00:00:00-05:00
Beyond the Einstein-Bohr Debate: Cognitive Complementarity and the Emergence of Quantum Intuition
arXiv:2601.15314v1 Announce Type: cross Abstract: Recent high-precision experimental confirmations of quantum complementarity have revitalized foundational debates about measurement, description, and realism. This article argues that complementarity is most productively interpreted as an epistemic principle--constraining what can be simultaneously accessed and represented--rather than as an ontological claim about quantum reality. Reexamining the Einstein-Bohr debate through this lens reveals a persistent tension between descriptive completeness and contextual meaning, a tension experiments clarify but do not dissolve. Building on this analysis, we introduce cognitive complementarity as a structural principle governing reasoning under non-classical uncertainty, where mutually constraining representations cannot be jointly optimized. Within this framework, we propose quantum intuition as a testable cognitive capacity: the ability to sustain representational plurality, regulate commitment timing, and resolve perspective-incompatibilities in a context-sensitive manner. Formulated as a naturalistic construct grounded in shared informational constraints, quantum intuition offers a principled bridge between quantum measurement theory and cognition. This work reframes the historical debate, extends epistemic lessons from quantum foundations into cognitive science, and outlines empirical pathways for studying decision-making in contexts of irreducible uncertainty.
https://arxiv.org/abs/2601.15314
Academic Papers
2026-01-23T00:00:00-05:00
The Impossibility of Cohesion Without Fragmentation
arXiv:2601.15317v1 Announce Type: cross Abstract: Most models in game theory and network formation implicitly assume that relations between agents are feasible whenever incentives are aligned or interaction opportunities exist. Under this premise, analytical attention is directed toward equilibrium efficiency or probabilistic link formation, while the possibility that a relation may be structurally infeasible is rarely examined. This paper develops a static axiomatic framework in which relation maintenance is treated as a problem of structural compatibility rather than strategic choice or stochastic realization. Agents occupy positions in an abstract space, and relations are subject to minimum conditions defined over these positions. A bifurcation event, such as a vote declaration or institutional assignment, fixes agents' positions and thereby determines which relations are compatible. We identify position-dependent gain axes as the key source of structural selectivity and prove an impossibility result: under any non-degenerate positional constraint, no bifurcation event can preserve all relations. Instead, the post-event network necessarily exhibits either the simultaneous emergence of fragmentation and cohesion or a degenerate trivial case in which constraints are position-independent. The result is purely structural and does not rely on preferences, beliefs, incentives, or dynamic adjustment. It establishes a fundamental limit on universally cohesive outcomes and reframes division not as a failure of design or coordination but as a logical consequence of positional constraints.
https://arxiv.org/abs/2601.15317
Academic Papers
2026-01-23T00:00:00-05:00
Large Language Models as Simulative Agents for Neurodivergent Adult Psychometric Profiles
arXiv:2601.15319v1 Announce Type: cross Abstract: Adult neurodivergence, including Attention-Deficit/Hyperactivity Disorder (ADHD), high-functioning Autism Spectrum Disorder (ASD), and Cognitive Disengagement Syndrome (CDS), is marked by substantial symptom overlap that limits the discriminant sensitivity of standard psychometric instruments. While recent work suggests that Large Language Models (LLMs) can simulate human psychometric responses from qualitative data, it remains unclear whether they can accurately and stably model neurodevelopmental traits rather than broad personality characteristics. This study examines whether LLMs can generate psychometric responses that approximate those of real individuals when grounded in a structured qualitative interview, and whether such simulations are sensitive to variations in trait intensity. Twenty-six adults completed a 29-item open-ended interview and four standardized self-report measures (ASRS, BAARS-IV, AQ, RAADS-R). Two LLMs (GPT-4o and Qwen3-235B-A22B) were prompted to infer an individual psychological profile from interview content and then respond to each questionnaire in-role. Accuracy, reliability, and sensitivity were assessed using group-level comparisons, error metrics, exact-match scoring, and a randomized baseline. Both models outperformed random responses across instruments, with GPT-4o showing higher accuracy and reproducibility. Simulated responses closely matched human data for ASRS, BAARS-IV, and RAADS-R, while the AQ revealed subscale-specific limitations, particularly in Attention to Detail. Overall, the findings indicate that interview-grounded LLMs can produce coherent and above-chance simulations of neurodevelopmental traits, supporting their potential use as synthetic participants in early-stage psychometric research, while highlighting clear domain-specific constraints.
https://arxiv.org/abs/2601.15319
Academic Papers
2026-01-23T00:00:00-05:00
Analysis of the Ventriloquism Aftereffect Using Network Theory Techniques
arXiv:2601.15321v1 Announce Type: cross Abstract: The ventriloquism after-effect is the phenomenon where sustained exposure to the ventriloquist illusion causes a change in unisensory auditory localization towards the location where the visual stimulus was present. We investigate the recalibration in EEG networks that causes this change and track the timeline of changes in the auditory processing pathway. Our results, obtained using network analysis, non-stationary time series analysis, and multivariate pattern classification, show that recalibration takes place early in the auditory processing pathway and that the after-effect decays with time after exposure to the illusion.
https://arxiv.org/abs/2601.15321
Academic Papers
2026-01-23T00:00:00-05:00
ECGomics: An Open Platform for AI-ECG Digital Biomarker Discovery
arXiv:2601.15326v1 Announce Type: cross Abstract: Background: Conventional electrocardiogram (ECG) analysis faces a persistent dichotomy: expert-driven features ensure interpretability but lack sensitivity to latent patterns, while deep learning offers high accuracy but functions as a black box with high data dependency. We introduce ECGomics, a systematic paradigm and open-source platform for the multidimensional deconstruction of cardiac signals into digital biomarkers. Methods: Inspired by the taxonomic rigor of genomics, ECGomics deconstructs cardiac activity across four dimensions: Structural, Intensity, Functional, and Comparative. This taxonomy synergizes expert-defined morphological rules with data-driven latent representations, effectively bridging the gap between handcrafted features and deep learning embeddings. Results: We operationalized this framework into a scalable ecosystem consisting of a web-based research platform and a mobile-integrated solution (https://github.com/PKUDigitalHealth/ECGomics). The web platform facilitates high-throughput analysis via precision parameter configuration, high-fidelity data ingestion, and 12-lead visualization, allowing for the systematic extraction of biomarkers across the four ECGomics dimensions. Complementarily, the mobile interface, integrated with portable sensors and a cloud-based engine, enables real-time signal acquisition and near-instantaneous delivery of structured diagnostic reports. This dual-interface architecture successfully transitions ECGomics from theoretical discovery to decentralized, real-world health management, ensuring professional-grade monitoring in diverse clinical and home-based settings. Conclusion: ECGomics harmonizes diagnostic precision, interpretability, and data efficiency. By providing a deployable software ecosystem, this paradigm establishes a robust foundation for digital biomarker discovery and personalized cardiovascular medicine.
https://arxiv.org/abs/2601.15326
Academic Papers
2026-01-23T00:00:00-05:00
Learning Discrete Successor Transitions in Continuous Attractor Networks: Emergence, Limits, and Topological Constraints
arXiv:2601.15336v1 Announce Type: cross Abstract: Continuous attractor networks (CANs) are a well-established class of models for representing low-dimensional continuous variables such as head direction, spatial position, and phase. In canonical spatial domains, transitions along the attractor manifold are driven by continuous displacement signals, such as angular velocity, provided by sensorimotor systems external to the CAN itself. When such signals are not explicitly provided as dedicated displacement inputs, it remains unclear whether attractor-based circuits can reliably acquire recurrent dynamics that support stable state transitions, or whether alternative predictive strategies dominate. In this work, we present an experimental framework for training CANs to perform successor-like transitions between stable attractor states in the absence of externally provided displacement signals. We compare two recurrent topologies, a circular ring and a folded snake manifold, and systematically vary the temporal regime under which stability is evaluated. We find that, under short evaluation windows, networks consistently converge to impulse-driven associative solutions that achieve high apparent accuracy yet lack persistent attractor dynamics. Only when stability is explicitly enforced over extended free-run periods do genuine attractor-based transition dynamics emerge. This suggests that shortcut solutions are the default outcome of local learning in recurrent networks, while attractor dynamics represent a constrained regime rather than a generic result. Furthermore, we demonstrate that topology strictly limits the capacity for learned transitions. While the continuous ring topology achieves perfect stability over long horizons, the folded snake topology hits a geometric limit characterized by failure at manifold discontinuities, which neither curriculum learning nor basal ganglia-inspired gating can fully overcome.
https://arxiv.org/abs/2601.15336
Academic Papers
2026-01-23T00:00:00-05:00
Learning Nonlinear Heterogeneity in Physical Kolmogorov-Arnold Networks
arXiv:2601.15340v1 Announce Type: cross Abstract: Physical neural networks typically train linear synaptic weights while treating device nonlinearities as fixed. We show the opposite: by training the synaptic nonlinearity itself, as in Kolmogorov-Arnold Network (KAN) architectures, we achieve markedly higher task performance per physical resource and improved performance-parameter scaling than conventional linear weight-based networks, demonstrating the ability of KAN topologies to exploit reconfigurable nonlinear physical dynamics. We experimentally realise physical KANs in silicon-on-insulator devices we term 'Synaptic Nonlinear Elements' (SYNEs), operating at room temperature, 0.1-1 microampere currents, and 2 MHz speeds with no observed degradation over 10^13 measurements and months-long timescales. We demonstrate nonlinear function regression, classification, and prediction of Li-Ion battery dynamics from noisy real-world multi-sensor data. Physical KANs outperform equivalently-parameterised software multilayer perceptron networks across all tasks, with up to two orders of magnitude fewer parameters, and two orders of magnitude fewer devices than linear weight-based physical networks. These results establish learned physical nonlinearity as a hardware-native computational primitive for compact and efficient learning systems, and SYNE devices as effective substrates for heterogeneous nonlinear computing.
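The core idea, training the edge's nonlinearity rather than a scalar weight, can be sketched in software: a KAN-style edge expands its response in a small basis and fits the coefficients, whereas a single linear weight cannot represent the same curve. The radial basis and least-squares fit below are illustrative choices, not the SYNE device model.

```python
import numpy as np

# A KAN-style edge: the learnable object is the 1-D response function itself,
# expanded in a small radial basis, rather than a single linear weight.
centers = np.linspace(-3, 3, 8)
width = 0.8

def features(x):
    """Radial-basis expansion of a scalar input, shape (n, n_centers)."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))

x = np.linspace(-3, 3, 200)
y = np.sin(x)                        # target response of one edge

coef, *_ = np.linalg.lstsq(features(x), y, rcond=None)   # train the nonlinearity
kan_err = np.max(np.abs(features(x) @ coef - y))

w, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)       # best single linear weight
lin_err = np.max(np.abs(x * w[0] - y))
```

Eight trainable coefficients reproduce the curve closely, while the best possible linear weight leaves a large residual, which is the performance-per-parameter argument in one dimension.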
https://arxiv.org/abs/2601.15340
Academic Papers
2026-01-23T00:00:00-05:00
OmniSpectra: A Unified Foundation Model for Native Resolution Astronomical Spectra
arXiv:2601.15351v1 Announce Type: cross Abstract: We present OmniSpectra, the first native-resolution foundation model for astronomical spectra. Unlike traditional models, which are limited to fixed-length input sizes or configurations, OmniSpectra handles spectra of any length at their original size, without resampling or interpolation. Despite the large-scale spectroscopic data from diverse surveys fueling the rapid growth of astronomy, existing foundation models are limited to a fixed wavelength range and specific instruments. OmniSpectra is the first foundation model to learn simultaneously from multiple real-world spectral surveys with different configurations at a large scale. We achieve this by designing a novel architecture with adaptive patching across variable lengths, sinusoidal global wavelength encoding, local positional embeddings through depthwise convolution, and validity-aware self-attention masks, allowing the model to learn multi-scale spatial patterns while skipping attention for invalid patches. Even with limited training examples, OmniSpectra demonstrates excellent zero-shot generalization compared to methods tailored for specific tasks. This transfer learning capability makes the model state-of-the-art across various astronomy tasks, including source classification, redshift estimation, and property prediction for stars and galaxies. OmniSpectra reduces the need to train individual models for different tasks from scratch, establishing itself as a next-generation astronomy foundation model.
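The validity-aware self-attention mask can be sketched as an additive mask that assigns invalid patches a score of negative infinity before the softmax, so they receive exactly zero attention weight. The single-query form and the dimensions below are illustrative, not the model's actual configuration.

```python
import numpy as np

def masked_attention(q, K, V, valid):
    """Single-query attention that skips invalid patches via an additive mask."""
    scores = K @ q / np.sqrt(q.size)
    scores = np.where(valid, scores, -np.inf)   # invalid patches get -inf
    w = np.exp(scores - scores[valid].max())    # numerically stable softmax
    w = w / w.sum()
    return w, w @ V

rng = np.random.default_rng(1)
q = rng.standard_normal(8)
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
valid = np.array([True, True, False, True, False])
w, out = masked_attention(q, K, V, valid)       # w is exactly 0 on invalid patches
```

Because `exp(-inf)` is exactly zero, invalid patches neither receive nor contribute attention mass, matching the "skip attention for invalid patches" behavior described above.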
https://arxiv.org/abs/2601.15351
Academic Papers
2026-01-23T00:00:00-05:00
Statistical Reinforcement Learning in the Real World: A Survey of Challenges and Future Directions
arXiv:2601.15353v1 Announce Type: cross Abstract: Reinforcement learning (RL) has achieved remarkable success in real-world decision-making across diverse domains, including gaming, robotics, online advertising, public health, and natural language processing. Despite these advances, a substantial gap remains between RL research and its deployment in many practical settings. Two recurring challenges often underlie this gap. First, many settings offer limited opportunity for the agent to interact extensively with the target environment due to practical constraints. Second, many target environments often undergo substantial changes, requiring redesign and redeployment of RL systems (e.g., advancements in science and technology that change the landscape of healthcare delivery). Addressing these challenges and bridging the gap between basic research and application requires theory and methodology that directly inform the design, implementation, and continual improvement of RL systems in real-world settings. In this paper, we frame the application of RL in practice as a three-component process: (i) online learning and optimization during deployment, (ii) post- or between-deployment offline analyses, and (iii) repeated cycles of deployment and redeployment to continually improve the RL system. We provide a narrative review of recent advances in statistical RL that address these components, including methods for maximizing data utility for between-deployment inference, enhancing sample efficiency for online learning within-deployment, and designing sequences of deployments for continual improvement. We also outline future research directions in statistical RL that are use-inspired -- aiming for impactful application of RL in practice.
https://arxiv.org/abs/2601.15353
Academic Papers
2026-01-23T00:00:00-05:00
Q-Probe: Scaling Image Quality Assessment to High Resolution via Context-Aware Agentic Probing
arXiv:2601.15356v1 Announce Type: cross Abstract: Reinforcement Learning (RL) has empowered Multimodal Large Language Models (MLLMs) to achieve superior human preference alignment in Image Quality Assessment (IQA). However, existing RL-based IQA models typically rely on coarse-grained global views, failing to capture subtle local degradations in high-resolution scenarios. While emerging "Thinking with Images" paradigms enable multi-scale visual perception via zoom-in mechanisms, their direct adaptation to IQA induces spurious "cropping-implies-degradation" biases and misinterprets natural depth-of-field as artifacts. To address these challenges, we propose Q-Probe, the first agentic IQA framework designed to scale IQA to high resolution via context-aware probing. First, we construct Vista-Bench, a pioneering benchmark tailored for fine-grained local degradation analysis in high-resolution IQA settings. Furthermore, we propose a three-stage training paradigm that progressively aligns the model with human preferences, while simultaneously eliminating causal bias through a novel context-aware cropping strategy. Extensive experiments demonstrate that Q-Probe achieves state-of-the-art performance in high-resolution settings while maintaining superior efficacy across resolution scales.
https://arxiv.org/abs/2601.15356
Academic Papers
2026-01-23T00:00:00-05:00
High-Fidelity 3D Tooth Reconstruction by Fusing Intraoral Scans and CBCT Data via a Deep Implicit Representation
arXiv:2601.15358v1 Announce Type: cross Abstract: High-fidelity 3D tooth models are essential for digital dentistry, but must capture both the detailed crown and the complete root. Clinical imaging modalities are limited: Cone-Beam Computed Tomography (CBCT) captures the root but has a noisy, low-resolution crown, while Intraoral Scanners (IOS) provide a high-fidelity crown but no root information. A naive fusion of these sources results in unnatural seams and artifacts. We propose a novel, fully-automated pipeline that fuses CBCT and IOS data using a deep implicit representation. Our method first segments and robustly registers the tooth instances, then creates a hybrid proxy mesh combining the IOS crown and the CBCT root. The core of our approach is to use this noisy proxy to guide a class-specific DeepSDF network. This optimization process projects the input onto a learned manifold of ideal tooth shapes, generating a seamless, watertight, and anatomically coherent model. Qualitative and quantitative evaluations show our method uniquely preserves both the high-fidelity crown from IOS and the patient-specific root morphology from CBCT, overcoming the limitations of each modality and naive stitching.
https://arxiv.org/abs/2601.15358
Academic Papers
2026-01-23T00:00:00-05:00
Robust X-Learner: Breaking the Curse of Imbalance and Heavy Tails via Robust Cross-Imputation
arXiv:2601.15360v1 Announce Type: cross Abstract: Estimating Heterogeneous Treatment Effects (HTE) in industrial applications such as AdTech and healthcare presents a dual challenge: extreme class imbalance and heavy-tailed outcome distributions. While the X-Learner framework effectively addresses imbalance through cross-imputation, we demonstrate that it is fundamentally vulnerable to "Outlier Smearing" when reliant on Mean Squared Error (MSE) minimization. In this failure mode, the bias from a few extreme observations ("whales") in the minority group is propagated to the entire majority group during the imputation step, corrupting the estimated treatment effect structure. To resolve this, we propose the Robust X-Learner (RX-Learner). This framework integrates a redescending γ-divergence objective -- structurally equivalent to the Welsch loss under Gaussian assumptions -- into the gradient boosting machinery. We further stabilize the non-convex optimization using a Proxy Hessian strategy grounded in Majorization-Minimization (MM) principles. Empirical evaluation on a semi-synthetic Criteo Uplift dataset demonstrates that the RX-Learner reduces the Precision in Estimation of Heterogeneous Effect (PEHE) metric by 98.6% compared to the standard X-Learner, effectively decoupling the stable "Core" population from the volatile "Periphery".
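The redescending behavior that blocks "Outlier Smearing" can be seen directly from the Welsch loss named above: its derivative (the influence function) decays to zero for large residuals, so extreme "whale" outcomes contribute almost nothing to the imputation fit. A minimal sketch, with the scale c = 1 as an arbitrary choice:

```python
import numpy as np

def welsch_loss(r, c=1.0):
    """Welsch loss: bounded above by c**2 / 2, unlike the unbounded MSE."""
    return (c ** 2 / 2) * (1 - np.exp(-(r / c) ** 2))

def welsch_influence(r, c=1.0):
    """Derivative of the loss; redescends toward zero for large residuals."""
    return r * np.exp(-(r / c) ** 2)

r = np.array([0.5, 1.0, 5.0, 50.0])
print(welsch_influence(r))   # the two outliers contribute almost nothing
```

Under MSE the influence would be `2 * r`, growing without bound; here it peaks near `r = c / sqrt(2)` and then vanishes, which is what prevents minority-group outliers from biasing the majority-group imputation.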
https://arxiv.org/abs/2601.15360
Academic Papers
2026-01-23T00:00:00-05:00
USDs: A universal stabilizer decoder framework using symmetry
arXiv:2601.15361v1 Announce Type: cross Abstract: Quantum error correction is indispensable to achieving reliable quantum computation. When quantum information is encoded redundantly, a larger Hilbert space is constructed using multiple physical qubits, and the computation is performed within a designated subspace. When applying deep learning to the decoding of quantum error-correcting codes, a key challenge arises from the non-uniqueness between the syndrome measurements provided to the decoder and the corresponding error patterns that constitute the ground-truth labels. Building upon prior work that addressed this issue for the toric code by re-optimizing the decoder with respect to the symmetry inherent in the parity-check structure, we generalize this approach to arbitrary stabilizer codes. In our experiments, we employed multilayer perceptrons to approximate continuous functions that complement the syndrome measurements of the Color code and the Golay code. Using these models, we performed decoder re-optimization for each code. For the Color code, we achieved an improvement of approximately 0.8% in decoding accuracy at a physical error rate of 5%, while for the Golay code the accuracy increased by about 0.1%. Furthermore, from the evaluation of the geometric and algebraic structures in the continuous function approximation for each code, we showed that the design of generalized continuous functions is advantageous for learning the geometric structure inherent in the code. Our results also indicate that approximations that faithfully reproduce the code structure can have a significant impact on the effectiveness of reoptimization. This study demonstrates that the re-optimization technique previously shown to be effective for the Toric code can be generalized to address the challenge of label degeneracy that arises when applying deep learning to the decoding of stabilizer codes.
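The label degeneracy the paper addresses, that one syndrome is consistent with several distinct error patterns, can be shown on the smallest stabilizer code, the 3-qubit bit-flip repetition code (our illustrative example, not one of the codes studied above):

```python
import numpy as np

# Parity-check matrix of the 3-qubit bit-flip repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(error):
    """Syndrome of a bit-flip pattern: all the decoder actually observes."""
    return H @ error % 2

# Two distinct error patterns with one shared syndrome: the ground-truth
# label a neural decoder should learn from the syndrome is not unique.
e1 = np.array([1, 0, 0])    # single flip on qubit 0
e2 = np.array([0, 1, 1])    # flips on qubits 1 and 2
print(syndrome(e1), syndrome(e2))   # identical syndromes
```

A supervised decoder trained on (syndrome, error) pairs therefore sees contradictory labels for the same input, which is the non-uniqueness that the symmetry-based re-optimization above is designed to handle.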
https://arxiv.org/abs/2601.15361
Academic Papers
2026-01-23T00:00:00-05:00
Non-Stationary Functional Bilevel Optimization
arXiv:2601.15363v1 Announce Type: cross Abstract: Functional bilevel optimization (FBO) provides a powerful framework for hierarchical learning in function spaces, yet current methods are limited to static offline settings and perform suboptimally in online, non-stationary scenarios. We propose SmoothFBO, the first algorithm for non-stationary FBO with both theoretical guarantees and practical scalability. SmoothFBO introduces a time-smoothed stochastic hypergradient estimator that reduces variance through a window parameter, enabling stable outer-loop updates with sublinear regret. Importantly, the classical parametric bilevel case is a special reduction of our framework, making SmoothFBO a natural extension to online, non-stationary settings. Empirically, SmoothFBO consistently outperforms existing FBO methods in non-stationary hyperparameter optimization and model-based reinforcement learning, demonstrating its practical effectiveness. Together, these results establish SmoothFBO as a general, theoretically grounded, and practically viable foundation for bilevel optimization in online, non-stationary scenarios.
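The time-smoothed hypergradient estimator can be sketched as a sliding-window average of noisy gradient samples, which cuts estimator variance roughly by the window size. The i.i.d. noise model below is a simplifying assumption for illustration; SmoothFBO's analysis also covers non-stationary drift.

```python
import numpy as np

def smoothed_gradients(noisy_grads, window):
    """Sliding-window average of the most recent `window` gradient samples."""
    out = []
    for t in range(len(noisy_grads)):
        lo = max(0, t - window + 1)
        out.append(np.mean(noisy_grads[lo:t + 1]))
    return np.array(out)

rng = np.random.default_rng(0)
true_grad = 2.0
noisy = true_grad + rng.standard_normal(2000)   # unit-variance gradient noise
smooth = smoothed_gradients(noisy, window=10)

print(noisy.var(), smooth[20:].var())           # variance drops by ~1/window
```

The window parameter trades variance against lag: a larger window gives steadier updates but reacts more slowly when the underlying problem drifts, which is the tension the regret analysis addresses.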
https://arxiv.org/abs/2601.15363
Academic Papers
2026-01-23T00:00:00-05:00
OpenVision 3: A Family of Unified Visual Encoder for Both Understanding and Generation
arXiv:2601.15369v1 Announce Type: cross Abstract: This paper presents a family of advanced vision encoders, named OpenVision 3, that learn a single, unified visual representation able to serve both image understanding and image generation. Our core architecture is simple: we feed VAE-compressed image latents to a ViT encoder and train its output to support two complementary roles. First, the encoder output is passed to the ViT-VAE decoder to reconstruct the original image, encouraging the representation to capture generative structure. Second, the same representation is optimized with contrastive learning and image-captioning objectives, strengthening semantic features. By jointly optimizing reconstruction- and semantics-driven signals in a shared latent space, the encoder learns representations that synergize and generalize well across both regimes. We validate this unified design through extensive downstream evaluations with the encoder frozen. For multimodal understanding, we plug the encoder into the LLaVA-1.5 framework: it performs comparably with a standard CLIP vision encoder (e.g., 62.4 vs 62.2 on SeedBench, and 83.7 vs 82.9 on POPE). For generation, we test it under the RAE framework: ours substantially surpasses the standard CLIP-based encoder (e.g., gFID: 1.89 vs 2.54 on ImageNet). We hope this work can spur future research on unified modeling.
https://arxiv.org/abs/2601.15369
Academic Papers
svg
75ac4888f2e98ff2fe51ae1ddbea37d29c5ff3f7bcc50c3cb7fb2808ea5d2dfb
2026-01-23T00:00:00-05:00
The computational two-way quantum capacity
arXiv:2601.15393v1 Announce Type: cross Abstract: Quantum channel capacities are fundamental to quantum information theory. Their definition, however, does not limit the computational resources of sender and receiver. In this work, we initiate the study of computational quantum capacities. These quantify how much information can be reliably transmitted when imposing the natural requirement that encoding and decoding have to be computationally efficient. We focus on the computational two-way quantum capacity and showcase that it is closely related to the computational distillable entanglement of the Choi state of the channel. This connection allows us to show a stark computational capacity separation. Under standard cryptographic assumptions, there exists a quantum channel of polynomial complexity whose computational two-way quantum capacity vanishes while its unbounded counterpart is nearly maximal. Moreover, we show that there exists a sharp transition in computational quantum capacity from nearly maximal to zero when the channel complexity leaves the polynomial realm. Our results demonstrate that the natural requirement of computational efficiency can radically alter the limits of quantum communication.
https://arxiv.org/abs/2601.15393
Academic Papers
svg
57985ae0d8c5c24effee688bf76c94434cc72b1ff4c23a2b2f5bdc1f5f396cb8
2026-01-23T00:00:00-05:00
ISAC-over-NTN: HAPS-UAV Framework for Post-Disaster Responsive 6G Networks
arXiv:2601.15422v1 Announce Type: cross Abstract: In disaster scenarios, ensuring both reliable communication and situational awareness becomes a critical challenge due to the partial or complete collapse of terrestrial networks. This paper proposes an integrated sensing and communication (ISAC) over non-terrestrial networks (NTN) architecture referred to as ISAC-over-NTN that integrates multiple uncrewed aerial vehicles (UAVs) and a high-altitude platform station (HAPS) to maintain resilient and reliable network operations in post-disaster conditions. We aim to achieve two main objectives: i) provide a reliable communication infrastructure, thereby ensuring the continuity of search-and-rescue activities and connecting people to their loved ones, and ii) detect users, such as those trapped under rubble or those who are mobile, using a Doppler-based mobility detection model. We employ an innovative beamforming method that simultaneously transmits data and detects Doppler-based mobility by integrating multi-user multiple-input multiple-output (MU-MIMO) communication and monostatic sensing within the same transmission chain. The results show that the proposed framework maintains reliable connectivity and achieves high detection accuracy of users in critical locations, reaching 90% motion detection sensitivity and 88% detection accuracy.
https://arxiv.org/abs/2601.15422
Academic Papers
svg
c0d30a92dbba18d18318d554dac34fcd1e10b5ed8a1e545fd7dfa519aa69c384
2026-01-23T00:00:00-05:00
Low-Dimensional Adaptation of Rectified Flow: A New Perspective through the Lens of Diffusion and Stochastic Localization
arXiv:2601.15500v1 Announce Type: cross Abstract: In recent years, Rectified flow (RF) has gained considerable popularity largely due to its generation efficiency and state-of-the-art performance. In this paper, we investigate the degree to which RF automatically adapts to the intrinsic low dimensionality of the support of the target distribution to accelerate sampling. We show that, using a carefully designed choice of the time-discretization scheme and with sufficiently accurate drift estimates, the RF sampler enjoys an iteration complexity of order $O(k/\varepsilon)$ (up to log factors), where $\varepsilon$ is the precision in total variation distance and $k$ is the intrinsic dimension of the target distribution. In addition, we show that the denoising diffusion probabilistic model (DDPM) procedure is equivalent to a stochastic version of RF by establishing a novel connection between these processes and stochastic localization. Building on this connection, we further design a stochastic RF sampler that also adapts to the low-dimensionality of the target distribution under milder requirements on the accuracy of the drift estimates, and also with a specific time schedule. We illustrate the improved performance of the proposed samplers, which implement the newly designed time-discretization schedules, with simulations on synthetic data and text-to-image experiments.
https://arxiv.org/abs/2601.15500
Academic Papers
svg
4752677c96c1046df0c71ae7c2e4b1e725889fd545b0532e7a5c1ccd7b2a2d73
2026-01-23T00:00:00-05:00
Applicability and Limitation Analysis of PMU Data and Phasor Concept for Low- and High- Frequency Oscillations
arXiv:2601.15529v1 Announce Type: cross Abstract: Phasor Measurement Units (PMUs) convert high-speed waveform data into low-speed phasor data, which are fundamental to wide-area monitoring and control in power systems, with oscillation detection and localization among their most prominent applications. However, representing electrical waveform signals with oscillations using PMU phasors is effective only for low-frequency oscillations. This paper investigates the root causes of this limitation, focusing on errors introduced by Discrete Fourier Transform (DFT)-based signal processing, in addition to the attenuation effects of anti-aliasing filters, and the impact of low reporting rates. To better represent and estimate waveform signals with oscillations, we propose a more general signal model and a multi-step estimation method that leverages one-cycle DFT, the Matrix Pencil Method, and the Least Squares Method. Numerical experiments demonstrate the superior performance of the proposed signal model and estimation method. Furthermore, this paper reveals that the phasor concept, let alone PMU phasors, can become invalid for waveform signals with high-frequency oscillations characterized by asymmetric sub- and super-synchronous components. These findings highlight the fundamental limitations of PMU data and the phasor concept, and emphasize the need to rely on waveform data for analyzing high-frequency oscillations in modern power systems.
https://arxiv.org/abs/2601.15529
Academic Papers
svg
48903d2185e4f902d0ded2df5eb84bb6f9fed4ae0017ab77c9d6898aba8d2277
2026-01-23T00:00:00-05:00
A Machine Vision Approach to Preliminary Skin Lesion Assessments
arXiv:2601.15539v1 Announce Type: cross Abstract: Early detection of malignant skin lesions is critical for improving patient outcomes in aggressive, metastatic skin cancers. This study evaluates a comprehensive system for preliminary skin lesion assessment that combines the clinically established ABCD rule of dermoscopy (analyzing Asymmetry, Borders, Color, and Dermoscopic Structures) with machine learning classification. Using a 1,000-image subset of the HAM10000 dataset, the system implements an automated, rule-based pipeline to compute a Total Dermoscopy Score (TDS) for each lesion. This handcrafted approach is compared against various machine learning solutions, including traditional classifiers (Logistic Regression, Random Forest, and SVM) and deep learning models. While the rule-based system provides high clinical interpretability, results indicate a performance bottleneck when reducing complex morphology to five numerical features. Experimental findings show that transfer learning with EfficientNet-B0 failed significantly due to domain shift between natural and medical images. In contrast, a custom three-layer Convolutional Neural Network (CNN) trained from scratch achieved 78.5% accuracy and 86.5% recall on median-filtered images, representing a 19-point accuracy improvement over traditional methods. The results demonstrate that direct pixel-level learning captures diagnostic patterns beyond handcrafted features and that purpose-built lightweight architectures can outperform large pretrained models for small, domain-specific medical datasets.
https://arxiv.org/abs/2601.15539
Academic Papers
svg
aa68a7cbe6dc17d100809d3003a8893746aca15a3b355142002fb481a035c3c3
2026-01-23T00:00:00-05:00
Stabilizing Welfare-Maximizing Decisions via Endogenous Transfers
arXiv:2601.15563v1 Announce Type: cross Abstract: Many multiagent systems rely on collective decision-making among self-interested agents, which raises deep questions about coalition formation and stability. We study social choice with endogenous, outcome-contingent transfers, where agents voluntarily form contracts that redistribute utility depending on the collective decision, allowing fully strategic, incentive-aligned coalition formation. We show that under consensus rules, individually rational strong Nash equilibria (IR-SNE) always exist, implementing welfare-maximizing outcomes with feasible transfers, and provide a simple, efficient algorithm to construct them. For more general anonymous, monotonic, and resolute rules, we identify necessary conditions for profitable deviations, sharply limiting destabilizing coalitions. By bridging cooperative and noncooperative perspectives, our approach shows that transferable utility can achieve core-like stability, restoring efficiency and budget balance even where classical impossibility results apply. Overall, this framework offers a practical and robust way to coordinate large-scale strategic multiagent systems.
https://arxiv.org/abs/2601.15563
Academic Papers
svg
394d7fbbef7c30884e70b79d74648012520961ed1c8650aa1cdb2f6297cc36e3
2026-01-23T00:00:00-05:00
FUGC: Benchmarking Semi-Supervised Learning Methods for Cervical Segmentation
arXiv:2601.15572v1 Announce Type: cross Abstract: Accurate segmentation of cervical structures in transvaginal ultrasound (TVS) is critical for assessing the risk of spontaneous preterm birth (PTB), yet the scarcity of labeled data limits the performance of supervised learning approaches. This paper introduces the Fetal Ultrasound Grand Challenge (FUGC), the first benchmark for semi-supervised learning in cervical segmentation, hosted at ISBI 2025. FUGC provides a dataset of 890 TVS images, including 500 training images, 90 validation images, and 300 test images. Methods were evaluated using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and runtime (RT), with a weighted combination of 0.4/0.4/0.2. The challenge attracted 10 teams with 82 participants submitting innovative solutions. The best-performing methods for each individual metric achieved 90.26% mDSC, 38.88 mHD, and 32.85 ms RT, respectively. FUGC establishes a standardized benchmark for cervical segmentation, demonstrates the efficacy of semi-supervised methods with limited labeled data, and provides a foundation for AI-assisted clinical PTB risk assessment.
https://arxiv.org/abs/2601.15572
Academic Papers
svg
6dd4e5f819507c81c9d0adb52c8a8bc3a1b88ef4abddc5d4f5549480adbe0ea1
2026-01-23T00:00:00-05:00
Screening for Choice Sets
arXiv:2601.15580v1 Announce Type: cross Abstract: We study a screening problem in which an agent privately observes a set of feasible technologies and can strategically disclose only a subset to the principal. The principal then takes an action whose payoff consequences for both players are publicly known. Under the assumption that the possible technology sets are ordered by set inclusion, we show that the optimal mechanism promises the agent a utility that is weakly increasing as the reported set expands, and the choice of the principal maximizes her own utility subject to this promised utility constraint. Moreover, the optimal promised utility either coincides with the agent's utility under the complete information benchmark or remains locally constant, with the number of constant segments bounded by the number of downward-sloping segments of the complete information benchmark.
https://arxiv.org/abs/2601.15580
Academic Papers
svg
1738a446c87082a42fab20c04d48971fb38f510c0395dcc675b475e806137c1e
2026-01-23T00:00:00-05:00
Does 6G Need a New Waveform: Comparing Zak-OTFS with CP-OFDM
arXiv:2601.15602v1 Announce Type: cross Abstract: Across the world, there is growing interest in new waveforms, Zak-OTFS in particular, and over-the-air implementations are starting to appear. The choice between OFDM and Zak-OTFS is not so much a choice between waveforms as it is an architectural choice between preventing inter-carrier interference (ICI) and embracing ICI. In OFDM, once the Input-Output (I/O) relation is known, equalization is relatively simple, at least when there is no ICI. However, in the presence of ICI the I/O relation is non-predictable and its acquisition is non-trivial. In contrast, equalization is more involved in Zak-OTFS due to inter-symbol-interference (ISI), however the I/O relation is predictable and its acquisition is simple. Zak-OTFS exhibits superior performance in doubly-spread 6G use cases with high delay/Doppler channel spreads (i.e., high mobility and/or large cells), but architectural choice is governed by the typical use case, today and in the future. What is typical depends to some degree on geography, since large delay spread is a characteristic of large cells which are the rule rather than the exception in many important wireless markets. This paper provides a comprehensive performance comparison of cyclic prefix OFDM (CP-OFDM) and Zak-OTFS across the full range of 6G propagation environments. The performance results provide insights into the fundamental architectural choice.
https://arxiv.org/abs/2601.15602
Academic Papers
svg
ebd7554a477129c3a8cef6533ba889c6ef80d694dc571f5a3379d7c537561846
2026-01-23T00:00:00-05:00
On the Nonasymptotic Scaling Guarantee of Hyperparameter Estimation in Inhomogeneous, Weakly-Dependent Complex Network Dynamical Systems
arXiv:2601.15603v1 Announce Type: cross Abstract: Hierarchical Bayesian models are increasingly used in large, inhomogeneous complex network dynamical systems by modeling parameters as draws from a hyperparameter-governed distribution. However, theoretical guarantees for these estimates as the system size grows have been lacking. A critical concern is that hyperparameter estimation may diverge for larger networks, undermining the model's reliability. Formulating the system's evolution in a measure transport perspective, we propose a theoretical framework for estimating hyperparameters with mean-type observations, which are prevalent in many scientific applications. Our primary contribution is a nonasymptotic bound on the deviation of the hyperparameter estimates in inhomogeneous complex network dynamical systems with respect to network population size, which is established for a general family of optimization algorithms within a fixed observation duration. While we first establish a consistency result for systems with independent nodes, our main result extends this guarantee to the more challenging and realistic setting of weakly-dependent nodes. We validate our theoretical findings with numerical experiments on two representative models: a Susceptible-Infected-Susceptible model and a Spiking Neuronal Network model. In both cases, the results confirm that the estimation error decreases as the network population size increases, aligning with our theoretical guarantees. This research proposes the foundational theory to ensure that hierarchical Bayesian methods are statistically consistent for large-scale inhomogeneous systems, filling a gap in this area of theoretical research and justifying their application in practice.
https://arxiv.org/abs/2601.15603
Academic Papers
svg
ce440d91b75be33280ba9f992f4028192e26e0104a70f555b3e865c67945232a
2026-01-23T00:00:00-05:00
Machine Failure Detection Based on Projected Quantum Models
arXiv:2601.15641v1 Announce Type: cross Abstract: Detecting machine failures promptly is of utmost importance in industry for maintaining efficiency and minimizing downtime. This paper introduces a failure detection algorithm based on quantum computing and a statistical change-point detection approach. Our method leverages the potential of projected quantum feature maps to enhance the precision of anomaly detection in machine monitoring systems. We empirically validate our approach on benchmark multi-dimensional time series datasets as well as on a real-world dataset comprising IoT sensor readings from operational machines, ensuring the practical relevance of our study. The algorithm was executed on IBM's 133-qubit Heron quantum processor, demonstrating the feasibility of integrating quantum computing into industrial maintenance procedures. The presented results underscore the effectiveness of our quantum-based failure detection system, showcasing its capability to accurately identify anomalies in noisy time series data. This work not only highlights the potential of quantum computing in industrial diagnostics but also paves the way for more sophisticated quantum algorithms in the realm of predictive maintenance.
https://arxiv.org/abs/2601.15641
Academic Papers
svg
ab8d2ccf061b82c9a0e39fba62aa9d42ede6e973874966135c9349600f21f2b4
2026-01-23T00:00:00-05:00
Algebraic Statistics in OSCAR
arXiv:2601.15807v1 Announce Type: cross Abstract: We introduce the AlgebraicStatistics section of the OSCAR computer algebra system. We give an overview of its extensible design and highlight its features including serialization of data types for sharing results and creating databases, and state-of-the-art implicitization algorithms.
https://arxiv.org/abs/2601.15807
Academic Papers
svg
20b2072b219b9231833aa49de70df4ef403763abb749e3c0512f9a9ff1a5d61c
2026-01-23T00:00:00-05:00
Quantum Coherence Spaces Revisited: A von Neumann (Co)Algebraic Approach
arXiv:2601.15832v1 Announce Type: cross Abstract: We describe a categorical model of MALL (Multiplicative Additive Linear Logic) inspired by the Heisenberg-Schr\"odinger duality of finite-dimensional quantum theory. Proofs of formulas with positive logical polarity correspond to CPTP (completely positive trace-preserving) maps in our model, i.e. the quantum operations in the Schr\"odinger picture, whereas proofs of formulas with negative logical polarity correspond to CPU (completely positive unital) maps, i.e. the quantum operations in the Heisenberg picture. The mathematical development is based on noncommutative geometry and finite-dimensional von Neumann (co)algebras, which can be defined as special kinds of (co)monoid objects internal to the category of finite-dimensional operator spaces.
https://arxiv.org/abs/2601.15832
Academic Papers
svg
0854b4a25857e5d4a202aa28431e97f2c06cd45e477b893f1f7fffec00bd81b4
2026-01-23T00:00:00-05:00
A Stabilized Hybrid Active Noise Control Algorithm of GFANC and FxNLMS with Online Clustering
arXiv:2601.15889v1 Announce Type: cross Abstract: The Filtered-x Normalized Least Mean Square (FxNLMS) algorithm suffers from slow convergence and a risk of divergence, although it can achieve low steady-state errors after sufficient adaptation. In contrast, the Generative Fixed-Filter Active Noise Control (GFANC) method offers fast response speed, but its lack of adaptability may lead to large steady-state errors. This paper proposes a hybrid GFANC-FxNLMS algorithm to leverage the complementary advantages of both approaches. In the hybrid GFANC-FxNLMS algorithm, GFANC provides a frame-level control filter as an initialization for FxNLMS, while FxNLMS performs continuous adaptation at the sampling rate. Small variations in the GFANC-generated filter may repeatedly reinitialize FxNLMS, interrupting its adaptation process and destabilizing the system. An online clustering module is introduced to avoid unnecessary re-initializations and improve system stability. Simulation results show that the proposed algorithm achieves fast response, very low steady-state error, and high stability, requiring only one pre-trained broadband filter.
https://arxiv.org/abs/2601.15889
Academic Papers
svg
7aa074868e394471bdac7ab1c2b10f3780de87df7ce2a395dcd5898037ef8b13
2026-01-23T00:00:00-05:00
Progressive Power Homotopy for Non-convex Optimization
arXiv:2601.15915v1 Announce Type: cross Abstract: We propose a novel first-order method for non-convex optimization of the form $\max_{\bm{w}\in\mathbb{R}^d}\mathbb{E}_{\bm{x}\sim\mathcal{D}}[f_{\bm{w}}(\bm{x})]$, termed Progressive Power Homotopy (Prog-PowerHP). The method applies stochastic gradient ascent to a surrogate objective obtained by first performing a power transformation and then Gaussian smoothing, $F_{N,\sigma}(\bm{\mu}):=\mathbb{E}_{\bm{w}\sim\mathcal{N}(\bm{\mu},\sigma^2I_d),\bm{x}\sim\mathcal{D}}[e^{N f_{\bm{w}}(\bm{x})}]$, while progressively increasing the power parameter $N$ and decreasing the smoothing scale $\sigma$ along the optimization trajectory. We prove that, under mild regularity conditions, Prog-PowerHP converges to a small neighborhood of the global optimum with an iteration complexity scaling nearly as $O(d^2\varepsilon^{-2})$. Empirically, Prog-PowerHP demonstrates clear advantages in phase retrieval when the samples-to-dimension ratio approaches the information-theoretic limit, and in training two-layer neural networks in under-parameterized regimes. These results suggest that Prog-PowerHP is particularly effective for navigating cluttered non-convex landscapes where standard first-order methods struggle.
https://arxiv.org/abs/2601.15915
Academic Papers
svg
f2b6ae748185a87aaf0bcad0bce8d3d58d83f7e67f6bb13134e16c7d7e453c38
2026-01-23T00:00:00-05:00
An Efficient Algorithm to Generate all Labeled Triangle-free Graphs with a given Graphical Degree Sequence
arXiv:2601.15943v1 Announce Type: cross Abstract: We extend our previous algorithm that generates all labeled graphs with a given graphical degree sequence to generate all labeled triangle-free graphs with a given graphical degree sequence. The algorithm uses various pruning techniques to avoid having to first generate all labeled realizations of the input sequence and then testing whether each labeled realization is triangle-free. It can be further extended to generate all labeled bipartite graphs with a given graphical degree sequence by adding a simple test whether each generated triangle-free realization is a bipartite graph. All output graphs are generated in the lexicographical ordering as in the original algorithm. The algorithms can also be easily parallelized.
https://arxiv.org/abs/2601.15943
Academic Papers
svg
364b36d548651b590b73e38b6c0ae714bed894a4a6d41f77dd3c8c16173375d5
2026-01-23T00:00:00-05:00
Performance Scaling Laws for PD Array-based Receivers in IM/DD Optical Wireless Communication Systems
arXiv:2601.15973v1 Announce Type: cross Abstract: We study the performance scaling laws for electrical-domain combining in photodetector (PD) array-based receivers employing intensity modulation and direct detection, taking into account the inherent square-law relationship between the optical and electrical received powers. The performance of PD array-based systems is compared, in terms of signal-to-noise ratio (SNR) and achievable rate, to that of a reference receiver employing a single PD. Analytical and numerical results show that PD arrays provide performance gains for sufficiently narrow beams and above an SNR threshold. Furthermore, increasing the number of PDs alone does not enhance performance, and joint optimization of beam pattern, transverse electromagnetic mode, received power, and PD positions is necessary. Our model and derived insights provide practical guidelines and highlight the trade-offs for the design of next-generation high-bandwidth PD array receivers.
https://arxiv.org/abs/2601.15973
Academic Papers
svg
c90fe58e1efdbfc2d00c02f9460c934ca93bf33a5013cc49c52ff12418aba250
2026-01-23T00:00:00-05:00
Time-Optimal Switching Surfaces for Triple Integrator under Full Box Constraints
arXiv:2601.16003v1 Announce Type: cross Abstract: Time-optimal control for a triple integrator under full box constraints is a fundamental problem in the field of optimal control, which has been widely applied in industry. However, scenarios involving asymmetric constraints, non-stationary boundary conditions, and active position constraints pose significant challenges. This paper provides a complete characterization of time-optimal switching surfaces for the problem, leading to novel insights into the geometric and algebraic structure of the optimal control. The active condition of position constraints is derived, which is absent from the literature. An efficient algorithm is proposed, capable of planning time-optimal trajectories under asymmetric full constraints and arbitrary boundary states, with a 100% success rate. Computational time for each trajectory is within approximately 10 μs, achieving a 5-order-of-magnitude reduction compared to optimization-based baselines.
https://arxiv.org/abs/2601.16003
Academic Papers
svg
5cdf0f837a348ff1b3b8077826d91b5bf5fd68b1f41d1df35a5c6e95dd36963e
2026-01-23T00:00:00-05:00
Wigner's Friend as a Circuit: Inter-Branch Communication Witness Benchmarks on Superconducting Quantum Hardware
arXiv:2601.16004v1 Announce Type: cross Abstract: We implement and benchmark on IBM Quantum hardware the circuit family proposed by Violaris for estimating operational inter-branch communication witnesses, defined as correlations in classical measurement records produced by compiled Wigner's-friend-style circuits. We realize a five-qubit instance of the protocol as an inter-register message-transfer pattern within a single circuit, rather than physical signaling, and evaluate its behavior under realistic device noise and compilation constraints. The circuit encodes branch-conditioned evolution of an observer subsystem whose dynamics depend on a control qubit, followed by a controlled transfer operation that probes correlations between conditional measurement contexts. Executing on the ibm_fez backend with 20000 shots, we observe population-based visibility of 0.877, coherence witnesses of 0.840 and -0.811 along orthogonal axes, and a phase-sensitive magnitude of approximately 1.17. While the visibility metric is insensitive to some classes of dephasing, the coherence witnesses provide complementary sensitivity to off-diagonal noise. This work does not test or discriminate among interpretations of quantum mechanics. Instead, it provides a reproducible operational constraint pipeline for evaluating detectability of non-ideal channels relative to calibrated device noise.
https://arxiv.org/abs/2601.16004
Academic Papers
svg
88b2ebb1703ea6dfe387d35b728db3340c1d1901c21032c9c8957a3b134323dd
2026-01-23T00:00:00-05:00
THOR: A Versatile Foundation Model for Earth Observation Climate and Society Applications
arXiv:2601.16011v1 Announce Type: cross Abstract: Current Earth observation foundation models are architecturally rigid, struggle with heterogeneous sensors and are constrained to fixed patch sizes. This limits their deployment in real-world scenarios requiring flexible compute-accuracy trade-offs. We propose THOR, a "compute-adaptive" foundation model that solves both input heterogeneity and deployment rigidity. THOR is the first architecture to unify data from Copernicus Sentinel-1, -2, and -3 (OLCI & SLSTR) satellites, processing their native 10 m to 1000 m resolutions in a single model. We pre-train THOR with a novel randomized patch and input image size strategy. This allows a single set of pre-trained weights to be deployed at inference with any patch size, enabling a dynamic trade-off between computational cost and feature resolution without retraining. We pre-train THOR on THOR Pretrain, a new, large-scale multi-sensor dataset and demonstrate state-of-the-art performance on downstream benchmarks, particularly in data-limited regimes like the PANGAEA 10% split, validating that THOR's flexible feature generation excels for diverse climate and society applications.
https://arxiv.org/abs/2601.16011
Academic Papers
svg
a2ec204618f3d83af7e11fb63fac3f116bdf991946c9e3f23f42f5167fd2b106
2026-01-23T00:00:00-05:00
Timbre-Aware LLM-based Direct Speech-to-Speech Translation Extendable to Multiple Language Pairs
arXiv:2601.16023v1 Announce Type: cross Abstract: Direct Speech-to-Speech Translation (S2ST) has gained increasing attention for its ability to translate speech from one language to another, while reducing error propagation and latency inherent in traditional cascaded pipelines. However, existing direct S2ST systems continue to face notable challenges, including instability in semantic-acoustic alignment when parallel speech data is scarce, difficulty in preserving speaker identity, and limited multilingual scalability. In this work, we introduce DS2ST-LM, a scalable, single-stage direct S2ST framework leveraging a multilingual Large Language Model (LLM). The architecture integrates a Whisper speech encoder, a learnable projection module, a Qwen2-0.5B LLM, and a timbre-controlled vocoder. We construct GigaS2S-1000, a 1000-hour bilingual corpus by extending the GigaST dataset with high-fidelity synthetic target speech, and show that this synthetic data alleviates data scarcity to some extent. We investigate two semantic token generation strategies: speech-derived S3 tokens and text-derived tokens generated by a pre-trained LLM, and analyze their impact on training stability and semantic consistency. We further evaluate three projection architectures (Linear, Conv1D-Linear, and Q-Former) and observe that while higher-capacity projectors converge faster, the simple Linear projector achieves higher performance. Extensive experiments demonstrate that DS2ST-LM outperforms traditional cascaded and ST (Qwen-Audio) + TTS baselines across both lexical (BLEU, METEOR) and semantic (BLEURT, COMET) metrics, while extending to multiple language pairs, including French, Spanish, German, Hindi, Bengali, and Urdu. Furthermore, we incorporate timbre-aware speech synthesis to preserve speaker information, enabling DS2ST-LM to surpass prior direct S2ST systems in both speaker similarity and perceptual naturalness.
https://arxiv.org/abs/2601.16023
Academic Papers
svg
851fc6116edabff478ffaaf3c10dd299ca42955d1b217d5fd72342ef1ca10c05
2026-01-23T00:00:00-05:00
Risk reversal for least squares estimators under nested convex constraints
arXiv:2601.16041v1 Announce Type: cross Abstract: In constrained stochastic optimization, one naturally expects that imposing a stricter feasible set does not increase the statistical risk of an estimator defined by projection onto that set. In this paper, we show that this intuition can fail even in canonical settings. We study the Gaussian sequence model, a deliberately austere test bed, where for a compact, convex set $\Theta \subset \mathbb{R}^d$ one observes \[ Y = \theta^\star + \sigma Z, \qquad Z \sim N(0, I_d), \] and seeks to estimate an unknown parameter $\theta^\star \in \Theta$. The natural estimator is the least squares estimator (LSE), which coincides with the Euclidean projection of $Y$ onto $\Theta$. We construct an explicit example exhibiting \emph{risk reversal}: for sufficiently large noise, there exist nested compact convex sets $\Theta_S \subset \Theta_L$ and a parameter $\theta^\star \in \Theta_S$ such that the LSE constrained to $\Theta_S$ has strictly larger risk than the LSE constrained to $\Theta_L$. We further show that this phenomenon can persist at the level of worst-case risk, with the supremum risk over the smaller constraint set exceeding that over the larger one. We clarify this behavior by contrasting noise regimes. In the vanishing-noise limit, the risk admits a first-order expansion governed by the statistical dimension of the tangent cone at $\theta^\star$, and tighter constraints uniformly reduce risk. In contrast, in the diverging-noise regime, the risk is determined by global geometric interactions between the constraint set and random noise directions. Here, the embedding of $\Theta_S$ within $\Theta_L$ can reverse the risk ordering. These results reveal a previously unrecognized failure mode of projection-based estimators: in sufficiently noisy settings, tightening a constraint can paradoxically degrade statistical performance.
https://arxiv.org/abs/2601.16041
Academic Papers
svg
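The abstract above defines the LSE over a convex constraint set as the Euclidean projection of the observation $Y$ onto that set. A minimal numerical sketch of this estimator, using a hypothetical box constraint (for which projection is coordinate-wise clipping); note this simple geometry does not itself exhibit the paper's risk reversal, which requires specially constructed sets:

```python
import numpy as np

# Gaussian sequence model Y = theta* + sigma * Z: the LSE over a convex set
# Theta is the Euclidean projection of Y onto Theta. Illustrative only: Theta
# here is a hypothetical box [-r, r]^d, whose projection is clipping.

def project_box(y, r):
    """Euclidean projection onto the box [-r, r]^d."""
    return np.clip(y, -r, r)

rng = np.random.default_rng(0)
d, sigma = 5, 2.0
theta_star = np.zeros(d)                      # true parameter, inside both sets
y = theta_star + sigma * rng.standard_normal(d)

# Nested constraints Theta_S = [-0.5, 0.5]^d inside Theta_L = [-2, 2]^d.
est_small = project_box(y, 0.5)
est_large = project_box(y, 2.0)

risk_small = np.sum((est_small - theta_star) ** 2)
risk_large = np.sum((est_large - theta_star) ** 2)
```

For boxes centered at the true parameter, tighter clipping only shrinks the error; the paper's point is that this monotonicity fails for other convex geometries in the high-noise regime.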
12eeefd366407399088dac93f78192b1d12afcf98b598710b0d51eacfb6768d5
2026-01-23T00:00:00-05:00
Phi-SegNet: Phase-Integrated Supervision for Medical Image Segmentation
arXiv:2601.16064v1 Announce Type: cross Abstract: Deep learning has substantially advanced medical image segmentation, yet achieving robust generalization across diverse imaging modalities and anatomical structures remains a major challenge. A key contributor to this limitation lies in how existing architectures, ranging from CNNs to Transformers and their hybrids, primarily encode spatial information while overlooking frequency-domain representations that capture rich structural and textural cues. Although a few recent studies have begun exploring spectral information at the feature level, supervision-level integration of frequency cues, which is crucial for fine-grained object localization, remains largely untapped. To this end, we propose Phi-SegNet, a CNN-based architecture that incorporates phase-aware information at both architectural and optimization levels. The network integrates Bi-Feature Mask Former (BFMF) modules that blend neighboring encoder features to reduce semantic gaps, and Reverse Fourier Attention (RFA) blocks that refine decoder outputs using phase-regularized features. A dedicated phase-aware loss aligns these features with structural priors, forming a closed feedback loop that emphasizes boundary precision. Evaluated on five public datasets spanning X-ray, ultrasound (US), histopathology, MRI, and colonoscopy, Phi-SegNet consistently achieved state-of-the-art performance, with an average relative improvement of 1.54+/-1.26% in IoU and 0.98+/-0.71% in F1-score over the next best-performing model. In cross-dataset generalization scenarios involving unseen datasets from the known domain, Phi-SegNet also exhibits robust and superior performance, highlighting its adaptability and modality-agnostic design. These findings demonstrate the potential of leveraging spectral priors in both feature representation and supervision, paving the way for generalized segmentation frameworks that excel in fine-grained object localization.
https://arxiv.org/abs/2601.16064
Academic Papers
svg
4d5e33bfa6172468941c3b03f923d2caf0f0647fb496eef85bb3c6be6389208c
2026-01-23T00:00:00-05:00
On damage of interpolation to adversarial robustness in regression
arXiv:2601.16070v1 Announce Type: cross Abstract: Deep neural networks (DNNs) typically involve a large number of parameters and are trained to achieve zero or near-zero training error. Despite such interpolation, they often exhibit strong generalization performance on unseen data, a phenomenon that has motivated extensive theoretical investigations. Reassuringly, existing results show that interpolation may indeed not affect the minimax rate of convergence under the squared error loss. Meanwhile, DNNs are well known to be highly vulnerable to adversarial perturbations in future inputs. A natural question then arises: Can interpolation also escape from suboptimal performance under a future $X$-attack? In this paper, we investigate the adversarial robustness of interpolating estimators in a framework of nonparametric regression. A key finding is that interpolating estimators must be suboptimal even under a subtle future $X$-attack, and achieving perfect fitting can substantially damage their robustness. An interesting phenomenon in the high interpolation regime, which we term the curse of simple size, is also revealed and discussed. Numerical experiments support our theoretical findings.
https://arxiv.org/abs/2601.16070
Academic Papers
svg
1085929e9cc290f0ff3956325e392d7a529576ed39cb66c889f9465a2f781f2d
2026-01-23T00:00:00-05:00
Algorithms for Algebraic and Arithmetic Attributes of Hypergeometric Functions
arXiv:2601.16105v1 Announce Type: cross Abstract: We discuss algorithms for arithmetic properties of hypergeometric functions. Most notably, we are able to compute the p-adic valuation of a hypergeometric function on any disk of radius smaller than the p-adic radius of convergence. This we use, building on work of Christol, to determine the set of prime numbers modulo which it can be reduced. Moreover, we describe an algorithm to find an annihilating polynomial of the reduction of a hypergeometric function modulo p.
https://arxiv.org/abs/2601.16105
Academic Papers
svg
202020747611be872ab8825b44e155ee2bf60f9f04e7661e82d1306cf919726a
2026-01-23T00:00:00-05:00
Synthetic Augmentation in Imbalanced Learning: When It Helps, When It Hurts, and How Much to Add
arXiv:2601.16120v1 Announce Type: cross Abstract: Imbalanced classification, where one class is observed far less frequently than the other, often causes standard training procedures to prioritize the majority class and perform poorly on rare but important cases. A classic and widely used remedy is to augment the minority class with synthetic examples, but two basic questions remain under-resolved: when does synthetic augmentation actually help, and how many synthetic samples should be generated? We develop a unified statistical framework for synthetic augmentation in imbalanced learning, studying models trained on imbalanced data augmented with synthetic minority samples and evaluated under the balanced population risk. Our theory shows that synthetic data is not always beneficial. In a ``local symmetry" regime, imbalance is not the dominant source of error near the balanced optimum, so adding synthetic samples cannot improve learning rates and can even degrade performance by amplifying generator mismatch. When augmentation can help (a ``local asymmetry" regime), the optimal synthetic size depends on generator accuracy and on whether the generator's residual mismatch is directionally aligned with the intrinsic majority-minority shift. This structure can make the best synthetic size deviate from naive full balancing, sometimes by a small refinement and sometimes substantially when generator bias is systematic. Practically, we recommend Validation-Tuned Synthetic Size (VTSS): select the synthetic size by minimizing balanced validation loss over a range centered near the fully balanced baseline, while allowing meaningful departures when the data indicate them. Simulations and a real sepsis prediction study support the theory and illustrate when synthetic augmentation helps, when it cannot, and how to tune its quantity effectively.
https://arxiv.org/abs/2601.16120
Academic Papers
svg
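The VTSS recommendation above amounts to a one-dimensional search over candidate synthetic sizes, centered near the fully balancing size. A minimal sketch, where `val_loss` is a hypothetical stand-in for "train with m synthetic minority samples, then evaluate balanced validation loss":

```python
import numpy as np

# Validation-Tuned Synthetic Size (VTSS): choose the number of synthetic
# minority samples by minimizing balanced validation loss over a grid centered
# near full balancing (gap = n_major - n_minor). The loss callable abstracts
# the train-and-validate step; it is an assumption for illustration.

def vtss(n_major, n_minor, val_loss, width=0.5, num=21):
    """Return the synthetic size m* minimizing val_loss(m) over a grid
    spanning [(1 - width) * gap, (1 + width) * gap]."""
    gap = n_major - n_minor
    lo, hi = int((1 - width) * gap), int((1 + width) * gap)
    grid = sorted(set(round(m) for m in np.linspace(lo, hi, num)))
    return min(grid, key=val_loss)

# Toy check: a validation loss minimized away from full balancing (gap = 800),
# mimicking a systematically biased generator where full balancing is not best.
best = vtss(1000, 200, val_loss=lambda m: (m - 600) ** 2)
```

The grid deliberately includes sizes well below and above full balancing, matching the paper's point that the optimum can depart meaningfully from naive balancing when generator bias is systematic.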
95586bfb1c6a174967d8b5722b887e916f8bf3f7dd8a52579ef3e35cc67636c8
2026-01-23T00:00:00-05:00
Beyond Predictive Uncertainty: Reliable Representation Learning with Structural Constraints
arXiv:2601.16174v1 Announce Type: cross Abstract: Uncertainty estimation in machine learning has traditionally focused on the prediction stage, aiming to quantify confidence in model outputs while treating learned representations as deterministic and reliable by default. In this work, we challenge this implicit assumption and argue that reliability should be regarded as a first-class property of learned representations themselves. We propose a principled framework for reliable representation learning that explicitly models representation-level uncertainty and leverages structural constraints as inductive biases to regularize the space of feasible representations. Our approach introduces uncertainty-aware regularization directly in the representation space, encouraging representations that are not only predictive but also stable, well-calibrated, and robust to noise and structural perturbations. Structural constraints, such as sparsity, relational structure, or feature-group dependencies, are incorporated to define meaningful geometry and reduce spurious variability in learned representations, without assuming fully correct or noise-free structure. Importantly, the proposed framework is independent of specific model architectures and can be integrated with a wide range of representation learning methods.
https://arxiv.org/abs/2601.16174
Academic Papers
svg
a78008c1479f762180dbb89282a3b73ef10064582bcc3123ea2aa82a4fec6a37
2026-01-23T00:00:00-05:00
A Rolling-Space Branch-and-Price Algorithm for the Multi-Compartment Vehicle Routing Problem with Multiple Time Windows
arXiv:2601.16194v1 Announce Type: cross Abstract: This paper investigates the multi-compartment vehicle routing problem with multiple time windows (MCVRPMTW), an extension of the classical vehicle routing problem with time windows that considers vehicles equipped with multiple compartments and customers requiring service across several delivery time windows. The problem incorporates three key compartment-related features: (i) compartment flexibility in the number of compartments, (ii) item-to-compartment compatibility, and (iii) item-to-item compatibility. The problem also accommodates practical operational requirements such as driver breaks. To solve the MCVRPMTW, we develop an exact branch-and-price (B&P) algorithm in which the pricing problem is solved using a labeling algorithm. Several acceleration strategies are introduced to limit symmetry during label extensions, improve the stability of dual solutions in column generation, and enhance the branching process. To handle large-scale instances, we propose a rolling-space B&P algorithm that integrates clustering techniques into the solution framework. Extensive computational experiments on instances inspired by a real-world industrial application demonstrate the effectiveness of the proposed approach and provide useful managerial insights for practical implementation.
https://arxiv.org/abs/2601.16194
Academic Papers
svg
1157d4a5ee376233c802874395381f28c624cf616f46ff160ed4e68c6d433c0c
2026-01-23T00:00:00-05:00
Representation-Driven Reinforcement Learning
arXiv:2305.19922v3 Announce Type: replace Abstract: We present a representation-driven framework for reinforcement learning. By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation. Particularly, embedding a policy network into a linear feature space allows us to reframe the exploration-exploitation problem as a representation-exploitation problem, where good policy representations enable optimal exploration. We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches, leading to significantly improved performance compared to traditional methods. Our framework provides a new perspective on reinforcement learning, highlighting the importance of policy representation in determining optimal exploration-exploitation strategies.
https://arxiv.org/abs/2305.19922
Academic Papers
svg
734e032083ffa4c80220d69d9c4b97ce4d578d4f9bf074509492b02a893ab78a
2026-01-23T00:00:00-05:00
Multi-event Video-Text Retrieval
arXiv:2308.11551v3 Announce Type: replace Abstract: Video-Text Retrieval (VTR) is a crucial multi-modal task in an era of massive video-text data on the Internet. A prominent line of work for the VTR task uses a two-stream Vision-Language model architecture to learn a joint representation of video-text pairs. However, these models operate under the assumption of bijective video-text correspondences and neglect a more practical scenario where video content usually encompasses multiple events, while texts like user queries or webpage metadata tend to be specific and correspond to single events. This establishes a gap between the previous training objective and real-world applications, leading to the potential performance degradation of earlier models during inference. In this study, we introduce the Multi-event Video-Text Retrieval (MeVTR) task, addressing scenarios in which each video contains multiple different events, as a niche scenario of the conventional Video-Text Retrieval Task. We present a simple model, Me-Retriever, which incorporates key event video representation and a new MeVTR loss for the MeVTR task. Comprehensive experiments show that this straightforward framework outperforms other models in the Video-to-Text and Text-to-Video tasks, effectively establishing a robust baseline for the MeVTR task. We believe this work serves as a strong foundation for future studies. Code is available at https://github.com/gengyuanmax/MeVTR.
https://arxiv.org/abs/2308.11551
Academic Papers
svg
9ddc1b7d28ac592382d488476164cd1a0bb49e4b89734833bea2a21acff5398d
2026-01-23T00:00:00-05:00
Strategic forecasting of internet of things technologies through patent social network and innovation cluster analysis
arXiv:2309.00707v2 Announce Type: replace Abstract: The rapid proliferation of Internet of Things (IoT) technologies necessitates robust forecasting mechanisms to guide strategic decision-making amid increasingly complex innovation landscapes. Despite extensive research employing patent analysis for technology forecasting, existing studies lack systematic integration of social network analysis, advanced text mining, and life cycle modeling to comprehensively map IoT technological evolution and collaborative dynamics. This study addresses these gaps by analyzing 154,227 IoT-related patents through a unified methodological framework combining BERT-based text embeddings, k-means clustering with Davies-Bouldin optimization, S-curve life cycle modeling, and Louvain community detection. The analysis identified nine distinct technology clusters spanning foundational infrastructure (Smart Monitoring and Sensor Systems, Network Communication and Data Transmission) to domain-specific applications (Agricultural IoT, Connected Vehicle Technologies). Life cycle assessment revealed temporal convergence, with eight clusters reaching saturation between 2023 and 2027, reflecting ecosystem-wide synchronization driven by standardization imperatives, platform consolidation, and pandemic-accelerated digital transformation. Social network analysis uncovered five major collaborative communities exhibiting divergent strategic orientations: from extreme specialization (Global Telecommunications Technology Leaders: 87.7% network communication focus) to diversified portfolios (China State Grid IoT Consortium: balanced infrastructure investment). Cross-analysis revealed complementary innovation strategies where infrastructure operators pursue breadth, telecommunications specialists maintain focused expertise, and academic researchers emphasize development-aligned agendas.
https://arxiv.org/abs/2309.00707
Academic Papers
svg
a9d29c0998a2b06970c0f4fe230dffbcfd4c344d84a6fb9e06fa7575ea6fa228
2026-01-23T00:00:00-05:00
Multi-Layered Reasoning from a Single Viewpoint for Learning See-Through Grasping
arXiv:2312.09822v5 Announce Type: replace Abstract: Sensory substitution enables biological systems to perceive stimuli that are typically perceived by another organ, which is inspirational for physical agents. Multimodal perception of intrinsic and extrinsic interactions is critical in building an intelligent robot that learns. This study presents a Vision-based See-Through Perception (VBSeeThruP) architecture that simultaneously perceives multiple intrinsic and extrinsic modalities from a single visual input, in a markerless manner, all packed into a soft robotic finger using the Soft Polyhedral Network design. It is generally applicable to miniature vision systems placed beneath deformable networks with a see-through design, capturing real-time images of the network's physical interactions induced by contact-based events, overlaid on the visual scene of the external environment, as demonstrated in the ablation study. We present the VBSeeThruP's capability for learning reactive grasping without using external cameras or dedicated force and torque sensors on the fingertips. Using the inpainted scene and the deformation mask, we further demonstrate the multimodal performance of the VBSeeThruP architecture to simultaneously achieve various perceptions, including but not limited to scene inpainting, object detection, depth sensing, scene segmentation, masked deformation tracking, 6D force/torque sensing, and contact event detection, all within a single sensory input from the in-finger vision markerlessly.
https://arxiv.org/abs/2312.09822
Academic Papers
svg
437754f16013fb46f56d744b76ea7df1fc97e758e9025aff74fd34c902d3a8df
2026-01-23T00:00:00-05:00
Paramanu: Compact and Competitive Monolingual Language Models for Low-Resource Morphologically Rich Indian Languages
arXiv:2401.18034v3 Announce Type: replace Abstract: Multilingual large language models (LLMs) are expensive to pretrain and often suffer from imbalances across languages and datasets, English-centric bias, tokenizer oversegmentation for morphologically rich low-resource languages, and the curse of multilinguality. We introduce PARAMANU, the first family of Indian-only autoregressive language models trained from scratch on open-source language-specific data for the five most spoken Indian languages: Bengali, Hindi, Marathi, Tamil, and Telugu. All models are designed for affordability and are trained on a single GPU with a budget under $1,000, allowing under-resourced researchers to build competitive language models. To address low-resource challenges, we develop morphology-aligned, low-fertility tokenizers and propose an interpolation-based method for token position indices in RoPE-based scaling to train on longer sequences efficiently. We also create instruction-tuning datasets in Bangla that are translated to the other four languages. Despite their small size (108M-367M parameters), Paramanu achieves a strong performance-efficiency tradeoff and outperforms most larger multilingual models across all five languages. Our collection is available at https://huggingface.co/collections/mitodru/paramanu.
https://arxiv.org/abs/2401.18034
Academic Papers
svg
8b85e5cd36ba8150b76d6e7c15dbec852d473d32b5ca5d271d0805fda083975d
2026-01-23T00:00:00-05:00
Scalable Multi-view Clustering via Explicit Kernel Features Maps
arXiv:2402.04794v2 Announce Type: replace Abstract: The proliferation of high-dimensional data from sources such as social media, sensor networks, and online platforms has created new challenges for clustering algorithms. Multi-view clustering, which integrates complementary information from multiple data perspectives, has emerged as a powerful solution. However, existing methods often struggle with scalability and efficiency, particularly on large attributed networks. In this work, we address these limitations by leveraging explicit kernel feature maps and a non-iterative optimization strategy, enabling efficient and accurate clustering on datasets with millions of points.
https://arxiv.org/abs/2402.04794
Academic Papers
svg
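The abstract above rests on explicit kernel feature maps, which let kernel similarities be computed as plain dot products in a finite-dimensional space, so clustering can run in time linear in the number of points. A minimal sketch using random Fourier features for an RBF kernel (illustrative; the paper's multi-view fusion and non-iterative optimization are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, D=64, gamma=1.0):
    """Explicit D-dim feature map approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2) via random Fourier features."""
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.standard_normal((200, 5))
Z = rff(X, D=64)

# Kernel values are now approximated by dot products of explicit features,
# so any linear-space clustering routine can be applied to Z directly.
approx = Z[0] @ Z[1]
exact = np.exp(-1.0 * np.sum((X[0] - X[1]) ** 2))
```

Increasing `D` tightens the approximation at a cost linear in `D`, which is the scalability lever the abstract refers to.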
33922f707b247829487e0793f5d9427e5d92573313ffbbdb580caab111b78c36
2026-01-23T00:00:00-05:00
Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes
arXiv:2402.05406v4 Announce Type: replace Abstract: Structured pruning is a promising approach to create smaller, faster large language models. However, existing methods typically rely on computing the gradient via backward passes, which can inflate memory requirements and compute costs. In this work we introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation, significantly reducing memory requirements and compute costs while achieving state-of-the-art pruning performance. Bonsai uses forward-pass-only perturbative pruning to enable efficient compression of large models on a broader range of hardware configurations. Unlike existing structured pruning approaches, Bonsai not only achieves better compression with fewer resources but also produces models that are twice as fast as those generated by semi-structured pruning. As a concrete demonstration, we use Bonsai to prune 7B and 8B models to 50% sparsity on a single A6000 GPU -- a task challenging for backprop-based methods in memory-constrained settings, as they require 2-3x the memory. Our results show that removing backprop as a requirement not only enables pruning larger models on constrained hardware but can also lead to state-of-the-art efficiency and performance.
https://arxiv.org/abs/2402.05406
Academic Papers
svg
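The gradient-free, forward-pass-only idea described above can be sketched generically: score each prunable unit by the loss increase observed when it is ablated, using forward passes only, then drop the lowest-impact units. This is a simplified stand-in, not Bonsai's actual scoring procedure; the two-layer numpy model and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # 8 prunable hidden units
W2 = rng.standard_normal((1, 8))
X = rng.standard_normal((32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])   # synthetic regression target

def forward(mask):
    """Forward pass with hidden units zeroed out according to mask."""
    h = np.maximum(X @ (W1 * mask[:, None]).T, 0.0)
    return (h @ W2.T).ravel()

def loss(mask):
    return float(np.mean((forward(mask) - y) ** 2))

full = np.ones(8)
base = loss(full)

# Perturbative scores: loss increase when each unit is ablated.
# No backward pass is ever taken, so peak memory stays at inference level.
scores = []
for i in range(8):
    m = full.copy()
    m[i] = 0.0
    scores.append(loss(m) - base)

# Keep the 4 highest-impact units (50% structured sparsity).
keep = np.argsort(scores)[-4:]
pruned_mask = np.zeros(8)
pruned_mask[keep] = 1.0
```

The memory claim in the abstract follows from this structure: scoring needs only inference-time activations, never stored gradients.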
b8556d0e0dcfeb696ac08dcfc61b6705fc75381f255e7a2829aef2753ae857fb
2026-01-23T00:00:00-05:00
Thought of Search: Planning with Language Models Through The Lens of Efficiency
arXiv:2404.11833v3 Announce Type: replace Abstract: Among the most important properties of algorithms investigated in computer science are soundness, completeness, and complexity. These properties, however, are rarely analyzed for the vast collection of recently proposed methods for planning with large language models. In this work, we alleviate this gap. We analyse these properties of using LLMs for planning and highlight that recent trends abandon both soundness and completeness for the sake of inefficiency. We propose a significantly more efficient approach that can, at the same time, maintain both soundness and completeness. We demonstrate our approach on four representative search problems, comparing it to the LLM-based solutions from the literature that attempt to solve these problems. We show that by using LLMs to produce the code for the search components, we can solve the entire datasets with 100\% accuracy with only a few calls to the LLM. We argue for a responsible use of compute resources, urging the research community to investigate sound and complete LLM-based approaches that uphold efficiency.
https://arxiv.org/abs/2404.11833
Academic Papers
svg
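The key mechanism above is to have the LLM write the search components once (successor function and goal test) and then run a classical sound and complete search over them. A sketch with hand-written components standing in for LLM-generated code, on a hypothetical toy problem (reach a target integer via +1 / *2 moves):

```python
from collections import deque

def bfs(start, successors, is_goal):
    """Breadth-first search: complete, and sound whenever is_goal is correct.
    Only a handful of LLM calls are needed: one per component, not per state."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # exhausting the frontier proves no solution exists

# Components the paper would obtain as LLM-generated code (hand-written here);
# the bound n <= 20 keeps the toy state space finite.
successors = lambda n: [n + 1, n * 2] if n <= 20 else []
path = bfs(1, successors, is_goal=lambda n: n == 10)
```

Because the search itself is classical BFS, soundness and completeness are properties of the code, not of any per-state LLM judgment, which is exactly the efficiency argument the abstract makes.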
e3b17e8f61001b95c891222c4d79508222c37469b1891c1406f034ec4027f1bd
2026-01-23T00:00:00-05:00
Sign Language-Based versus Touch-Based Input for Deaf Users with Interactive Personal Assistants in Simulated Kitchen Environments
arXiv:2404.14610v2 Announce Type: replace Abstract: In this study, we assess the usability of interactive personal assistants (IPAs), such as Amazon Alexa, in a simulated kitchen smart home environment, with deaf and hard of hearing users. Participants engage in activities in a way that causes their hands to get dirty. With these dirty hands, they are tasked with two different input methods for IPAs: American Sign Language (ASL) in a Wizard-of-Oz design, and smart home apps with a touchscreen. Usability ratings show that participants significantly preferred ASL over touch-based apps with dirty hands, although not to a larger extent than in comparable previous work with clean hands. Participants also expressed significant enthusiasm for ASL-based IPA interaction in Net Promoter Scores and in questions about their overall preferences. Preliminary observations further suggest that having dirty hands may affect the way people sign, which may pose challenges for building IPAs that natively support sign language input.
https://arxiv.org/abs/2404.14610
Academic Papers
svg
070103bf51ea97c09c87dc1605b0cfb2a5083dc511dbf7769be7126872671922
2026-01-23T00:00:00-05:00
Efficient Multimodal Large Language Models: A Survey
arXiv:2405.10739v3 Announce Type: replace Abstract: In the past year, Multimodal Large Language Models (MLLMs) have demonstrated remarkable performance in tasks such as visual question answering, visual understanding and reasoning. However, the extensive model size and high training and inference costs have hindered the widespread application of MLLMs in academia and industry. Thus, studying efficient and lightweight MLLMs has enormous potential, especially in edge computing scenarios. In this survey, we provide a comprehensive and systematic review of the current state of efficient MLLMs. Specifically, we summarize the timeline of representative efficient MLLMs, research state of efficient structures and strategies, and the applications. Finally, we discuss the limitations of current efficient MLLM research and promising future directions. Please refer to our GitHub repository for more details: https://github.com/lijiannuist/Efficient-Multimodal-LLMs-Survey.
https://arxiv.org/abs/2405.10739
Academic Papers
svg
74f6bde368992d4ecd7434ca77242552b0801a8fca5b1d410e9cf0b05e9a3445
2026-01-23T00:00:00-05:00
Neural Green's Operators for Parametric Partial Differential Equations
arXiv:2406.01857v5 Announce Type: replace Abstract: This work introduces a paradigm for constructing parametric neural operators that are derived from finite-dimensional representations of Green's operators for linear partial differential equations (PDEs). We refer to such neural operators as Neural Green's Operators (NGOs). Our construction of NGOs preserves the linear action of Green's operators on the inhomogeneity fields, while approximating the nonlinear dependence of the Green's function on the coefficients of the PDE using neural networks. This construction reduces the complexity of the problem from learning the entire solution operator and its dependence on all parameters to only learning the Green's function and its dependence on the PDE coefficients. Furthermore, we show that our explicit representation of Green's functions enables the embedding of desirable mathematical attributes in our NGO architectures, such as symmetry, spectral, and conservation properties. Through numerical benchmarks on canonical PDEs, we demonstrate that NGOs achieve comparable or superior accuracy to Deep Operator Networks, Variationally Mimetic Operator Networks, and Fourier Neural Operators with similar parameter counts, while generalizing significantly better when tested on out-of-distribution data. For parametric time-dependent PDEs, we show that NGOs that are trained on a single time step can produce pointwise-accurate dynamics in an auto-regressive manner over arbitrarily large numbers of time steps. For parametric nonlinear PDEs, we demonstrate that NGOs trained exclusively on solutions of corresponding linear problems can be embedded within iterative solvers to yield accurate solutions, provided a suitable initial guess is available. Finally, we show that we can leverage the explicit representation of Green's functions returned by NGOs to construct effective matrix preconditioners that accelerate iterative solvers for PDEs.
https://arxiv.org/abs/2406.01857
Academic Papers
svg
3d8e001eae0855159554377276acf9c5660641fc6bd7ba6e1a080f85e906486b
2026-01-23T00:00:00-05:00
A Comprehensive Study on Large Language Models for Mutation Testing
arXiv:2406.09843v5 Announce Type: replace Abstract: Large Language Models (LLMs) have recently been used to generate mutants in both research work and in industrial practice. However, there has been no comprehensive empirical study of their performance for this increasingly important LLM-based Software Engineering application. To address this, we conduct a comprehensive empirical study evaluating BugFarm and LLMorpheus (the two state-of-the-art LLM-based approaches), alongside seven LLMs using our newly designed prompt, including both leading open- and closed-source models, on 851 real bugs from two Java real-world bug benchmarks. Our results reveal that, compared to existing rule-based approaches, LLMs generate more diverse mutants, that are behaviorally closer to real bugs and, most importantly, with 111.29% higher fault detection. That is, 87.98% (for LLMs) vs. 41.64% (for rule-based); an increase of 46.34 percentage points. Nevertheless, our results also reveal that these impressive results for improved effectiveness come at a cost: the LLM-generated mutants have worse non-compilability, duplication, and equivalent mutant rates by 26.60, 10.14, and 3.51 percentage points, respectively. These findings are immediately actionable for both research and practice. They allow practitioners to have greater confidence in deploying LLM-based mutation, while researchers now have a baseline for the state-of-the-art, with which they can research techniques to further improve effectiveness and reduce cost.
https://arxiv.org/abs/2406.09843
Academic Papers
svg
30866ef5e331d0f4a924bbbebf17a04d0f24e231ecab742ed8ae45473ac29742
2026-01-23T00:00:00-05:00
On the Exponential Convergence for Offline RLHF with Pairwise Comparisons
arXiv:2406.12205v2 Announce Type: replace Abstract: We consider the problem of offline reinforcement learning from human feedback (RLHF) with pairwise comparisons proposed by Zhu et al. (2023), where the implicit reward is a linear function of an unknown parameter. Given an offline dataset, our objective consists in ascertaining the optimal action for each state, with the ultimate goal of minimizing the {\em simple regret}. We propose an algorithm, \underline{RL} with \underline{L}ocally \underline{O}ptimal \underline{W}eights or {\sc RL-LOW}, which yields an exponential form of simple regret of $\exp ( - \Omega(n/H) )$ where $n$ is the number of data samples and $H$ denotes an instance-dependent hardness quantity that depends explicitly on the suboptimality gap of each action. Furthermore, we derive a first-of-its-kind instance-dependent lower bound in offline RLHF with pairwise comparisons. Interestingly, we observe that the lower and upper bounds on the simple regret match order-wise in the exponent, demonstrating order-wise optimality of our {\sc RL-LOW}. In view of privacy considerations in practical applications, we also extend {\sc RL-LOW} to the setting of $(\varepsilon,\delta)$-differential privacy and show, somewhat surprisingly, that the hardness parameter $H$ is unchanged in the asymptotic regime as $n$ tends to infinity; this underscores the inherent efficiency of {\sc RL-LOW} in terms of preserving the privacy of the observed rewards. Given our focus on establishing instance-dependent bounds of exponential convergence, our research fills the research gap in existing studies that concentrate on establishing worst-case regrets of {\em inverse polynomial convergence} (e.g., $\widetilde{O}(\frac{1}{\sqrt{n}})$) for offline RLHF with pairwise comparisons.
https://arxiv.org/abs/2406.12205
Academic Papers
svg
5beca48cac402794f56564287d404b95d0bfcf5e6ffe7779f895fd5dd7d519fd
2026-01-23T00:00:00-05:00
Vision-Language Models Align with Human Neural Representations in Concept Processing
arXiv:2407.17914v3 Announce Type: replace Abstract: Recent studies suggest that transformer-based vision-language models (VLMs) capture the multimodality of concept processing in the human brain. However, a systematic evaluation exploring different types of VLM architectures and the role played by visual and textual context is still lacking. Here, we analyse multiple VLMs employing different strategies to integrate visual and textual modalities, along with language-only counterparts. We measure the alignment between concept representations by models and existing (fMRI) brain responses to concept words presented in two experimental conditions, where either visual (pictures) or textual (sentences) context is provided. Our results reveal that VLMs outperform the language-only counterparts in both experimental conditions. However, controlled ablation studies show that only for some VLMs, such as LXMERT and IDEFICS2, brain alignment stems from genuinely learning more human-like concepts during pretraining, while others are highly sensitive to the context provided at inference. Additionally, we find that vision-language encoders are more brain-aligned than more recent, generative VLMs. Altogether, our study shows that VLMs align with human neural representations in concept processing, while highlighting differences among architectures. We open-source code and materials to reproduce our experiments at: https://github.com/dmg-illc/vl-concept-processing.
https://arxiv.org/abs/2407.17914
Academic Papers
svg
c2d22d7d15776120779f2b3c6f9741c9e165a24ed3bc1ca18dd2a88b3092abe1
2026-01-23T00:00:00-05:00
120 Domain-Specific Languages for Security
arXiv:2408.06219v3 Announce Type: replace Abstract: Security engineering, from security requirements engineering to the implementation of cryptographic protocols, is often supported by domain-specific languages (DSLs). Unfortunately, a lack of knowledge about these DSLs, such as which security aspects are addressed and when, hinders their effective use and further research. This systematic literature review examines 120 security-oriented DSLs based on six research questions concerning security aspects and goals, language-specific characteristics, integration into the software development lifecycle (SDLC), and effectiveness of the DSLs. We observe a high degree of fragmentation, which leads to opportunities for integration. We also need to improve the usability and evaluation of security DSLs.
https://arxiv.org/abs/2408.06219
Academic Papers
svg
7e2fe5040fbcfa2d338c293ca918f3524d7cd3f7579ef57b882c93a43e5ab22b
2026-01-23T00:00:00-05:00
Reinforcement Learning Compensated Model Predictive Control for Off-road Driving on Unknown Deformable Terrain
arXiv:2408.09253v2 Announce Type: replace Abstract: This study presents an Actor-Critic reinforcement learning Compensated Model Predictive Controller (AC2MPC) designed for high-speed, off-road autonomous driving on deformable terrains. Addressing the difficulty of modeling unknown tire-terrain interaction and ensuring real-time control feasibility and performance, this framework integrates deep reinforcement learning with a model predictive controller to manage unmodeled nonlinear dynamics. We evaluate the controller framework over constant and varying velocity profiles using the high-fidelity simulator Project Chrono. Our findings demonstrate that our controller statistically outperforms standalone model-based and learning-based controllers over three unknown terrains that represent a sandy deformable track, a sandy and rocky track, and a cohesive clay-like deformable soil track. Despite varied and previously unseen terrain characteristics, this framework generalized well enough to track longitudinal reference speeds with the least error. Furthermore, this framework required significantly less training data compared to a purely learning-based controller, converging in fewer steps while delivering better performance. Even when under-trained, this controller outperformed the standalone controllers, highlighting its potential for safer and more efficient real-world deployment.
https://arxiv.org/abs/2408.09253
Academic Papers
svg
ddc6216d88c1e1eedfaffc28568d6e68206e5420f3ec289677161e03403d1fa5
2026-01-23T00:00:00-05:00
Medal Matters: Probing LLMs' Failure Cases Through Olympic Rankings
arXiv:2409.06518v3 Announce Type: replace Abstract: Large language models (LLMs) have achieved remarkable success in natural language processing tasks, yet their internal knowledge structures remain poorly understood. This study examines these structures through the lens of historical Olympic medal tallies, evaluating LLMs on two tasks: (1) retrieving medal counts for specific teams and (2) identifying rankings of each team. While state-of-the-art LLMs excel in recalling medal counts, they struggle with providing rankings, highlighting a key difference between their knowledge organization and human reasoning. These findings shed light on the limitations of LLMs' internal knowledge integration and suggest directions for improvement. To facilitate further research, we release our code, dataset, and model outputs.
https://arxiv.org/abs/2409.06518
Academic Papers
svg
c6d3db36884c41a89201aea2ec8b08a60bb8ad57e913ab289857cd9c1e6c63b4
2026-01-23T00:00:00-05:00
How hard can it be? Quantifying MITRE attack campaigns with attack trees and cATM logic
arXiv:2410.06692v4 Announce Type: replace Abstract: The landscape of cyber threats grows more complex by the day. Advanced Persistent Threats carry out attack campaigns - e.g. operations Dream Job, Wocao, and WannaCry - against which cybersecurity practitioners must defend. To prioritise which of these to defend against, cybersecurity experts must be equipped with the right toolbox to evaluate the most threatening ones. In particular, they would strongly benefit from (a) an estimation of the likelihood values for each attack recorded in the wild, and (b) transparently operationalising these values to compare campaigns quantitatively. Security experts could then perform transparent and accountable quantitatively-informed decisions. Here we construct such a framework: (1) quantifying the likelihood of attack campaigns via data-driven procedures on the MITRE knowledge-base, (2) introducing a methodology for automatic modelling of MITRE intelligence data, that captures any attack campaign via template attack tree models, and (3) proposing an open-source tool to perform these comparisons based on the cATM logic. Finally, we quantify the likelihood of all MITRE Enterprise campaigns, and compare the likelihood of the Wocao and Dream Job MITRE campaigns - generated with our proposed approach - against manually-built attack tree models. We demonstrate how our methodology is substantially lighter in modelling effort, and capable of capturing all the relevant quantitative data.
https://arxiv.org/abs/2410.06692
Academic Papers
svg
be494436c3dc3d47d2f9fda286f760babc5e80c5cf032980bdd535b23c5070b9
2026-01-23T00:00:00-05:00
CropCraft: Complete Structural Characterization of Crop Plants From Images
arXiv:2411.09693v2 Announce Type: replace Abstract: The ability to automatically build 3D digital twins of plants from images has countless applications in agriculture, environmental science, robotics, and other fields. However, current 3D reconstruction methods fail to recover complete shapes of plants due to heavy occlusion and complex geometries. In this work, we present a novel method for 3D modeling of agricultural crops based on optimizing a parametric model of plant morphology via inverse procedural modeling. Our method first estimates depth maps by fitting a neural radiance field and then optimizes a specialized loss to estimate morphological parameters that result in consistent depth renderings. The resulting 3D model is complete and biologically plausible. We validate our method on a dataset of real images of agricultural fields, and demonstrate that the reconstructed canopies can be used for a variety of monitoring and simulation applications.
https://arxiv.org/abs/2411.09693
Academic Papers
svg
8b0575878067d9546d835fdef03e5895c95eeaa18dccfd6028f06913e059f1be
2026-01-23T00:00:00-05:00
Robust Output Tracking for Induced Seismicity Mitigation in Underground Reservoirs Governed by a Nonlinear 3D PDE-ODE System
arXiv:2412.06327v3 Announce Type: replace Abstract: This paper presents a robust output-feedback controller for induced seismicity mitigation in geological reservoirs described by a coupled 3D PDE-ODE model. The controller is a MIMO Super-Twisting design, producing a continuous control signal and requiring minimal model information, while accommodating parameter uncertainty and spatial heterogeneity. Two operational outputs are regulated simultaneously: regional pressures and seismicity rates computed over reservoir sub-regions. Closed-loop properties are established via explicit bounds on the solution and its time derivative for both the infinite-dimensional dynamics and the nonlinear ODE system, yielding finite-time or exponential convergence of the tracking errors. The method is evaluated on a Groningen gas-field case study in two scenarios: gas production while not exceeding the intrinsic seismicity of the region, and combined production with CO$_2$ injection toward net-zero operation. Simulations demonstrate accurate tracking of pressure and seismicity targets across regions under significant parameter uncertainty, supporting safer reservoir operation without sacrificing production objectives.
https://arxiv.org/abs/2412.06327
Academic Papers
svg
ef416410e68f31f3d8939222c4234eb1c504a58ef2a2c0d8bd77f0de513ef5fc
2026-01-23T00:00:00-05:00
FREYJA: Efficient Join Discovery in Data Lakes
arXiv:2412.06637v2 Announce Type: replace Abstract: Data lakes are massive repositories of raw and heterogeneous data, designed to meet the requirements of modern data storage. Nonetheless, this same philosophy increases the complexity of performing discovery tasks to find relevant data for subsequent processing. As a response to these growing challenges, we present FREYJA, a modern data discovery system capable of effectively exploring data lakes, aimed at finding candidates to perform joins and increase the number of attributes for downstream tasks. More precisely, we want to compute rankings that sort potential joins by their relevance. Modern mechanisms apply advanced table representation learning (TRL) techniques to yield accurate joins. Yet, this incurs high computational costs when dealing with elevated volumes of data. In contrast to the state-of-the-art, we adopt a novel notion of join quality tailored to data lakes, which leverages syntactic measurements while achieving accuracy comparable to that of TRL approaches. To obtain this metric in a scalable manner we train a general-purpose predictive model. Predictions are based, rather than on large-scale datasets, on data profiles, succinct representations that capture the underlying characteristics of the data. Our experiments show that our system, FREYJA, matches the results of the state-of-the-art whilst reducing the execution times by several orders of magnitude.
https://arxiv.org/abs/2412.06637
Academic Papers
svg
8b4c36ebb2f292671240a8791022964e352cef5a35b6c7a53d04a85f54ab3eb0
2026-01-23T00:00:00-05:00
Unexpected but informative: What fixation-related potentials tell us about the processing of confusing program code
arXiv:2412.10099v3 Announce Type: replace Abstract: As software pervades more and more areas of our professional and personal lives, there is an ever-increasing need to maintain software and for programmers to efficiently write and understand program code. In the first study of its kind, we analyze fixation-related potentials (FRPs) to explore the online processing of program code patterns that are confusing to programmers, but not to the computer (so-called atoms of confusion), and their underlying neurocognitive mechanisms in an ecologically valid setting. Relative to clean counterparts in program code without an atom of confusion, confusing code elicits a late frontal positivity of about 400 to 700 ms after first looking at the atom of confusion. This frontal positivity resembles an event-related potential (ERP) component found during natural language processing that is elicited by unexpected but plausible words in sentence context. Thus, we suggest that the brain engages similar neurocognitive mechanisms in response to unexpected and informative inputs in program code and in natural language. In both domains, these inputs update a comprehender's situation model, which is essential for information extraction from a quickly unfolding input. Our results have far-reaching implications for programming and pave the way for interdisciplinary collaborations between software engineering and psycholinguistics.
https://arxiv.org/abs/2412.10099
Academic Papers
svg
c5289f7fc473dd676b5d1047540d5b779181abb622661de29758dff42a155ed6
2026-01-23T00:00:00-05:00
ViSymRe: Vision-guided Multimodal Symbolic Regression
arXiv:2412.11139v3 Announce Type: replace Abstract: Extracting simple mathematical expressions from an observational dataset to describe complex natural phenomena is one of the core objectives of artificial intelligence (AI). This field is known as symbolic regression (SR). Traditional SR models are based on genetic programming (GP) or reinforcement learning (RL), facing well-known challenges, such as low efficiency and overfitting. Recent studies have integrated SR with large language models (LLMs), enabling fast zero-shot inference by learning mappings from millions of dataset-expression pairs. However, since the input and output are inherently different modalities, such models often struggle to converge effectively. In this paper, we introduce ViSymRe, a vision-guided multimodal SR model that incorporates a third resource, the expression graph, to bridge the modality gap. Different from traditional multimodal models, ViSymRe is trained to extract vision, termed virtual vision, from datasets, without relying on the global availability of expression graphs, which addresses the essential challenge of visual SR, i.e., that expression graphs are not available during inference. Evaluation results on multiple mainstream benchmarks show that ViSymRe achieves more competitive performance than the state-of-the-art dataset-only baselines. The expressions predicted by ViSymRe not only fit the dataset well but are also simple and structurally accurate, goals that SR models strive to achieve.
https://arxiv.org/abs/2412.11139
Academic Papers
svg
984789daa37786fbe2922c8a2751793fda4cb383a50835c15fac0d7c4a7ab5ea
2026-01-23T00:00:00-05:00
Language-guided Medical Image Segmentation with Target-informed Multi-level Contrastive Alignments
arXiv:2412.13533v3 Announce Type: replace Abstract: Medical image segmentation is a fundamental task in numerous medical engineering applications. Recently, language-guided segmentation has shown promise in medical scenarios where textual clinical reports are readily available as semantic guidance. Clinical reports contain diagnostic information provided by clinicians, which can provide auxiliary textual semantics to guide segmentation. However, existing language-guided segmentation methods neglect the inherent pattern gaps between image and text modalities, resulting in sub-optimal visual-language integration. Contrastive learning is a well-recognized approach to align image-text patterns, but it has not been optimized for bridging the pattern gaps in medical language-guided segmentation that relies primarily on medical image details to characterize the underlying disease/targets. Current contrastive alignment techniques typically align high-level global semantics without involving low-level localized target information, and thus cannot deliver fine-grained textual guidance on crucial image details. In this study, we propose a Target-informed Multi-level Contrastive Alignment framework (TMCA) to bridge image-text pattern gaps for medical language-guided segmentation. TMCA enables target-informed image-text alignments and fine-grained textual guidance by introducing: (i) a target-sensitive semantic distance module that utilizes target information for more granular image-text alignment modeling, (ii) a multi-level contrastive alignment strategy that directs fine-grained textual guidance to multi-scale image details, and (iii) a language-guided target enhancement module that reinforces attention to critical image regions based on the aligned image-text patterns. Extensive experiments on four public benchmark datasets demonstrate that TMCA achieves superior performance over state-of-the-art language-guided medical image segmentation methods.
https://arxiv.org/abs/2412.13533
Academic Papers
svg
5ae8c0ae5f189a2e38c1efc93d52878704516d6ebccc078e9884f402afae5909
2026-01-23T00:00:00-05:00
Data-driven tool wear prediction in milling, based on a process-integrated single-sensor approach
arXiv:2412.19950v5 Announce Type: replace Abstract: Accurate tool wear prediction is essential for maintaining productivity and minimizing costs in machining. However, the complex nature of the tool wear process poses significant challenges to achieving reliable predictions. This study explores data-driven methods, in particular deep learning, for tool wear prediction. Traditional data-driven approaches often focus on a single process, relying on multi-sensor setups and extensive data generation, which limits generalization to new settings. Moreover, multi-sensor integration is often impractical in industrial environments. To address these limitations, this research investigates the transferability of predictive models using minimal training data, validated across two processes. Furthermore, it uses a simple setup with a single acceleration sensor to establish a low-cost data generation approach that facilitates the generalization of models to other processes via transfer learning. The study evaluates several machine learning models, including transformer-inspired convolutional neural networks (CNNs), long short-term memory networks (LSTMs), support vector machines (SVMs), and decision trees, trained on different input formats such as feature vectors and short-time Fourier transform (STFT). The performance of the models is evaluated on two machines and on different amounts of training data, including scenarios with significantly reduced datasets, providing insight into their effectiveness under constrained data conditions. The results demonstrate the potential of specific models and configurations for effective tool wear prediction, contributing to the development of more adaptable and efficient predictive maintenance strategies in machining. Notably, the ConvNeXt model achieves exceptional performance, with 99.1\% accuracy in identifying tool wear using data from only four milling tools operated until they are worn.
https://arxiv.org/abs/2412.19950
Academic Papers
svg
48cb1fba6ff031e79cfcb82a6f3c4e27bf3df5576d1050461f884e8f7849c8d8
2026-01-23T00:00:00-05:00
Explaining k-Nearest Neighbors: Abductive and Counterfactual Explanations
arXiv:2501.06078v2 Announce Type: replace Abstract: Despite the wide use of $k$-Nearest Neighbors as classification models, their explainability properties remain poorly understood from a theoretical perspective. While nearest neighbors classifiers offer interpretability from a ``data perspective'', in which the classification of an input vector $\bar{x}$ is explained by identifying the vectors $\bar{v}_1, \ldots, \bar{v}_k$ in the training set that determine the classification of $\bar{x}$, we argue that such explanations can be impractical in high-dimensional applications, where each vector has hundreds or thousands of features and it is not clear what their relative importance is. Hence, we focus on understanding nearest neighbor classifications through a ``feature perspective'', in which the goal is to identify how the values of the features in $\bar{x}$ affect its classification. Concretely, we study abductive explanations such as ``minimum sufficient reasons'', which correspond to sets of features in $\bar{x}$ that are enough to guarantee its classification, and counterfactual explanations based on the minimum distance feature changes one would have to perform in $\bar{x}$ to change its classification. We present a detailed landscape of positive and negative complexity results for counterfactual and abductive explanations, distinguishing between discrete and continuous feature spaces, and considering the impact of the choice of distance function involved. Finally, we show that despite some negative complexity results, Integer Quadratic Programming and SAT solving allow for computing explanations in practice.
https://arxiv.org/abs/2501.06078
Academic Papers
svg
a8eae12501297908e493d1bf6ec8f1035b4cfec3fb1d9b314df5103a8bef2b2f
2026-01-23T00:00:00-05:00
A domain decomposition strategy for natural imposition of mixed boundary conditions in port-Hamiltonian systems
arXiv:2501.06107v4 Announce Type: replace Abstract: In this contribution, a finite element scheme to impose mixed boundary conditions without introducing Lagrange multipliers is presented for hyperbolic systems described as port-Hamiltonian systems. The strategy relies on finite element exterior calculus and domain decomposition to interconnect two systems with dual input-output behavior. The spatial domain is split into two parts by introducing an arbitrary interface. Each subdomain is discretized with a mixed finite element formulation that introduces a uniform boundary condition in a natural way as the input. In each subdomain the finite element spaces are selected from a finite element subcomplex to obtain a stable discretization. The two systems are then interconnected together by making use of a feedback interconnection. This is achieved by discretizing the boundary inputs using appropriate spaces that couple the two formulations. The final systems include all boundary conditions explicitly and do not contain any Lagrange multiplier. Time integration is performed using the implicit midpoint or St\"ormer-Verlet scheme. The method can also be applied to semilinear systems containing algebraic nonlinearities. The proposed strategy is tested on different examples: geometrically exact intrinsic beam model, the wave equation, membrane elastodynamics and the Mindlin plate. Numerical tests assess the conservation properties of the scheme, the effectiveness of the methodology and its robustness against shear locking phenomena.
https://arxiv.org/abs/2501.06107
Academic Papers
svg
27304fd438d17f1df1a0193b6c999edfa8cb3d77e69e8907b2b1258650a19bf7
2026-01-23T00:00:00-05:00
NP-Hard Lower Bound Complexity for Semantic Self-Verification
arXiv:2501.15446v2 Announce Type: replace Abstract: We model Semantic Self-Verification (SSV) as the problem of determining whether a statement accurately characterizes its own semantic properties within a given interpretive framework that formalizes a challenge in AI safety and fairness: can an AI system verify that it has correctly interpreted rules intended to govern its behavior? We prove that SSV, in this specification, is NP-complete by constructing a polynomial-time reduction from 3-Satisfiability (3-SAT). Our reduction maps a 3-SAT formula to an instance of SSV involving ambiguous terms with binary interpretations and semantic constraints derived from logical clauses. This establishes that even simplified forms of semantic self-verification face computational barriers. The NP-complete lower bound has implications for AI safety and fairness approaches that rely on semantic interpretation of instructions, including but not limited to constitutional AI, alignment via natural language, and instruction-following systems. Approaches where an AI system verifies its understanding of directives may face this computational barrier. We argue that more realistic verification scenarios likely face even greater complexity.
https://arxiv.org/abs/2501.15446
Academic Papers
svg
d9d337d75bfe2d2788471f585d4b7d83be8bf1e0b7253546eb32046c0abf6aa2
2026-01-23T00:00:00-05:00
Information-theoretic Distinctions Between Deception and Confusion
arXiv:2501.16448v2 Announce Type: replace Abstract: We propose an information-theoretic formalization of the distinction between two fundamental AI safety failure modes: deceptive alignment and goal drift. While both can lead to systems that appear misaligned, we demonstrate that they represent distinct forms of information divergence occurring at different interfaces in the human-AI system. Deceptive alignment creates entropy between an agent's true goals and its observable behavior, while goal drift, or confusion, creates entropy between the intended human goal and the agent's actual goal. Though often observationally equivalent, these failures necessitate different interventions. We present a formal model and an illustrative thought experiment to clarify this distinction. We offer a formal language for re-examining prominent alignment challenges observed in Large Language Models (LLMs), offering novel perspectives on their underlying causes.
https://arxiv.org/abs/2501.16448
Academic Papers
svg
d971e8b307f8d90bcc2b3ca2e686e7c3037fc8d44fd97c1d700b7001fd2c22c9
2026-01-23T00:00:00-05:00
UniAttn: Reducing Inference Costs via Softmax Unification for Post-Training LLMs
arXiv:2502.00439v2 Announce Type: replace Abstract: Post-training is essential for adapting Large Language Models (LLMs) to real-world applications. Deploying post-trained models faces significant challenges due to substantial memory overhead and noticeable inference latency. Existing work has identified significant redundancies in LLMs and proposed efficient architectures, namely intra-layer KV sharing and cross-layer KV sharing. However, these methods still result in high inference time overhead, remaining suboptimal for post-training pre-trained LLMs. In this paper, we identify that the \texttt{Softmax} operation is a primary bottleneck for LLM inference and discover that it is actually highly redundant during post-training. We propose Softmax \textbf{Uni}fication in \textbf{Att}e\textbf{n}tion (\textbf{UniAttn}), a novel post-training method that unifies Softmax activations across transformer blocks to reduce LLM inference costs. Additionally, UniAttn adopts a linear projection to compensate for the errors induced by Softmax unification. Experiments show that UniAttn matches the performance of standard post-training while significantly reducing inference costs, outperforming existing efficient architectures during post-training.
https://arxiv.org/abs/2502.00439
Academic Papers
svg
7da6edde89270fbe42ca3df9b20a0bb67ad9d01f0a6da95246838d2b642cac91
2026-01-23T00:00:00-05:00
Sparse Data Diffusion for Scientific Simulations in Biology and Physics
arXiv:2502.02448v3 Announce Type: replace Abstract: Sparse data is fundamental to scientific simulations in biology and physics, from single-cell gene expression to particle calorimetry, where exact zeros encode physical absence rather than weak signal. However, existing diffusion models lack the physical rigor to faithfully represent this sparsity. This work introduces Sparse Data Diffusion (SDD), a generative method that explicitly models exact zeros via Sparsity Bits, unifying efficient ML generation with physically grounded sparsity handling. Empirical validation in particle physics and single-cell biology demonstrates that SDD achieves higher fidelity than baseline methods in capturing sparse patterns critical for scientific analysis, advancing scalable and physically faithful simulation.
https://arxiv.org/abs/2502.02448
Academic Papers
svg
d4def3398230cf253b9cc9fd4ee4e37c8de7ac65ee6edc238ca834899b46c7a3
2026-01-23T00:00:00-05:00
A Match Made in Heaven? AI-driven Matching of Vulnerabilities and Security Unit Tests
arXiv:2502.03365v4 Announce Type: replace Abstract: Software vulnerabilities are often detected via taint analysis, penetration testing, or fuzzing. They are also found via unit tests that exercise security-sensitive behavior with specific inputs, called vulnerability-witnessing tests. Generative AI models could help developers in writing them, but they require many examples to learn from, which are currently scarce. This paper introduces VuTeCo, an AI-driven framework for collecting examples of vulnerability-witnessing tests from Java repositories. VuTeCo carries out two tasks: (1) The "Finding" task to determine whether a unit test case is security-related, and (2) the "Matching" task to relate a test case to the vulnerability it witnesses. VuTeCo addresses the Finding task with UniXcoder, achieving an F0.5 score of 0.73 and a precision of 0.83 on a test set of unit tests from Vul4J. The Matching task is addressed using DeepSeek Coder, achieving an F0.5 score of 0.65 and a precision of 0.75 on a test set of pairs of unit tests and vulnerabilities from Vul4J. VuTeCo has been used in the wild on 427 Java projects and 1,238 vulnerabilities, obtaining 224 test cases confirmed to be security-related and 35 tests correctly matched to 29 vulnerabilities. The validated tests were collected in a new dataset called Test4Vul. VuTeCo lays the foundation for large-scale retrieval of vulnerability-witnessing tests, enabling future AI models to better understand and generate security unit tests.
https://arxiv.org/abs/2502.03365
Academic Papers
svg
1750a92d067b002e6f622dd245ff499ecc92348b73e01c3ea85fc561da17d580
2026-01-23T00:00:00-05:00
Cognitive AI framework 2.0: advances in the simulation of human thought
arXiv:2502.04259v2 Announce Type: replace Abstract: The Human Cognitive Simulation Framework proposes a governed cognitive AI architecture designed to improve personalization, adaptability, and long-term coherence in human-AI interaction. The framework integrates short-term memory (conversation context), long-term memory (interaction context), cognitive processing modules, and managed knowledge persistence into a unified architectural model that ensures contextual continuity across sessions and controlled accumulation of relevant information. A central contribution is a unified memory architecture supervised by explicit governance mechanisms, including algorithmic relevance validation, selective persistence, and auditability. The framework incorporates differentiated processing modules for logical, creative, and analogical reasoning, enabling both structured task execution and complex contextual inference. Through dynamic and selective knowledge updating, the system augments the capabilities of large language models without modifying their internal parameters, relying instead on retrieval-augmented generation and governed external memory. The proposed architecture addresses key challenges related to scalability, bias mitigation, and ethical compliance by embedding operational safeguards directly into the cognitive loop. These mechanisms establish a foundation for future work on continuous learning, sustainability, and multimodal cognitive interaction. This manuscript is a substantially revised and extended version of the previously released preprint (DOI:10.48550/arXiv.2502.04259).
https://arxiv.org/abs/2502.04259
Academic Papers
svg
b3d74753bbc24cb6abf056e28a1504209acde6a89838a81a0936962ea81b67ba
2026-01-23T00:00:00-05:00
"I never would have thought to say this": Example-Based Exploration to Balance Scientists' Writing Preferences with Public Science Communication Strategies
arXiv:2502.05287v3 Announce Type: replace Abstract: Public-facing science communication is important in garnering interest, engagement, and trust in science. Social media platforms provide scientists with opportunities to reach broader audiences, yet many resist adopting social media writing strategies because the strategies conflict with traditional science writing norms and personal preferences. To address this gap, we first evaluate readers' preferences for strategies such as examples, walkthroughs, and personal language. While many readers enjoyed science narratives that used these strategies, their effectiveness was nuanced and context-dependent, varying by topic and individual preference. Building on these findings, we design a system that uses contrastive examples to help scientists adopt and integrate these social media science writing strategies. In a user study with scientists, we found that presenting contrastive examples helped writers critically evaluate different narrative options, balance competing goals, and gain confidence in adapting social media writing strategies to fit both their topic and audience.
https://arxiv.org/abs/2502.05287
Academic Papers
svg
11b9ea8f78ff70f54fca664d653eed9cb571eba0e2c89032180822ec8b8ae050
2026-01-23T00:00:00-05:00
GENERator: A Long-Context Generative Genomic Foundation Model
arXiv:2502.07272v4 Announce Type: replace Abstract: The rapid advancement of DNA sequencing has produced vast genomic datasets, yet interpreting and engineering genomic function remain fundamental challenges. Recent large language models have opened new avenues for genomic analysis, but existing approaches are often limited by restricted training scope, constrained generative capability, or prohibitive computational cost. We introduce GENERator, a generative genomic foundation model for long-context DNA modeling, with a context length of 98k nucleotides, pre-trained on 386 billion nucleotides of eukaryotic DNA. Without task-specific fine-tuning, GENERator exhibits strong intrinsic capabilities: unsupervised embedding analyses reveal phylogenetically coherent structure, and sequence recovery benchmarks demonstrate generative accuracy comparable to or exceeding state-of-the-art models with substantially improved computational efficiency. In a zero-shot setting, GENERator achieves competitive variant effect prediction performance relative to alignment-based methods, while remaining fully alignment-free and broadly applicable across species. With task-specific fine-tuning, the model attains leading performance on established genomic benchmarks. We further demonstrate practical generative applications. GENERator can generate protein-coding DNA sequences that translate into structurally plausible proteins and, through a prompt-guided design framework, design cis-regulatory elements with targeted activity profiles, including synthetic super-enhancers validated by high-throughput UMI-STARR-seq assays. Together, these results establish GENERator as an efficient and biologically grounded framework for genomic interpretation and programmable sequence design. Code and supplementary resources are available at https://github.com/GenerTeam/GENERator.
https://arxiv.org/abs/2502.07272
Academic Papers
svg
c2999198a3cf3b558e801005e90b27aeb7abf8bcf634d379f94d65a26db3ebc4
2026-01-23T00:00:00-05:00
SCALAR: Scientific Citation-based Live Assessment of Long-context Academic Reasoning
arXiv:2502.13753v2 Announce Type: replace Abstract: Long-context understanding has emerged as a critical capability for large language models (LLMs). However, evaluating this ability remains challenging. We present SCALAR, a benchmark designed to assess citation-grounded long-context reasoning in academic writing. SCALAR leverages academic papers and their citation structure to automatically generate high-quality ground-truth labels without human annotation. It features controllable difficulty levels and a dynamic updating mechanism that mitigates data contamination. The benchmark includes two tasks: a multiple-choice QA format and a cloze-style citation prediction. We evaluate a range of state-of-the-art LLMs and find that the multiple-choice task effectively distinguishes model capabilities. While human experts achieve over 90% accuracy, most models struggle. The cloze-style task is even more challenging, with no model exceeding 50% accuracy. SCALAR provides a domain-grounded, continuously updating framework for tracking progress in citation-based long-context understanding.
https://arxiv.org/abs/2502.13753
Academic Papers
svg
e83367e184df3a7658b523faa2fbdf6cbeae8fdd8bac2ce85252acfc1caab46d
2026-01-23T00:00:00-05:00
I-MCTS: Enhancing Agentic AutoML via Introspective Monte Carlo Tree Search
arXiv:2502.14693v4 Announce Type: replace Abstract: Recent advancements in large language models (LLMs) have shown remarkable potential in automating machine learning tasks. However, existing LLM-based agents often struggle with low-diversity and suboptimal code generation. While recent work has introduced Monte Carlo Tree Search (MCTS) to address these issues, limitations persist in the quality and diversity of thoughts generated, as well as in the scalar value feedback mechanisms used for node selection. In this study, we introduce Introspective Monte Carlo Tree Search (I-MCTS), a novel approach that iteratively expands tree nodes through an introspective process that meticulously analyzes solutions and results from parent and sibling nodes. This facilitates a continuous refinement of the node in the search tree, thereby enhancing the overall decision-making process. Furthermore, we integrate a Large Language Model (LLM)-based value model to facilitate direct evaluation of each node's solution prior to conducting comprehensive computational rollouts. A hybrid rewarding mechanism is implemented to seamlessly transition the Q-value from LLM-estimated scores to actual performance scores, allowing higher-quality nodes to be traversed earlier. Applied to various ML tasks, our approach demonstrates a 4% absolute improvement in performance compared to strong open-source AutoML agents, showcasing its effectiveness in enhancing agentic AutoML systems. Resources available at https://github.com/jokieleung/I-MCTS
https://arxiv.org/abs/2502.14693
Academic Papers
svg
8f80a8a1398ec9a48ca73427cfbe0db403fc99fe0e42ddaa5bbdb6ae1f218585
2026-01-23T00:00:00-05:00
English K_Quantization of LLMs Does Not Disproportionately Diminish Multilingual Performance
arXiv:2503.03592v4 Announce Type: replace Abstract: For consumer usage of locally deployed LLMs, the GGUF format and k_quantization are invaluable tools for maintaining the performance of the original model while reducing it to sizes deployable with consumer-grade hardware. The number of bits dedicated to each weight from the original model is reduced based on how important they are thought to be during model inference. This importance is arrived at through the application of an 'importance matrix', a relatively small text document meant to be representative of the LLM's standard use-cases. In the vast majority of quants available online, this document is primarily written in English. It was therefore an open question whether performance on English language tasks was preserved through the sacrifice of multilingual performance and whether it can be preserved with alternate importance matrices. This article investigates these hypotheses by quantizing Llama3.3 70B on importance matrices written in three languages (English, Norwegian, and Malayalam) and evaluating them on the MixEval dataset in both English and Norwegian. All experiments yielded non-significant results, indicating that current quantization practices do not disproportionately harm multilingual performance.
https://arxiv.org/abs/2503.03592
Academic Papers
svg
43daf0d190cf0f4b9a7dd3891037b55ec47e430e2c85901ade535ce9fcf51fe5
2026-01-23T00:00:00-05:00
Games with $\omega$-Automatic Preference Relations
arXiv:2503.04759v3 Announce Type: replace Abstract: This paper investigates Nash equilibria (NEs) in multi-player turn-based games on graphs, where player preferences are modeled as $\omega$-automatic relations via deterministic parity automata. Unlike much of the existing literature, which focuses on specific reward functions, our results apply to any preference relation definable by an $\omega$-automatic relation. We analyze the computational complexity of determining the existence of an NE (possibly under some constraints), verifying whether a given strategy profile forms an NE, and checking whether a specific outcome can be realized by an NE. When a (constrained) NE exists, we show that there always exists one with finite-memory strategies. Finally, we explore fundamental properties of $\omega$-automatic relations and their implications for the existence of equilibria.
https://arxiv.org/abs/2503.04759
Academic Papers
svg
cb96875ec432f2d00eb5065930d9c4cd3fc00e4ff30fe7e18bccc3a2c80b2439
2026-01-23T00:00:00-05:00
Decoding Safety Feedback from Diverse Raters: A Data-driven Lens on Responsiveness to Severity
arXiv:2503.05609v5 Announce Type: replace Abstract: Ensuring the safety of Generative AI requires a nuanced understanding of pluralistic viewpoints. In this paper, we introduce a novel data-driven approach for analyzing ordinal safety ratings in pluralistic settings. Specifically, we address the challenge of interpreting nuanced differences in safety feedback from a diverse population expressed via ordinal scales (e.g., a Likert scale). We define non-parametric responsiveness metrics that quantify how raters convey broader distinctions and granular variations in the severity of safety violations. Leveraging publicly available datasets of pluralistic safety feedback as our case studies, we investigate how raters from different demographic groups use an ordinal scale to express their perceptions of the severity of violations. We apply our metrics across violation types, demonstrating their utility in extracting nuanced insights that are crucial for aligning AI systems reliably in multi-cultural contexts. We show that our approach can inform rater selection and feedback interpretation by capturing nuanced viewpoints across different demographic groups, hence improving the quality of pluralistic data collection and in turn contributing to more robust AI alignment.
https://arxiv.org/abs/2503.05609
Academic Papers
svg
c1465b67c7ece4e99b793a84cd2a8bef84e070a10bce0b384fced3dc197796ae
2026-01-23T00:00:00-05:00
MedSimAI: Simulation and Formative Feedback Generation to Enhance Deliberate Practice in Medical Education
arXiv:2503.05793v2 Announce Type: replace Abstract: Medical education faces challenges in providing scalable, consistent clinical skills training. Simulation with standardized patients (SPs) develops communication and diagnostic skills but remains resource-intensive and variable in feedback quality. Existing AI-based tools show promise yet often lack comprehensive assessment frameworks, evidence of clinical impact, and integration of self-regulated learning (SRL) principles. Through a multi-phase co-design process with medical education experts, we developed MedSimAI, an AI-powered simulation platform that enables deliberate practice through interactive patient encounters with immediate, structured feedback. Leveraging large language models, MedSimAI generates realistic clinical interactions and provides automated assessments aligned with validated evaluation frameworks. In a multi-institutional deployment (410 students; 1,024 encounters across three medical schools), 59.5 percent engaged in repeated practice. At one site, mean Objective Structured Clinical Examination (OSCE) history-taking scores rose from 82.8 to 88.8 (p < 0.001, Cohen's d = 0.75), while a second site's pilot showed no significant change. Automated scoring achieved 87 percent accuracy in identifying proficiency thresholds on the Master Interview Rating Scale (MIRS). Mixed-effects analyses revealed institution and case effects. Thematic analysis of 840 learner reflections highlighted challenges in missed items, organization, review of systems, and empathy. These findings position MedSimAI as a scalable formative platform for history-taking and communication, motivating staged curriculum integration and realism enhancements for advanced learners.
https://arxiv.org/abs/2503.05793
Academic Papers
svg
d87bd6cf75599be1713d25d1ff09d9ea67cd524ea86eac3b56f2c1803c363cf7
2026-01-23T00:00:00-05:00
GRITHopper: Decomposition-Free Multi-Hop Dense Retrieval
arXiv:2503.07519v2 Announce Type: replace Abstract: Decomposition-based multi-hop retrieval methods rely on many autoregressive steps to break down complex queries, which breaks end-to-end differentiability and is computationally expensive. Decomposition-free methods tackle this, but current decomposition-free approaches struggle with longer multi-hop problems and generalization to out-of-distribution data. To address these challenges, we introduce GRITHopper-7B, a novel multi-hop dense retrieval model that achieves state-of-the-art performance on both in-distribution and out-of-distribution benchmarks. GRITHopper combines generative and representational instruction tuning by integrating causal language modeling with dense retrieval training. Through controlled studies, we find that incorporating additional context after the retrieval process, referred to as post-retrieval language modeling, enhances dense retrieval performance. By including elements such as final answers during training, the model learns to better contextualize and retrieve relevant information. GRITHopper-7B offers a robust, scalable, and generalizable solution for multi-hop dense retrieval, and we release it to the community for future research and applications requiring multi-hop reasoning and retrieval capabilities.
https://arxiv.org/abs/2503.07519
Academic Papers
svg
230bd79f8da7dc374bd0765e51436e49f48597effc6e1e9fa18776e5b7a02605
2026-01-23T00:00:00-05:00
Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data
arXiv:2503.10883v2 Announce Type: replace Abstract: Large language models are being rapidly deployed across many fields such as healthcare, finance, transportation, and energy, where time-series data are fundamental components. The current works are still limited in their ability to perform reasoning that involves both time-series and the corresponding textual content. We address this gap by introducing Chat-TS, a large language model (LLM) based framework designed to support reasoning over time series and textual data. Unlike traditional models, Chat-TS integrates time-series tokens into LLMs' vocabulary, enhancing its reasoning ability over both modalities without compromising core natural language capabilities. To support learning and evaluation, we contribute new datasets: the TS Instruct Training Dataset (pairing diverse time-series data with relevant text instructions and responses for instruction tuning), the TS Instruct Question and Answer (QA) Gold Dataset (multiple-choice questions to evaluate multimodal reasoning), and a TS Instruct Quantitative Probing Set (a small subset of TS Instruct QA reasoning tasks alongside math and decision-making questions for LLM evaluation). We design a training strategy to preserve the inherent reasoning capabilities of LLMs while augmenting them for time-series reasoning. Experiments show that Chat-TS achieves state-of-the-art performance in multimodal reasoning tasks by maintaining strong natural language proficiency while improving time-series reasoning.
https://arxiv.org/abs/2503.10883
Academic Papers
svg
de5642fe1372be608870d55386525394ae80334efba4fc989e9bb6d4e5f1dddd
2026-01-23T00:00:00-05:00
Variational Bayesian Personalized Ranking
arXiv:2503.11067v2 Announce Type: replace Abstract: Pairwise learning underpins implicit collaborative filtering, yet its effectiveness is often hindered by sparse supervision, noisy interactions, and popularity-driven exposure bias. In this paper, we propose Variational Bayesian Personalized Ranking (VarBPR), a tractable variational framework for implicit-feedback pairwise learning that offers principled exposure controllability and theoretical interpretability. VarBPR reformulates pairwise learning as variational inference over discrete latent indexing variables, explicitly modeling noise and indexing uncertainty, and divides training into two stages: variational inference and variational learning. In the variational inference stage, we develop a variational formulation that integrates preference alignment, denoising, and popularity debiasing under a unified ELBO/regularization objective, deriving closed-form posteriors with clear control semantics: the prior encodes a target exposure pattern, while temperature/regularization strength controls posterior-prior adherence. As a result, exposure controllability becomes an endogenous and interpretable outcome of variational inference. In the variational learning stage, we propose a posterior-compression objective that reduces the ideal ELBO's computational complexity from polynomial to linear, with the approximation justified by an explicit Jensen-gap upper bound. Theoretically, we provide interpretable generalization guarantees by identifying a structural error component and revealing the opportunity cost of prioritizing certain exposure patterns (e.g., long-tail), offering a concrete analytical lens for designing controllable recommender systems. Empirically, we validate VarBPR across popular backbones; it demonstrates consistent gains in ranking accuracy, enables controlled long-tail exposure, and preserves the linear-time complexity of BPR.
https://arxiv.org/abs/2503.11067
Academic Papers
svg
7fffed33b880e26c0766d80f5cbf7c7978c2004066f68b866718eb825c6ba9ba
2026-01-23T00:00:00-05:00
Simulating Dual-Pixel Images From Ray Tracing For Depth Estimation
arXiv:2503.11213v2 Announce Type: replace Abstract: Many studies utilize dual-pixel (DP) sensor phase characteristics for various applications, such as depth estimation and deblurring. However, since the DP image features are entirely determined by the camera hardware, DP-depth paired datasets are very scarce, especially when performing depth estimation on customized cameras. To overcome this, studies simulate DP images using ideal optical system models. However, these simulations often violate real optical propagation laws, leading to poor generalization to real DP data. To address this, we investigate the domain gap between simulated and real DP data, and propose solutions using the Simulating DP images from ray tracing (Sdirt) scheme. Sdirt generates realistic DP images via ray tracing and integrates them into the depth estimation training pipeline. Experimental results show that models trained with Sdirt-simulated images generalize better to real DP data. The code and collected datasets will be available at github.com/LinYark/Sdirt
https://arxiv.org/abs/2503.11213
Academic Papers
svg
63030e10ca82eb8ea65d5c70d241eac07d47a4e41da79ebe57949cc3102d1ac5
2026-01-23T00:00:00-05:00
A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models
arXiv:2503.15205v2 Announce Type: replace Abstract: This research examines the emerging technique of step-around prompt engineering in GenAI research, a method that deliberately bypasses AI safety measures to expose underlying biases and vulnerabilities in GenAI models. We discuss how Internet-sourced training data introduces unintended biases and misinformation into AI systems, which can be revealed through the careful application of step-around techniques. Drawing parallels with red teaming in cybersecurity, we argue that step-around prompting serves a vital role in identifying and addressing potential vulnerabilities while acknowledging its dual nature as both a research tool and a potential security threat. Our findings highlight three key implications: (1) the persistence of Internet-derived biases in AI training data despite content filtering, (2) the effectiveness of step-around techniques in exposing these biases when used responsibly, and (3) the need for robust safeguards against malicious applications of these methods. We conclude by proposing an ethical framework for using step-around prompting in AI research and development, emphasizing the importance of balancing system improvements with security considerations.
https://arxiv.org/abs/2503.15205
Academic Papers
svg
4319c208cf14eabf3600635ac6feb7746c03e457c7816fb393d2599660840cc1
2026-01-23T00:00:00-05:00
ImputeGAP: A Comprehensive Library for Time Series Imputation
arXiv:2503.15250v2 Announce Type: replace Abstract: With the prevalence of sensor failures, imputation, the process of estimating missing values, has emerged as the cornerstone of time series data pre-processing. While numerous imputation algorithms have been developed to repair these data gaps, existing time series libraries provide limited imputation support. Furthermore, they often lack the ability to simulate realistic time series missingness patterns and fail to account for the impact of the imputed data on subsequent downstream analysis. This paper introduces ImputeGAP, a comprehensive library for time series imputation that supports a diverse range of imputation methods and modular missing data simulation, catering to datasets with varying characteristics. The library includes extensive customization options, such as automated hyperparameter tuning, benchmarking, explainability, downstream evaluation, and compatibility with popular time series frameworks.
https://arxiv.org/abs/2503.15250
Academic Papers
svg
d26f8290e1adc0267e4a81e7a024f10a30552e12ac33bad708601c32475ca797
2026-01-23T00:00:00-05:00
Trees in Coalgebra from Generalized Reachability
arXiv:2503.15585v4 Announce Type: replace Abstract: An automaton is called reachable if every state is reachable from the initial state. This notion has been generalized coalgebraically in two ways: first, via a universal property on pointed coalgebras, namely, that a reachable coalgebra has no proper subcoalgebras; and second, a coalgebra is reachable if it arises as the union of an iterative computation of successor states, starting from the initial state. In the current paper, we present corresponding universal properties and iterative constructions for trees. The universal property captures when a coalgebra is a tree, namely, when it has no proper tree unravellings. The iterative construction unravels an arbitrary coalgebra to a tree. We show that this yields the expected notion of tree for a variety of standard examples. We obtain our characterization of trees by first generalizing the previous theory of reachable coalgebras and of a minimal object in a category, related to projectivity. Surprisingly, both the universal property and the iterative construction for trees arise as instances of this generalized notion of reachability. Our iterative construction works for all analytic set functors.
https://arxiv.org/abs/2503.15585
Academic Papers
svg
5616caa55706e23bf74cd47c944d2d96df0f791ec4afd7606000d23ae260ee52
2026-01-23T00:00:00-05:00
Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays
arXiv:2503.20062v2 Announce Type: replace Abstract: People are increasingly using technologies equipped with large language models (LLM) to write texts for formal communication, which raises two important questions at the intersection of technology and society: Who do LLMs write like (model alignment); and can LLMs be prompted to change who they write like (model steerability). We investigate these questions in the high-stakes context of undergraduate admissions at a selective university by comparing lexical and sentence variation between essays written by 30,000 applicants to two types of LLM-generated essays: one prompted with only the essay question used by the human applicants; and another with additional demographic information about each applicant. We consistently find that both types of LLM-generated essays are linguistically distinct from human-authored essays, regardless of the specific model and analytical approach. Further, prompting a specific sociodemographic identity is remarkably ineffective in aligning the model with the linguistic patterns observed in human writing from this identity group. This holds along the key dimensions of sex, race, first-generation status, and geographic location. The demographically prompted and unprompted synthetic texts were also more similar to each other than to the human text, meaning that prompting did not alleviate homogenization. These issues of model alignment and steerability in current LLMs raise concerns about the use of LLMs in high-stakes contexts.
https://arxiv.org/abs/2503.20062
Academic Papers
svg
19bb1afc3dbcc9f1d94ad6fa8e8dd8ff7dedaf507536834fff42322646101bea
2026-01-23T00:00:00-05:00
Is Your Writing Being Mimicked by AI? Unveiling Imitation with Invisible Watermarks in Creative Writing
arXiv:2504.00035v3 Announce Type: replace Abstract: Efficient knowledge injection methods for Large Language Models (LLMs), such as In-Context Learning, knowledge editing, and efficient parameter fine-tuning, significantly enhance model utility on downstream tasks. However, they also pose substantial risks of unauthorized imitation and compromised data provenance for high-value unstructured data assets like creative works. Current copyright protection methods for creative works predominantly focus on visual arts, leaving a critical and unaddressed data engineering challenge in the safeguarding of creative writing. In this paper, we propose WIND (Watermarking via Implicit and Non-disruptive Disentanglement), a novel zero-watermarking, verifiable and implicit scheme that safeguards creative writing databases by providing verifiable copyright protection. Specifically, we decompose creative essence into five key elements, which are extracted utilizing LLMs through a designed instance delimitation mechanism and consolidated into condensed-lists. These lists enable WIND to convert core copyright attributes into verifiable watermarks via implicit encoding within a disentanglement creative space, where 'disentanglement' refers to the separation of creative-specific and creative-irrelevant features. This approach, utilizing implicit encoding, avoids distorting fragile textual content. Extensive experiments demonstrate that WIND effectively verifies creative writing copyright ownership against AI imitation, achieving F1 scores above 98% and maintaining robust performance under stringent low false-positive rates where existing state-of-the-art text watermarking methods struggle.
https://arxiv.org/abs/2504.00035
Academic Papers
svg
3f5a3465ac6947a30a52342beeca34bee591c517da0382b3487736c92e8f0deb
2026-01-23T00:00:00-05:00
On shallow feedforward neural networks with inputs from a topological space
arXiv:2504.02321v2 Announce Type: replace Abstract: We study feedforward neural networks with inputs from a topological space (TFNNs). We prove a universal approximation theorem for shallow TFNNs, which demonstrates their capacity to approximate any continuous function defined on this topological space. As an application, we obtain an approximative version of Kolmogorov's superposition theorem for compact metric spaces.
https://arxiv.org/abs/2504.02321
Academic Papers
svg
d4fa470b60c10ad579df6de5dadf11914a8dbe4b2b7569fb312ca1d0ab478289
2026-01-23T00:00:00-05:00
A Scalable Predictive Modelling Approach to Identifying Duplicate Adverse Event Reports for Drugs and Vaccines
arXiv:2504.03729v2 Announce Type: replace Abstract: Objectives: To advance state-of-the-art for duplicate detection in large-scale pharmacovigilance databases and achieve more consistent performance across adverse event reports from different countries. Background: Unlinked adverse event reports referring to the same case impede statistical analysis and may mislead clinical assessment. Pharmacovigilance relies on large databases of adverse event reports to discover potential new causal associations, and computational methods are required to identify duplicates at scale. Current state-of-the-art is statistical record linkage which outperforms rule-based approaches. In particular, vigiMatch is in routine use for VigiBase, the WHO global database of adverse event reports, and represents the first statistical duplicate detection approach in pharmacovigilance deployed at scale. Originally developed for both medicines and vaccines, its application to vaccines has been limited due to inconsistent performance across countries. Methods: This paper extends vigiMatch from probabilistic record linkage to predictive modelling, refining features for medicines, vaccines, and adverse events using country-specific reporting rates, extracting dates from free text, and training separate support vector machine classifiers for medicines and vaccines. Recall was evaluated using 5 independent labelled test sets. Precision was assessed by annotating random selections of report pairs classified as duplicates. Results: Precision for the new method was 92% for vaccines and 54% for medicines, compared with 41% for the comparator method. Recall ranged from 80-85% across test sets for vaccines and from 40-86% for medicines, compared with 24-53% for the comparator method. Conclusion: Predictive modeling, use of free text, and country-specific features advance state-of-the-art for duplicate detection in pharmacovigilance.
https://arxiv.org/abs/2504.03729
Academic Papers
svg
90629c74f9cbdf888588785428d37f9f4ba8b2e21bfbf827488f2be08b06519d
2026-01-23T00:00:00-05:00
Embracing Ambiguity: Bayesian Nonparametrics and Stakeholder Participation for Ambiguity-Aware Safety Evaluation
arXiv:2504.15211v2 Announce Type: replace Abstract: Evaluations of generative AI models often collapse nuanced behaviour into a single number computed for a single decoding configuration. Such point estimates obscure tail risks, demographic disparities, and the existence of multiple near-optimal operating points. We propose a unified framework that embraces multiplicity by modelling the distribution of harmful behaviour across the entire space of decoding knobs and prompts, quantifying risk through tail-focused metrics, and integrating stakeholder preferences. Our technical contributions are threefold: (i) we formalise decoding Rashomon sets, regions of knob space whose risk is near-optimal under given criteria and measure their size and disagreement; (ii) we develop a dependent Dirichlet process (DDP) mixture with stakeholder-conditioned stick-breaking weights to learn multi-modal harm surfaces; and (iii) we introduce an active sampling pipeline that uses Bayesian deep learning surrogates to explore knob space efficiently. Our approach bridges multiplicity theory, Bayesian nonparametrics, and stakeholder-aligned sensitivity analysis, paving the way for trustworthy deployment of generative models.
https://arxiv.org/abs/2504.15211
Academic Papers
svg