| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 1c47c6b12ee81591ef474aa01b12a2aa4d2d8f8fe759c46f3c554d71e41d9e3c | 2026-01-23T00:00:00-05:00 | SPOT: An Annotated French Corpus and Benchmark for Detecting Critical Interventions in Online Conversations | arXiv:2511.07405v3 Announce Type: replace Abstract: We introduce SPOT (Stopping Points in Online Threads), the first annotated corpus translating the sociological concept of stopping point into a reproducible NLP task. Stopping points are ordinary critical interventions that pause or redirect online discussions through a range of forms (irony, subtle doubt, or fragmentary arguments) that frameworks like counterspeech or social correction often overlook. We operationalize this concept as a binary classification task and provide reliable annotation guidelines. The corpus contains 43,305 manually annotated French Facebook comments linked to URLs flagged as false information by social media users, enriched with contextual metadata (article, post, parent comment, page or group, and source). We benchmark fine-tuned encoder models (CamemBERT) and instruction-tuned LLMs under various prompting strategies. Results show that fine-tuned encoders outperform prompted LLMs in F1 score by more than 10 percentage points, confirming the importance of supervised learning for emerging non-English social media tasks. Incorporating contextual metadata further improves encoder models' F1 scores from 0.75 to 0.78. We release the anonymized dataset, along with the annotation guidelines and code, in our code repository to foster transparency and reproducible research. | https://arxiv.org/abs/2511.07405 | Academic Papers | svg |
| 93cd1ffbd1a6d8fac2fa6c4d4e561bc70a9ed3b4531b774d4a4d4c874ec91404 | 2026-01-23T00:00:00-05:00 | One Router to Route Them All: Homogeneous Expert Routing for Heterogeneous Graph Transformers | arXiv:2511.07603v2 Announce Type: replace Abstract: A common practice in heterogeneous graph neural networks (HGNNs) is to condition parameters on node/edge types, assuming types reflect semantic roles. However, this can cause overreliance on surface-level labels and impede cross-type knowledge transfer. We explore integrating Mixture-of-Experts (MoE) into HGNNs, a direction underexplored despite MoE's success in homogeneous settings. Crucially, we question the need for type-specific experts. We propose Homogeneous Expert Routing (HER), an MoE layer for Heterogeneous Graph Transformers (HGT) that stochastically masks type embeddings during routing to encourage type-agnostic specialization. Evaluated on IMDB, ACM, and DBLP for link prediction, HER consistently outperforms standard HGT and a type-separated MoE baseline. Analysis on IMDB shows HER experts specialize by semantic patterns (e.g., movie genres) rather than node types, confirming routing is driven by latent semantics. Our work demonstrates that regularizing type dependence in expert routing yields more generalizable, efficient, and interpretable representations, a new design principle for heterogeneous graph learning. | https://arxiv.org/abs/2511.07603 | Academic Papers | svg |
| 457f7881445578e762ecb6dd7804976ed38e033fd6a700698b813838b6fbdad7 | 2026-01-23T00:00:00-05:00 | Online Operator Design in Evolutionary Optimization for Flexible Job Shop Scheduling via Large Language Models | arXiv:2511.16485v3 Announce Type: replace Abstract: Customized static operator design has enabled widespread application of Evolutionary Algorithms (EAs), but their search effectiveness often deteriorates as evolution progresses. Dynamic operator configuration approaches attempt to alleviate this issue, but they typically rely on predefined operator structures and localized parameter control, lacking sustained adaptive optimization throughout evolution. To overcome these limitations, this work leverages Large Language Models (LLMs) to perceive evolutionary dynamics and enable operator-level meta-evolution. The proposed framework, LLMs for online operator design in Evolutionary Optimization, named LLM4EO, comprises three components: knowledge-transfer-based operator design, evolution perception and analysis, and adaptive operator evolution. Firstly, operators are initialized by leveraging LLMs to distill and transfer knowledge from well-established operators. Then, search behaviors and potential limitations of operators are analyzed by integrating fitness performance with evolutionary features, accompanied by suggestions for improvement. Upon stagnation of population evolution, an LLM-driven meta-operator dynamically optimizes gene selection of operators by prompt-guided improvement strategies. This approach achieves co-evolution of solutions and operators within a unified optimization framework, introducing a novel paradigm for enhancing the efficiency and adaptability of EAs. Finally, extensive experiments on multiple benchmarks of the flexible job shop scheduling problem demonstrate that LLM4EO accelerates population evolution and outperforms tailored EAs. | https://arxiv.org/abs/2511.16485 | Academic Papers | svg |
| 20c66d1cc27206b38341a6201a82bb7b78c1400842b5762654590cbe60f2fb9a | 2026-01-23T00:00:00-05:00 | Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter | arXiv:2511.16665v2 Announce Type: replace Abstract: The emergence of Large Language Models (LLMs) with strong reasoning capabilities marks a significant milestone, unlocking new frontiers in complex problem-solving. However, training these reasoning models, typically using Reinforcement Learning (RL), encounters critical efficiency bottlenecks: response generation during RL training exhibits a persistent long-tail distribution, where a few very long responses dominate execution time, wasting resources and inflating costs. To address this, we propose TLT, a system that accelerates reasoning RL training losslessly by integrating adaptive speculative decoding. Applying speculative decoding in RL is challenging due to the dynamic workloads, evolving target model, and draft model training overhead. TLT overcomes these obstacles with two synergistic components: (1) Adaptive Drafter, a lightweight draft model trained continuously on idle GPUs during long-tail generation to maintain alignment with the target model at no extra cost; and (2) Adaptive Rollout Engine, which maintains a memory-efficient pool of pre-captured CUDAGraphs and adaptively selects suitable speculative decoding strategies for each input batch. Evaluations demonstrate that TLT achieves over 1.7x end-to-end RL training speedup over state-of-the-art systems, preserves the model accuracy, and yields a high-quality draft model as a free byproduct suitable for efficient deployment. Code is released at https://github.com/mit-han-lab/fastrl. | https://arxiv.org/abs/2511.16665 | Academic Papers | svg |
| 5f81e48ae21c3b5f0d815f301fc2b70f7ff50caa5e936e7b97f4477a707fcff7 | 2026-01-23T00:00:00-05:00 | Boundary-Aware Adversarial Filtering for Reliable Diagnosis under Extreme Class Imbalance | arXiv:2511.17629v2 Announce Type: replace Abstract: We study classification under extreme class imbalance where recall and calibration are both critical, for example in medical diagnosis scenarios. We propose AF-SMOTE, a mathematically motivated augmentation framework that first synthesizes minority points and then filters them by an adversarial discriminator and a boundary utility model. We prove that, under mild assumptions on the decision boundary smoothness and class-conditional densities, our filtering step monotonically improves a surrogate of $F_\beta$ (for $\beta \geq 1$) while not inflating the Brier score. On MIMIC-IV proxy label prediction and canonical fraud detection benchmarks, AF-SMOTE attains higher recall and average precision than strong oversampling baselines (SMOTE, ADASYN, Borderline-SMOTE, SVM-SMOTE), and yields the best calibration. We further validate these gains across multiple additional datasets beyond MIMIC-IV. Our successful application of AF-SMOTE to a healthcare dataset using a proxy label demonstrates in a disease-agnostic way its practical value in clinical situations, where missing true positive cases in rare diseases can have severe consequences. | https://arxiv.org/abs/2511.17629 | Academic Papers | svg |
| 1a3fe0510926311b2e099ac24bf9c76d1eeded944cfd946fab08d8ca9937c426 | 2026-01-23T00:00:00-05:00 | Radiation-Preserving Selective Imaging for Pediatric Hip Dysplasia: A Cross-Modal Ultrasound-Xray Policy with Limited Labels | arXiv:2511.18457v2 Announce Type: replace Abstract: We study an ultrasound-first, radiation-preserving policy for developmental dysplasia of the hip (DDH) that requests a radiograph only when needed. We (i) pretrain modality-specific encoders (ResNet-18) with SimSiam on a large unlabelled registry (37,186 ultrasound; 19,546 radiographs), (ii) freeze the backbones and fit small, measurement-faithful heads on DDH-relevant landmarks and measurements, and (iii) calibrate a one-sided conformal deferral rule on ultrasound predictions that provides finite-sample marginal coverage guarantees under exchangeability, using a held-out calibration set. Ultrasound heads predict Graf alpha, beta, and femoral head coverage; X-ray heads predict acetabular index (AI), center-edge (CE) angle, and IHDI grade. On our held-out labeled evaluation set, ultrasound measurement error is modest (e.g., alpha MAE ≈ 9.7 degrees, coverage MAE ≈ 14.0%), while radiographic probes achieve AI and CE MAEs of ≈ 7.6 degrees and ≈ 8.9 degrees, respectively. The calibrated US-only policy is explored across rule families (alpha-only; alpha OR coverage; alpha AND coverage), conformal miscoverage levels, and per-utility trade-offs using decision-curve analysis. Conservative settings yield high coverage with near-zero US-only rates; permissive settings (e.g., alpha OR coverage at larger deltas) achieve non-zero US-only throughput with expected coverage tradeoffs. The result is a simple, reproducible pipeline that turns limited labels into interpretable measurements and tunable selective imaging curves suitable for clinical handoff and future external validation. | https://arxiv.org/abs/2511.18457 | Academic Papers | svg |
| 9885a1242b2ded57b9bae526150b68db7b9073bf493d59e7fdd4531ff118dc5e | 2026-01-23T00:00:00-05:00 | MetaDCSeg: Robust Medical Image Segmentation via Meta Dynamic Center Weighting | arXiv:2511.18894v2 Announce Type: replace Abstract: Medical image segmentation is crucial for clinical applications, but it is frequently disrupted by noisy annotations and ambiguous anatomical boundaries, which lead to instability in model training. Existing methods typically rely on global noise assumptions or confidence-based sample selection, which inadequately mitigate the performance degradation caused by annotation noise, especially in challenging boundary regions. To address this issue, we propose MetaDCSeg, a robust framework that dynamically learns optimal pixel-wise weights to suppress the influence of noisy ground-truth labels while preserving reliable annotations. By explicitly modeling boundary uncertainty through a Dynamic Center Distance (DCD) mechanism, our approach utilizes weighted feature distances for foreground, background, and boundary centers, directing the model's attention toward hard-to-segment pixels near ambiguous boundaries. This strategy enables more precise handling of structural boundaries, which are often overlooked by existing methods, and significantly enhances segmentation performance. Extensive experiments across four benchmark datasets with varying noise levels demonstrate that MetaDCSeg consistently outperforms existing state-of-the-art methods. | https://arxiv.org/abs/2511.18894 | Academic Papers | svg |
| 2f1ac46f2d35fdcec3313df88d24fe29443089d3a6c65ecec4a82370f8ca11a8 | 2026-01-23T00:00:00-05:00 | Bipartiteness in Progressive Second-Price Multi-Auction Networks with Perfect Substitute | arXiv:2511.19225v2 Announce Type: replace Abstract: We consider a bipartite network of buyers and sellers, where the sellers run locally independent Progressive Second-Price (PSP) auctions, and buyers may participate in multiple auctions, forming a multi-auction market with perfect substitutes. The paper develops a projection-based influence framework for decentralized PSP auctions. We formalize primary and expanded influence sets using projections on the active bid index set and show how partial orders on bid prices govern allocation, market shifts, and the emergence of saturated one-hop shells. Our results highlight the robustness of PSP auctions in decentralized environments by introducing saturated components and a structured framework for phase transitions in multi-auction dynamics. This structure ensures deterministic coverage of the strategy space, enabling stable and truthful embedding in the larger game. We further model intra-round dynamics using an index to capture coordinated asynchronous seller updates coupled through buyers' joint constraints. Together, these constructions explain how local interactions propagate across auctions and give a premise for coherent equilibria, without requiring global information or centralized control. | https://arxiv.org/abs/2511.19225 | Academic Papers | svg |
| 183a731d82574ba400138285f9b26f1525ef9c7b3585d33bc7bf5c8dae232240 | 2026-01-23T00:00:00-05:00 | Design and Validation of a Modular Smart Headband with Embroidered Electrodes for Comfortable EEG Monitoring | arXiv:2511.19348v3 Announce Type: replace Abstract: The wearable EEG device sector is advancing rapidly, enabling fast and reliable detection of brain activity for investigating brain function and pathology. However, many current EEG systems remain challenging for users with neurological conditions due to bulky wiring, lengthy skin preparation, gel-induced discomfort, risk of irritation, and high cost, all of which limit long-term monitoring. This study presents a proof-of-concept smart modular headband incorporating adjustable, replaceable embroidered electrodes for EEG acquisition. Compared with conventional devices, the smart headband reduces wiring complexity, removes the need for skin preparation, and minimizes irritation associated with gel-based electrodes. Its modular structure allows adjustable fitting without requiring multiple size options, enhancing comfort and adaptability for everyday EEG monitoring. The smart headband prototype was tested on 10 healthy university students using three behavioral tasks: (1) eyes open/closed, (2) auditory oddball, and (3) visual oddball paradigms. The smart headband successfully captured alpha peaks during the eyes-open/closed task (p = 0.01) and reliably recorded the event-related potentials associated with the oddball effects - the auditory P300 (p = 0.014) and the visual N170 (p = 0.013) - demonstrating performance equivalent to a commercial sponge-based EEG cap. A user survey indicated improved comfort and usability, with participants reporting that the soft, structurally designed headband enhanced wearability relative to a conventional cap. Overall, this prototype provides a comfortable, modular, and cost-effective solution for reliable EEG monitoring in real-world applications. | https://arxiv.org/abs/2511.19348 | Academic Papers | svg |
| 711100bd3ca65d7391afe6090436f99ab06f32a769eb62f1a8594a14586eecd6 | 2026-01-23T00:00:00-05:00 | EfficientXpert: Efficient Domain Adaptation for Large Language Models via Propagation-Aware Pruning | arXiv:2511.19935v2 Announce Type: replace Abstract: Large language models (LLMs) are increasingly adapted into domain-specific variants for applications in law, healthcare, and finance. Their scale, however, limits deployment in resource-constrained settings, and existing compression approaches often either degrade after domain adaptation or require substantial additional computation. We introduce EfficientXpert, a lightweight framework for domain pruning that integrates ForeSight Mask, a propagation-aware criterion for selecting weights to prune without backpropagation, and Partial Brain Surgeon, an efficient closed-form update for low-rank adapters under a fixed sparsity pattern. With fine-tuning cost comparable to standard LoRA, EfficientXpert converts a general pretrained model into a sparse, domain-adapted expert in a single pruning step. Across health and legal benchmarks, EfficientXpert reaches up to 98 percent of dense performance at 40 percent sparsity, improving over prior pruning baselines while matching LoRA training time and staying within 1 percent of LoRA peak GPU memory in our experiments. | https://arxiv.org/abs/2511.19935 | Academic Papers | svg |
| e6e015dd16aefe782ab16ed771a1ef2bc3164377ba2bd967d7214caf802784fb | 2026-01-23T00:00:00-05:00 | Interpretable Air Pollution Forecasting by Physics-Guided Spatiotemporal Decoupling | arXiv:2511.20257v2 Announce Type: replace Abstract: Accurate and interpretable air pollution forecasting is crucial for public health, but most models face a trade-off between performance and interpretability. This study proposes a physics-guided, interpretable-by-design spatiotemporal learning framework. The model decomposes the spatiotemporal behavior of air pollutant concentrations into two transparent, additive modules. The first is a physics-guided transport kernel with directed weights conditioned on wind and geography (advection). The second is an explainable attention mechanism that learns local responses and attributes future concentrations to specific historical lags and exogenous drivers. Evaluated on a comprehensive dataset from the Stockholm region, our model consistently outperforms state-of-the-art baselines across multiple forecasting horizons. Our model's integration of high predictive performance and spatiotemporal interpretability provides a more reliable foundation for operational air-quality management in real-world applications. | https://arxiv.org/abs/2511.20257 | Academic Papers | svg |
| 6574b1fa356459fbaa1a6db2d486d73249db166d27418b3808ff7b5d7bc58cc1 | 2026-01-23T00:00:00-05:00 | DRS-OSS: Practical Diff Risk Scoring with LLMs | arXiv:2511.21964v2 Announce Type: replace Abstract: In large-scale open-source projects, hundreds of pull requests land daily, each a potential source of regressions. Diff risk scoring (DRS) estimates how likely an individual code change is to introduce a defect. This score can help prioritize reviews and tests, gate high-risk changes, and manage CI/CD capacity. Building on this idea, we present DRS-OSS, an open-source DRS tool equipped with a public API, web UI, and GitHub plugin. DRS-OSS is a deployable, LLM-based diff risk scoring system for open-source projects built around a fine-tuned Llama 3.1 8B sequence classifier. The model consumes long-context representations that combine commit messages, structured diffs, and change metrics, and is trained on the ApacheJIT dataset. Using parameter-efficient adaptation, 4-bit QLoRA, and DeepSpeed ZeRO-3 CPU offloading, we train the model with 22k-token contexts on a single 20 GB GPU, demonstrating a highly efficient training procedure. On the ApacheJIT benchmark, DRS-OSS achieves state-of-the-art performance with an F1 score of 0.64 and a ROC-AUC of 0.89. Beyond standard classification metrics, we evaluate DRS-OSS as a gating mechanism. Simulations show that gating only the riskiest 30 percent of commits can prevent up to 86.4 percent of defect-inducing changes from landing. By adjusting the threshold, teams can tune risk trade-offs during periods of high sensitivity or limited review capacity. DRS-OSS integrates directly into developer workflows through a FastAPI gateway and LLM microservices for scalable inference, a React-based dashboard for manual diff analysis, and a GitHub App that posts risk labels and confidence scores on pull requests. The system delivers real-time, reproducible risk feedback and is released with a full replication package including fine-tuning scripts, deployment artifacts, and source code, as well as a project website and an end-to-end demonstration video. | https://arxiv.org/abs/2511.21964 | Academic Papers | svg |
| 61a1074944d170e5371cb74fa11f6c4dcccc9d84032724745d4372d03944adf4 | 2026-01-23T00:00:00-05:00 | All for One and One for All: Program Logics for Exploiting Internal Determinism in Parallel Programs | arXiv:2511.23283v2 Announce Type: replace Abstract: Nondeterminism makes parallel programs challenging to write and reason about. To avoid these challenges, researchers have developed techniques for internally deterministic parallel programming, in which the steps of a parallel computation proceed in a deterministic way. Internal determinism is useful because it lets a programmer reason about a program as if it executed in a sequential order. However, no verification framework exists to exploit this property and simplify formal reasoning about internally deterministic programs. To capture the essence of why internally deterministic programs should be easier to reason about, this paper defines a property called schedule-independent safety. A program satisfies schedule-independent safety if, to show that the program is safe across all orderings, it suffices to show that one terminating execution of the program is safe. We then present a separation logic called Musketeer for proving that a program satisfies schedule-independent safety. Once a parallel program has been shown to satisfy schedule-independent safety, we can verify it with a new logic called Angelic, which allows one to dynamically select and verify just one sequential ordering of the program. Using Musketeer, we prove the soundness of MiniDet, an affine type system for enforcing internal determinism. MiniDet supports several core algorithmic primitives for internally deterministic programming that have been identified in the research literature, including a deterministic version of a concurrent hash set. Because any syntactically well-typed MiniDet program satisfies schedule-independent safety, we can apply Angelic to verify such programs. All results in this paper have been verified in Rocq using the Iris separation logic framework. | https://arxiv.org/abs/2511.23283 | Academic Papers | svg |
| a75a4352b25296b81fda929f1a6053e9a5a41b674d3c972170336ab345360e24 | 2026-01-23T00:00:00-05:00 | Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling | arXiv:2512.02010v3 Announce Type: replace Abstract: As large language models have grown larger, interest has grown in low-precision numerical formats such as NVFP4 as a way to improve speed and reduce memory usage. However, quantizing models to NVFP4 remains difficult as the lack of precision generally degrades model performance. In this work, we address this issue with Four Over Six (4/6), a modification to the block-scaled NVFP4 quantization algorithm that yields reduced quantization error. Unlike integer formats, floating point formats have non-uniform step sizes which create larger quantization error on larger values. 4/6 takes advantage of this by adaptively scaling some blocks to smaller FP4 values, making the distribution of representable values more uniform and reducing quantization error for near-maximal values. We show that 4/6 can be implemented efficiently on NVIDIA Blackwell GPUs, resulting in performance gains during both pre-training and inference with minimal computational overhead. In pre-training experiments with the Nemotron 3 Nano 30B-A3B model architecture, we find that 4/6 brings training loss closer to BF16 compared to models trained with current state-of-the-art NVFP4 training recipes. Our code is available at http://github.com/mit-han-lab/fouroversix. | https://arxiv.org/abs/2512.02010 | Academic Papers | svg |
| 1154c9954f2107e1bb38a87d445420c90ab51c99b73d85e17ba54d75d900d1a8 | 2026-01-23T00:00:00-05:00 | Do you precondition on the left or on the right? | arXiv:2512.05160v2 Announce Type: replace Abstract: This work is a follow-up to a poster that was presented at the DD29 conference. Participants were asked the question: "Do you precondition on the left or on the right?" Here we report on the results of this social experiment. We also provide context on left, right and split preconditioning, share our literature review on the topic, and analyze some of the finer points. Two examples illustrate that convergence bounds can sometimes lead to misleading conclusions. | https://arxiv.org/abs/2512.05160 | Academic Papers | svg |
| efb5ed35cff92a2fd180a8ab3184572135edb6d2ad3cd730e3c580430a5d74c1 | 2026-01-23T00:00:00-05:00 | Mitigation of multi-path propagation artefacts in acoustic targets with adaptive cepstral filtering | arXiv:2512.11165v2 Announce Type: replace Abstract: Passive acoustic sensing is a cost-effective solution for monitoring moving targets such as vessels and aircraft, but its performance is hindered by complex propagation effects like multi-path reflections and motion-induced artefacts. Existing filtering techniques do not properly incorporate the characteristics of the environment or account for variability in medium properties, limiting their effectiveness in separating source and reflection components. This paper proposes a method for separating target signals from their reflections in a spectrogram. Temporal filtering is applied to cepstral coefficients using an adaptive band-stop filter, which dynamically adjusts its bandwidth based on the relative intensity of the quefrency components. The method improved the signal-to-noise ratio (SNR) and log-spectral distance (LSD) across velocities ranging from 10 to 100 metres per second in aircraft noise with simulated motion. It also enhanced the performance of ship-type classification in underwater tasks by 2.28 and 2.62 Matthews Correlation Coefficient percentage points for the DeepShip and VTUAD v2 datasets, respectively. These results demonstrate the potential of the proposed pipeline to improve acoustic target classification and time-delay estimation in multi-path environments, with future work aimed at amplitude preservation and multi-sensor applications. | https://arxiv.org/abs/2512.11165 | Academic Papers | svg |
| c53eca2f6814652c776ff85f3fb0b739d522245bf2dda86d3a8bf298fcf40a11 | 2026-01-23T00:00:00-05:00 | Words to Describe What I'm Feeling: Exploring the Potential of AI Agents for High Subjectivity Decisions in Advance Care Planning | arXiv:2512.11276v2 Announce Type: replace Abstract: Loss of decisional capacity, coupled with the increasing absence of reliable human proxies, raises urgent questions about how individuals' values can be represented in Advance Care Planning (ACP). To probe this fraught design space of high-risk, high-subjectivity decision support, we built an experience prototype (an ACP agent) and asked 15 participants in 4 workshops to train it to be their personal ACP proxy. We analysed their coping strategies and feature requests and mapped the results onto axes of agent autonomy and human control. Our findings show a surprising 86.7% agreement with the agent, arguing for a potential new role of AI in ACP where agents act as personal advocates for individuals, building mutual intelligibility over time. We propose that the key areas of future risk that must be addressed are the moderation of users' expectations and designing accountability and oversight over agent deployment and cutoffs. | https://arxiv.org/abs/2512.11276 | Academic Papers | svg |
| 05989b378a4e6477c5531396a704abca9ec61f26d22b176486e3ae3aa51a6a57 | 2026-01-23T00:00:00-05:00 | ProbeMDE: Uncertainty-Guided Active Proprioception for Monocular Depth Estimation in Surgical Robotics | arXiv:2512.11773v3 Announce Type: replace Abstract: Monocular depth estimation (MDE) provides a useful tool for robotic perception, but its predictions are often uncertain and inaccurate in challenging environments such as surgical scenes where textureless surfaces, specular reflections, and occlusions are common. To address this, we propose ProbeMDE, a cost-aware active sensing framework that combines RGB images with sparse proprioceptive measurements for MDE. Our approach utilizes an ensemble of MDE models to predict dense depth maps conditioned on both RGB images and on a sparse set of known depth measurements obtained via proprioception, where the robot has touched the environment in a known configuration. We quantify predictive uncertainty via the ensemble's variance and measure the gradient of the uncertainty with respect to candidate measurement locations. To prevent mode collapse while selecting maximally informative locations to propriocept (touch), we leverage Stein Variational Gradient Descent (SVGD) over this gradient map. We validate our method in both simulated and physical experiments on central airway obstruction surgical phantoms. Our results demonstrate that our approach outperforms baseline methods across standard depth estimation metrics, achieving higher accuracy while minimizing the number of required proprioceptive measurements. Project page: https://brittonjordan.github.io/probe_mde/ | https://arxiv.org/abs/2512.11773 | Academic Papers | svg |
| 0034441017756389ebf753f35af44a4f0ee0ecbe7cd5de38d6fede6a44c0d3d2 | 2026-01-23T00:00:00-05:00 | SeVeDo: A Heterogeneous Transformer Accelerator for Low-Bit Inference via Hierarchical Group Quantization and SVD-Guided Mixed Precision | arXiv:2512.12930v2 Announce Type: replace Abstract: Low-bit quantization is a promising technique for efficient transformer inference by reducing computational and memory overhead. However, aggressive bitwidth reduction remains challenging due to activation outliers, leading to accuracy degradation. Existing methods, such as outlier-handling and group quantization, achieve high accuracy but incur substantial energy consumption. To address this, we propose SeVeDo, an energy-efficient SVD-based heterogeneous accelerator that structurally separates outlier-sensitive components into a high-precision low-rank path, while the remaining computations are executed in a low-bit residual datapath with group quantization. To further enhance efficiency, Hierarchical Group Quantization (HGQ) combines coarse-grained floating-point scaling with fine-grained shifting, effectively reducing dequantization cost. Also, SVD-guided mixed precision (SVD-MP) statically allocates higher bitwidths to precision-sensitive components identified through low-rank decomposition, thereby minimizing floating-point operation cost. Experimental results show that SeVeDo achieves a peak energy efficiency of 13.8TOPS/W, surpassing conventional designs, with 12.7TOPS/W on ViT-Base and 13.4TOPS/W on Llama2-7B benchmarks. | https://arxiv.org/abs/2512.12930 | Academic Papers | svg |
| 865819b26e056c0dc376e1458e667dc05b1d3cd058afbafa530ea9722483d625 | 2026-01-23T00:00:00-05:00 | TUN: Detecting Significant Points in Persistence Diagrams with Deep Learning | arXiv:2512.14274v2 Announce Type: replace Abstract: Persistence diagrams (PDs) provide a powerful tool for understanding the topology of the underlying shape of a point cloud. However, identifying which points in PDs encode genuine signals remains challenging. This challenge directly hinders the practical adoption of topological data analysis in many applications, where automated and reliable interpretation of persistence diagrams is essential for downstream decision-making. In this paper, we study automatic significance detection for one-dimensional persistence diagrams. Specifically, we propose Topology Understanding Net (TUN), a multi-modal network that combines enhanced PD descriptors with self-attention, a PointNet-style point cloud encoder, learned fusion, and per-point classification, alongside stable preprocessing and imbalance-aware training. It provides an automated and effective solution for identifying significant points in PDs, which are critical for downstream applications. Experiments show that TUN outperforms classic methods in detecting significant points in PDs, illustrating its effectiveness in real-world applications. | https://arxiv.org/abs/2512.14274 | Academic Papers | svg |
| e12f681746100a3b236fdca85a52efd847a703d7e2d9198380488bbcc973af94 | 2026-01-23T00:00:00-05:00 | UCCL-EP: Portable Expert-Parallel Communication | arXiv:2512.19849v2 Announce Type: replace Abstract: Mixture-of-Experts (MoE) workloads rely on expert parallelism (EP) to achieve high GPU efficiency. State-of-the-art EP communication systems such as DeepEP demonstrate strong performance but exhibit poor portability across heterogeneous GPU and NIC platforms. The poor portability is rooted in architecture: GPU-initiated token-level RDMA communication requires tight vertical integration between GPUs and NICs, e.g., GPU writes to NIC driver/MMIO interfaces. We present UCCL-EP, a portable EP communication system that delivers DeepEP-level performance across heterogeneous GPU and NIC hardware. UCCL-EP replaces GPU-initiated RDMA with a high-throughput GPU-CPU control channel: compact token-routing commands are transferred to multithreaded CPU proxies, which then issue GPUDirect RDMA operations on behalf of GPUs. UCCL-EP further emulates various ordering semantics required by specialized EP communication modes using RDMA immediate data, enabling correctness on NICs that lack such ordering, e.g., AWS EFA. We implement UCCL-EP on NVIDIA and AMD GPUs with EFA and Broadcom NICs. On EFA, it outperforms the best existing EP solution by up to $2.1\times$ for dispatch and combine throughput. On an NVIDIA-only platform, UCCL-EP achieves comparable performance to the original DeepEP. UCCL-EP also improves token throughput on SGLang by up to 40% on the NVIDIA+EFA platform, and improves DeepSeek-V3 training throughput over the AMD Primus/Megatron-LM framework by up to 45% on a 16-node AMD+Broadcom platform. | https://arxiv.org/abs/2512.19849 | Academic Papers | svg |
| 2780d1649eee955cfabfbe49b3be6f590b59f28bc49c8364ca2731b5ef82d16f | 2026-01-23T00:00:00-05:00 | Anisotropic Green Coordinates | arXiv:2512.20386v2 Announce Type: replace Abstract: We live in a world filled with anisotropy, a ubiquitous characteristic of both natural and engineered systems. In this study, we concentrate on space deformation and introduce *anisotropic Green coordinates*, which provide versatile effects for cage-based and variational deformations in both two and three dimensions. The anisotropic Green coordinates are derived from the anisotropic Laplacian equation $\nabla\cdot(\mathbf{A}\nabla u)=0$, where $\mathbf{A}$ is a symmetric positive definite matrix. This equation belongs to the class of constant-coefficient second-order elliptic equations, exhibiting properties analogous to the Laplacian equation but incorporating the matrix $\mathbf{A}$ to characterize anisotropic behavior. Based on this equation, we establish the boundary integral formulation, which is subsequently discretized to derive anisotropic Green coordinates defined on the vertices and normals of oriented simplicial cages. Our method satisfies basic properties such as linear reproduction and translation invariance, and possesses closed-form expressions for both 2D and 3D scenarios. We also give an intuitive geometric interpretation of the approach, demonstrating that our method generates a quasi-conformal mapping. Furthermore, we derive the gradients and Hessians of the deformation coordinates and employ the local-global optimization framework to facilitate variational shape deformation, enabling flexible shape manipulation while achieving as-rigid-as-possible shape deformation. Experimental results demonstrate that anisotropic Green coordinates offer versatile and diverse deformation options, providing artists with enhanced flexibility and introducing a novel perspective on spatial deformation. | https://arxiv.org/abs/2512.20386 | Academic Papers | svg |
| 95c761f842a6f2113a04cc35322dfe2e7840b130ab0943cf0e276ec93097150c | 2026-01-23T00:00:00-05:00 | Real-World Adversarial Attacks on RF-Based Drone Detectors | arXiv:2512.20712v2 Announce Type: replace Abstract: Radio frequency (RF) based systems are increasingly used to detect drones by analyzing their RF signal patterns, converting them into spectrogram images which are processed by object detection models. Existing RF attacks against image based models alter digital features, making over-the-air (OTA) implementation difficult due to the challenge of converting digital perturbations to transmittable waveforms that may introduce synchronization errors and interference, and encounter hardware limitations. We present the first physical attack on RF image based drone detectors, optimizing class-specific universal complex baseband (I/Q) perturbation waveforms that are transmitted alongside legitimate communications. We evaluated the attack using RF recordings and OTA experiments with four types of drones. Our results show that modest, structured I/Q perturbations are compatible with standard RF chains and reliably reduce target drone detection while preserving detection of legitimate drones. | https://arxiv.org/abs/2512.20712 | Academic Papers | svg |
| f39674423e5bff874b9e243b2662345cbc89e1f11f026e6c63810df90a90de4d | 2026-01-23T00:00:00-05:00 | Monadic Context Engineering | arXiv:2512.22431v5 Announce Type: replace Abstract: The proliferation of Large Language Models (LLMs) has catalyzed a shift towards autonomous agents capable of complex reasoning and tool use. However, current agent architectures are frequently constructed using imperative, ad hoc patterns. This results in brittle systems plagued by difficulties in state management, error handling, and concurrency. This paper introduces Monadic Context Engineering (MCE), a novel architectural paradigm leveraging the algebraic structures of Functors, Applicative Functors, and Monads to provide a formal foundation for agent design. MCE treats agent workflows as computational contexts where cross-cutting concerns, such as state propagation, short-circuiting error handling, and asynchronous execution, are managed intrinsically by the algebraic properties of the abstraction. We demonstrate how Monads enable robust sequential composition, how Applicatives provide a principled structure for parallel execution, and crucially, how Monad Transformers allow for the systematic composition of these capabilities. This layered approach enables developers to construct complex, resilient, and efficient AI agents from simple, independently verifiable components. We further extend this framework to describe Meta-Agents, which leverage MCE for generative orchestration, dynamically creating and managing sub-agent workflows through metaprogramming. | https://arxiv.org/abs/2512.22431 | Academic Papers | svg |
| 6e37214a00409184cdebb5d3057989b514ba33d1e2b1702b6df81fa661b0eff8 | 2026-01-23T00:00:00-05:00 | A Domain Decomposition-based Solver for Acoustic Wave propagation in Two-Dimensional Random Media | arXiv:2512.23027v2 Announce Type: replace Abstract: An acoustic wave propagation problem with a log-normal random field approximation for wave speed is solved using a sampling-free intrusive stochastic Galerkin approach. The stochastic partial differential equation with the inputs and outputs expanded using polynomial chaos expansion (PCE) is transformed into a set of deterministic PDEs and further to a system of linear equations. Domain decomposition (DD)-based solvers are utilized to handle the overwhelming computational cost for the resulting system with increasing mesh size, time step, and number of random parameters. A conjugate gradient iterative solver with a two-level Neumann-Neumann preconditioner is applied here, showing efficient scalability. | https://arxiv.org/abs/2512.23027 | Academic Papers | svg |
| 5380f7b4afe24272901805661a417bea70a27a781587b07051eb22c7e27882cf | 2026-01-23T00:00:00-05:00 | RadixGraph: A Fast, Space-Optimized Data Structure for Dynamic Graph Storage (Extended Version) | arXiv:2601.01444v2 Announce Type: replace Abstract: Dynamic graphs model many real-world applications, and as their sizes grow, efficiently storing and updating them becomes critical. We present RadixGraph, a fast and memory-efficient data structure for dynamic graph storage. RadixGraph features a carefully designed radix-tree-based vertex index that strikes an optimal trade-off between query efficiency and space among all pointer-array-based radix trees. For edge storage, it employs a hybrid snapshot-log architecture that enables amortized $O(1)$ update time. RadixGraph supports millions of concurrent updates per second while maintaining competitive performance for graph analytics. Experimental results show that RadixGraph outperforms the most performant baseline by up to $16.27\times$ across various datasets in ingesting graph updates, and reduces memory usage by an average of $40.1\%$. RadixGraph is open-source at https://github.com/ForwardStar/RadixGraph. | https://arxiv.org/abs/2601.01444 | Academic Papers | svg |
| 8040644b750d5f628cbb23ada1114f5e8e72e2e4d427bec77900c74b21e40935 | 2026-01-23T00:00:00-05:00 | Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization | arXiv:2601.01747v4 Announce Type: replace Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have shown groundbreaking capabilities across diverse multimodal tasks. However, these models remain vulnerable to adversarial jailbreak attacks, where adversaries craft subtle perturbations to bypass safety mechanisms and trigger harmful outputs. Existing white-box attack methods require full model accessibility, suffer from high computational costs, and exhibit insufficient adversarial transferability, making them impractical for real-world, black-box settings. To address these limitations, we propose a black-box jailbreak attack on LVLMs via Zeroth-Order optimization using Simultaneous Perturbation Stochastic Approximation (ZO-SPSA). ZO-SPSA provides three key advantages: (i) gradient-free approximation by input-output interactions without requiring model knowledge, (ii) model-agnostic optimization without the surrogate model and (iii) lower resource requirements with reduced GPU memory consumption. We evaluate ZO-SPSA on three LVLMs, including InstructBLIP, LLaVA and MiniGPT-4, achieving the highest jailbreak success rate of 83.0% on InstructBLIP, while maintaining imperceptible perturbations comparable to white-box methods. Moreover, adversarial examples generated from MiniGPT-4 exhibit strong transferability to other LVLMs, with attack success rates (ASR) reaching 64.18%. These findings underscore the real-world feasibility of black-box jailbreaks and expose critical weaknesses in the safety mechanisms of current LVLMs. | https://arxiv.org/abs/2601.01747 | Academic Papers | svg |
| 41acb59744be166252ab625dbd199bebd15c817b9cda3331c246a56d35b68ba9 | 2026-01-23T00:00:00-05:00 | MMP-A*: Multimodal Perception Enhanced Incremental Heuristic Search on Path Planning | arXiv:2601.01910v2 Announce Type: replace Abstract: Autonomous path planning requires a synergy between global reasoning and geometric precision, especially in complex or cluttered environments. While classical A* is valued for its optimality, it incurs prohibitive computational and memory costs in large-scale scenarios. Recent attempts to mitigate these limitations by using Large Language Models for waypoint guidance remain insufficient, as they rely only on text-based reasoning without spatial grounding. As a result, such models often produce incorrect waypoints in topologically complex environments with dead ends, and lack the perceptual capacity to interpret ambiguous physical boundaries. These inconsistencies lead to costly corrective expansions and undermine the intended computational efficiency. We introduce MMP-A*, a multimodal framework that integrates the spatial grounding capabilities of vision-language models with a novel adaptive decay mechanism. By anchoring high-level reasoning in physical geometry, the framework produces coherent waypoint guidance that addresses the limitations of text-only planners. The adaptive decay mechanism dynamically regulates the influence of uncertain waypoints within the heuristic, ensuring geometric validity while substantially reducing memory overhead. To evaluate robustness, we test the framework in challenging environments characterized by severe clutter and topological complexity. Experimental results show that MMP-A* achieves near-optimal trajectories with significantly reduced operational costs, demonstrating its potential as a perception-grounded and computationally efficient paradigm for autonomous navigation. | https://arxiv.org/abs/2601.01910 | Academic Papers | svg |
| e157fe6b1eeb17b63b518906d15060842151dc164c0ef2addaac76b413bfb1cc | 2026-01-23T00:00:00-05:00 | Lightweight and perceptually-guided voice conversion for electro-laryngeal speech | arXiv:2601.03892v2 Announce Type: replace Abstract: Electro-laryngeal (EL) speech is characterized by constant pitch, limited prosody, and mechanical noise, reducing naturalness and intelligibility. We propose a lightweight adaptation of the state-of-the-art StreamVC framework to this setting by removing pitch and energy modules and combining self-supervised pretraining with supervised fine-tuning on parallel EL and healthy (HE) speech data, guided by perceptual and intelligibility losses. Objective and subjective evaluations across different loss configurations confirm their influence: the best model variant, based on WavLM features and human-feedback predictions (+WavLM+HF), drastically reduces character error rate (CER) of EL inputs, raises naturalness mean opinion score (nMOS) from 1.1 to 3.3, and consistently narrows the gap to HE ground-truth speech in all evaluated metrics. These findings demonstrate the feasibility of adapting lightweight voice conversion architectures to EL voice rehabilitation while also identifying prosody generation and intelligibility improvements as the main remaining bottlenecks. | https://arxiv.org/abs/2601.03892 | Academic Papers | svg |
| 4d74279abcc80c754549b4b077147fce4acf161877ab33d4d82be41446dc9f6f | 2026-01-23T00:00:00-05:00 | FLEx: Language Modeling with Few-shot Language Explanations | arXiv:2601.04157v2 Announce Type: replace Abstract: Language models have become effective at a wide range of tasks, from math problem solving to open-domain question answering. However, they still make mistakes, and these mistakes are often repeated across related queries. Natural language explanations can help correct these errors, but collecting them at scale may be infeasible, particularly in domains where expert annotators are required. To address this issue, we introduce FLEx (**F**ew-shot **L**anguage **Ex**planations), a method for improving model behavior using a small number of explanatory examples. FLEx selects representative model errors using embedding-based clustering, verifies that the associated explanations correct those errors, and summarizes them into a prompt prefix that is prepended at inference-time. This summary guides the model to avoid similar errors on new inputs, without modifying model weights. We evaluate FLEx on CounterBench, GSM8K, and ReasonIF. We find that FLEx consistently outperforms chain-of-thought (CoT) prompting across all three datasets and reduces up to 83% of CoT's remaining errors. | https://arxiv.org/abs/2601.04157 | Academic Papers | svg |
| f3e0bd1433598a97aaa33dfa56db64beb44ec1d28af4f89c4d481f35515a1189 | 2026-01-23T00:00:00-05:00 | The Adverse Effects of Omitting Records in Differential Privacy: How Sampling and Suppression Degrade the Privacy-Utility Tradeoff (Long Version) | arXiv:2601.05180v2 Announce Type: replace Abstract: Sampling is renowned for its privacy amplification in differential privacy (DP), and is often assumed to improve the utility of a DP mechanism by allowing a noise reduction. In this paper, we further show that this last assumption is flawed: When measuring utility at equal privacy levels, sampling as preprocessing consistently yields penalties due to utility loss from omitting records over all canonical DP mechanisms (Laplace, Gaussian, exponential, and report noisy max), as well as recent applications of sampling, such as clustering. Extending this analysis, we investigate suppression as a generalized method of choosing, or omitting, records. Developing a theoretical analysis of this technique, we derive privacy bounds for arbitrary suppression strategies under unbounded approximate DP. We find that our tested suppression strategy also fails to improve the privacy-utility tradeoff. Surprisingly, uniform sampling emerges as one of the best suppression methods, despite its still degrading effect. Our results call into question common preprocessing assumptions in DP practice. | https://arxiv.org/abs/2601.05180 | Academic Papers | svg |
| d1550e18fc5bba98c13d194b0fafa99b61d45a4ae982a1dac9a22ad5e41c2a24 | 2026-01-23T00:00:00-05:00 | Do Sparse Autoencoders Identify Reasoning Features in Language Models? | arXiv:2601.05679v4 Announce Type: replace Abstract: We investigate whether sparse autoencoders identify genuine reasoning features in large language models. We first present a stylized theoretical analysis showing that sparsity-regularized decoding favors stable low-dimensional correlates over high-dimensional within-reasoning variation, biasing learned features toward token-level cues. Motivated by this analysis, we introduce a falsification-based evaluation framework that combines causal token injection with LLM-guided counterexample generation to distinguish genuine reasoning features from superficial linguistic correlates. Across 22 configurations spanning multiple model families, layers and datasets, we find that contrastively selected reasoning features are highly sensitive to token interventions, with 45%-90% activating when only a few associated tokens are injected into non-reasoning text. For the remaining features, LLM-guided falsification reliably constructs non-reasoning inputs that instantiate the feature's token-level cues and trigger activation, and meaning-preserving paraphrases of top-activating reasoning traces that suppress it. Steering the highest-ranked features yields no improvements on benchmarks. Overall, our results suggest that when low-dimensional token-level patterns are coupled with high-dimensional reasoning processes, the sparsity bias of SAEs systematically favors low-dimensional linguistic patterns that consistently co-occur with reasoning. Code is available at https://github.com/GeorgeMLP/reasoning-probing. | https://arxiv.org/abs/2601.05679 | Academic Papers | svg |
| bc7b871e1c0471edfeda965eb99be72fb3b765b4e9a39f26b77420c538a39e43 | 2026-01-23T00:00:00-05:00 | Enhancing Large Language Models for Time-Series Forecasting via Vector-Injected In-Context Learning | arXiv:2601.07903v3 Announce Type: replace Abstract: The World Wide Web needs reliable predictive capabilities to respond to changes in user behavior and usage patterns. Time series forecasting (TSF) is a key means to achieve this goal. In recent years, large language models (LLMs) for TSF (LLM4TSF) have achieved good performance. However, there is a significant difference between pretraining corpora and time series data, making it hard to guarantee forecasting quality when directly applying LLMs to TSF; fine-tuning LLMs can mitigate this issue, but often incurs substantial computational overhead. Thus, LLM4TSF faces a dual challenge of prediction performance and compute overhead. To address this, we aim to explore a method for improving the forecasting performance of LLM4TSF while freezing all LLM parameters to reduce computational overhead. Inspired by in-context learning (ICL), we propose LVICL. LVICL uses our vector-injected ICL to inject example information into a frozen LLM, eliciting its in-context learning ability and thereby enhancing its performance on the example-related task (i.e., TSF). Specifically, we first use the LLM together with a learnable context vector adapter to extract a context vector from multiple examples adaptively. This vector contains compressed, example-related information. Subsequently, during the forward pass, we inject this vector into every layer of the LLM to improve forecasting performance. Compared with conventional ICL that adds examples into the prompt, our vector-injected ICL does not increase prompt length; moreover, adaptively deriving a context vector from examples suppresses components harmful to forecasting, thereby improving model performance. Extensive experiments demonstrate the effectiveness of our approach. | https://arxiv.org/abs/2601.07903 | Academic Papers | svg |
| 05b35dd81a588c3ed26907023d6ef0015c5c94c052ea0963f21673942afec5b7 | 2026-01-23T00:00:00-05:00 | Attention Projection Mixing with Exogenous Anchors | arXiv:2601.08131v2 Announce Type: replace Abstract: Cross-layer reuse of early attention projections can improve optimization and data efficiency, but it creates a structural conflict: the first layer must simultaneously act as a stable, reusable anchor for all deeper layers and as an effective computational block. We show this "first-layer tension" is a hidden limiter of internal-anchor designs. We propose ExoFormer, which resolves the conflict by learning exogenous anchor projections outside the sequential layer stack, decoupling the anchor role from computational refinement. We introduce a unified normalized mixing framework that mixes queries, keys, values, and gate logits using learnable coefficients (exploring coefficient granularities: elementwise/headwise/scalar), and we show that normalizing anchor sources is key to stable reuse. ExoFormer variants consistently outperform their internal-anchor counterparts, and the dynamic variant yields a 1.5-point downstream accuracy gain while matching validation loss using 1.5x fewer tokens than Gated Attention. We explain this efficacy via an Offloading Hypothesis: external anchors preserve essential token identity, allowing layers to specialize exclusively in refinement. We release code and models to facilitate future research. | https://arxiv.org/abs/2601.08131 | Academic Papers | svg |
| 8a9d8ec82e48548c466eb0bb6682c110aeed3d0f0ad0e64fe77c7aac7467480a | 2026-01-23T00:00:00-05:00 | Your Group-Relative Advantage Is Biased | arXiv:2601.08521v2 Announce Type: replace Abstract: Reinforcement Learning from Verifier Rewards (RLVR) has emerged as a widely used approach for post-training large language models on reasoning tasks, with group-based methods such as GRPO and its variants gaining broad adoption. These methods rely on group-relative advantage estimation to avoid learned critics, yet its theoretical properties remain poorly understood. In this work, we uncover a fundamental issue of group-based RL: the group-relative advantage estimator is inherently biased relative to the true (expected) advantage. We provide the first theoretical analysis showing that it systematically underestimates advantages for hard prompts and overestimates them for easy prompts, leading to imbalanced exploration and exploitation. To address this issue, we propose History-Aware Adaptive Difficulty Weighting (HA-DW), an adaptive reweighting scheme that adjusts advantage estimates based on an evolving difficulty anchor and training dynamics. Both theoretical analysis and experiments on five mathematical reasoning benchmarks demonstrate that HA-DW consistently improves performance when integrated into GRPO and its variants. Our results suggest that correcting biased advantage estimation is critical for robust and efficient RLVR training. | https://arxiv.org/abs/2601.08521 | Academic Papers | svg |
| 080bea5ad88cad4993d2e46eb3cdc7938329d3b3ad756378877d025ebe2d676d | 2026-01-23T00:00:00-05:00 | Contrastive and Multi-Task Learning on Noisy Brain Signals with Nonlinear Dynamical Signatures | arXiv:2601.08549v2 Announce Type: replace Abstract: We introduce a two-stage multitask learning framework for analyzing Electroencephalography (EEG) signals that integrates denoising, dynamical modeling, and representation learning. In the first stage, a denoising autoencoder is trained to suppress artifacts and stabilize temporal dynamics, providing robust signal representations. In the second stage, a multitask architecture processes these denoised signals to achieve three objectives: motor imagery classification, chaotic versus non-chaotic regime discrimination using Lyapunov exponent-based labels, and self-supervised contrastive representation learning with NT-Xent loss. A convolutional backbone combined with a Transformer encoder captures spatial-temporal structure, while the dynamical task encourages sensitivity to nonlinear brain dynamics. This staged design mitigates interference between reconstruction and discriminative goals, improves stability across datasets, and supports reproducible training by clearly separating noise reduction from higher-level feature learning. Empirical studies show that our framework not only enhances robustness and generalization but also surpasses strong baselines and recent state-of-the-art methods in EEG decoding, highlighting the effectiveness of combining denoising, dynamical features, and self-supervised learning. | https://arxiv.org/abs/2601.08549 | Academic Papers | svg |
b0c54d85b88de48ce4063857e9b99854009977c898623460fd55fdc8fcd7ee84
|
2026-01-23T00:00:00-05:00
|
Spectral Generative Flow Models: A Physics-Inspired Replacement for Vectorized Large Language Models
|
arXiv:2601.08893v2 Announce Type: replace Abstract: We introduce Spectral Generative Flow Models (SGFMs), a physics-inspired alternative to transformer-based large language models. Instead of representing text or video as sequences of discrete tokens processed by attention, SGFMs treat generation as the evolution of a continuous field governed by constrained stochastic dynamics in a multiscale wavelet basis. This formulation replaces global attention with local operators, spectral projections, and Navier--Stokes-like transport, yielding a generative mechanism grounded in continuity, geometry, and physical structure. Our framework provides three key innovations: (i) a field-theoretic ontology in which text and video are unified as trajectories of a stochastic partial differential equation; (ii) a wavelet-domain representation that induces sparsity, scale separation, and computational efficiency; and (iii) a constrained stochastic flow that enforces stability, coherence, and uncertainty propagation. Together, these components define a generative architecture that departs fundamentally from autoregressive modeling and diffusion-based approaches. SGFMs offer a principled path toward long-range coherence, multimodal generality, and physically structured inductive bias in next-generation generative models.
|
https://arxiv.org/abs/2601.08893
|
Academic Papers
|
svg
|
3e3edaf07d82959d78277c8a797c6cd059243b2149e409a716240e9cd8283231
|
2026-01-23T00:00:00-05:00
|
LLMs Got Rhythm? Hybrid Phonological Filtering for Greek Poetry Rhyme Detection and Generation
|
arXiv:2601.09631v3 Announce Type: replace Abstract: Large Language Models (LLMs), despite their remarkable capabilities across NLP tasks, struggle with phonologically-grounded phenomena like rhyme detection and generation. This is even more evident in lower-resource languages such as Modern Greek. In this paper, we present a hybrid system that combines LLMs with deterministic phonological algorithms to achieve accurate rhyme identification/analysis and generation. Our approach implements a comprehensive taxonomy of Greek rhyme types, including Pure, Rich, Imperfect, Mosaic, and Identical Pre-rhyme Vowel (IDV) patterns, and employs an agentic generation pipeline with phonological verification. We evaluate multiple prompting strategies (zero-shot, few-shot, Chain-of-Thought, and RAG-augmented) across several LLMs including Claude 3.7 and 4.5, GPT-4o, Gemini 2.0 and open-weight models like Llama 3.1 8B and 70B and Mistral Large. Results reveal a significant "Reasoning Gap": while native-like models (Claude 3.7) perform intuitively (40\% accuracy in identification), reasoning-heavy models (Claude 4.5) achieve state-of-the-art performance (54\%) only when prompted with Chain-of-Thought. Most critically, pure LLM generation fails catastrophically (under 4\% valid poems), while our hybrid verification loop restores performance to 73.1\%. We release our system and a corpus of 40,000+ rhymes, derived from the Anemoskala and Interwar Poetry corpora, to support future research.
|
https://arxiv.org/abs/2601.09631
|
Academic Papers
|
svg
|
7e088765194075b1971c191b5854b853647a04ffe6b118b1037198b08e40022a
|
2026-01-23T00:00:00-05:00
|
From Interpretability to Performance: Optimizing Retrieval Heads for Long-Context Language Models
|
arXiv:2601.11020v2 Announce Type: replace Abstract: Advances in mechanistic interpretability have identified special attention heads, known as retrieval heads, that are responsible for retrieving information from the context. However, the role of these retrieval heads in improving model performance remains unexplored. This work investigates whether retrieval heads can be leveraged to enhance the long-context capabilities of LLMs. Specifically, we propose RetMask, a method that generates training signals by contrasting normal model outputs with those from an ablated variant in which the retrieval heads are masked. This mechanism-based approach achieves substantial improvements: +2.28 points on HELMET at 128K for Llama-3.1, with +70% gains on generation with citation and +32% on passage re-ranking, while preserving performance on general tasks. Experiments across three model families reveal that the effectiveness depends on retrieval head organization: models with concentrated patterns of retrieval heads respond strongly, while those with distributed patterns show limited gains. This mechanistic relationship validates the function of retrieval heads and demonstrates that mechanistic insights can be transformed into performance enhancements.
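A minimal sketch of the head-ablation step that a RetMask-style contrastive signal presumably relies on, assuming the fused output of a multi-head attention layer; the paper's masking location and training loss are not detailed in the abstract.

```python
import torch

def mask_heads(attn_output, head_dim, masked_heads):
    """Zero the output slices of identified retrieval heads so a normal
    and an ablated forward pass can be contrasted. Layer/head indexing
    and the fused (batch, seq, n_heads * head_dim) layout are assumptions."""
    out = attn_output.clone()
    for h in masked_heads:
        out[..., h * head_dim:(h + 1) * head_dim] = 0.0
    return out

x = torch.randn(1, 4, 8 * 64)  # 8 heads of dim 64
print(mask_heads(x, 64, masked_heads=[2, 5]).shape)
```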
|
https://arxiv.org/abs/2601.11020
|
Academic Papers
|
svg
|
7018f1768a40d704f43d88e14523378056d6aa3ef3a531ff88ba26d770519782
|
2026-01-23T00:00:00-05:00
|
Vision-as-Inverse-Graphics Agent via Interleaved Multimodal Reasoning
|
arXiv:2601.11109v2 Announce Type: replace Abstract: Vision-as-inverse-graphics, the concept of reconstructing an image as an editable graphics program, is a long-standing goal of computer vision. Yet even strong VLMs cannot achieve this in one shot, as they lack fine-grained spatial and physical grounding capability. Our key insight is that closing this gap requires interleaved multimodal reasoning through iterative execution and verification. Stemming from this, we present VIGA (Vision-as-Inverse-Graphics Agent), which starts from an empty world and reconstructs or edits scenes through a closed-loop write-run-render-compare-revise procedure. To support long-horizon reasoning, VIGA combines (i) a skill library that alternates generator and verifier roles and (ii) an evolving context memory that contains plans, code diffs, and render history. VIGA is task-agnostic, as it doesn't require auxiliary modules, covering a wide range of tasks such as 3D reconstruction, multi-step scene editing, 4D physical interaction, and 2D document editing. Empirically, we find VIGA substantially improves one-shot baselines on BlenderGym (35.32%) and SlideBench (117.17%). Moreover, VIGA is also model-agnostic, as it doesn't require finetuning, enabling a unified protocol to evaluate heterogeneous foundation VLMs. To better support this protocol, we introduce BlenderBench, a challenging benchmark that stress-tests interleaved multimodal reasoning with a graphics engine, where VIGA improves by 124.70%.
|
https://arxiv.org/abs/2601.11109
|
Academic Papers
|
svg
|
5913abd8f64cf7d5657e1a27a4d9c605d407a216348d12c386697e59e18a9c4f
|
2026-01-23T00:00:00-05:00
|
Heterogeneous Uncertainty-Guided Composed Image Retrieval with Fine-Grained Probabilistic Learning
|
arXiv:2601.11393v2 Announce Type: replace Abstract: Composed Image Retrieval (CIR) enables image search by combining a reference image with modification text. Intrinsic noise in CIR triplets incurs intrinsic uncertainty and threatens the model's robustness. Probabilistic learning approaches have shown promise in addressing such issues; however, they fall short for CIR due to their instance-level holistic modeling and homogeneous treatment of queries and targets. This paper introduces a Heterogeneous Uncertainty-Guided (HUG) paradigm to overcome these limitations. HUG utilizes a fine-grained probabilistic learning framework, where queries and targets are represented by Gaussian embeddings that capture detailed concepts and uncertainties. We customize heterogeneous uncertainty estimations for multi-modal queries and uni-modal targets. Given a query, we capture uncertainties not only regarding uni-modal content quality but also multi-modal coordination, followed by a provable dynamic weighting mechanism to derive comprehensive query uncertainty. We further design uncertainty-guided objectives, including query-target holistic contrast and fine-grained contrasts with comprehensive negative sampling strategies, which effectively enhance discriminative learning. Experiments on benchmarks demonstrate HUG's effectiveness beyond state-of-the-art baselines, with faithful analysis justifying the technical contributions.
|
https://arxiv.org/abs/2601.11393
|
Academic Papers
|
svg
|
abdc6530a5d4efe835aab6d328da68ae06fd7c67a1f8f0ee44ead5372ff3b57d
|
2026-01-23T00:00:00-05:00
|
SUG-Occ: An Explicit Semantics and Uncertainty Guided Sparse Learning Framework for Real-Time 3D Occupancy Prediction
|
arXiv:2601.11396v3 Announce Type: replace Abstract: As autonomous driving moves toward full scene understanding, 3D semantic occupancy prediction has emerged as a crucial perception task, offering voxel-level semantics beyond traditional detection and segmentation paradigms. However, such a refined representation for scene understanding incurs prohibitive computation and memory overhead, posing a major barrier to practical real-time deployment. To address this, we propose SUG-Occ, an explicit Semantics and Uncertainty Guided Sparse Learning Enabled 3D Occupancy Prediction Framework, which exploits the inherent sparsity of 3D scenes to reduce redundant computation while maintaining geometric and semantic completeness. Specifically, we first utilize semantic and uncertainty priors to suppress projections from free space during view transformation while employing an explicit unsigned distance encoding to enhance geometric consistency, producing a structurally consistent sparse 3D representation. Secondly, we design a cascade sparse completion module via hyper cross sparse convolution and generative upsampling to enable efficient coarse-to-fine reasoning. Finally, we devise an object contextual representation (OCR)-based mask decoder that aggregates global semantic context from sparse features and refines voxel-wise predictions via lightweight query-context interactions, avoiding expensive attention operations over volumetric features. Extensive experiments on the SemanticKITTI benchmark demonstrate that the proposed approach outperforms the baselines, achieving a 7.34\% improvement in accuracy and a 57.8\% gain in efficiency.
|
https://arxiv.org/abs/2601.11396
|
Academic Papers
|
svg
|
cf28633712358cda72ba06fc6d921eb957832ad2bca2249b12b29e8b027d5e92
|
2026-01-23T00:00:00-05:00
|
Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies
|
arXiv:2601.11699v2 Announce Type: replace Abstract: Frontier AI is becoming critical societal infrastructure, but outsiders lack reliable ways to judge whether leading developers' safety and security claims are accurate and whether their practices meet relevant standards. Compared to other social and technological systems we rely on daily such as consumer products, corporate financial statements, and food supply chains, AI is subject to less rigorous third-party scrutiny along several dimensions. Ambiguity about whether AI systems are trustworthy can discourage deployment in some contexts where the technology could be beneficial, and encourage deployment in contexts where it is dangerous. Public transparency alone cannot close this gap: many safety- and security-relevant details are legitimately confidential and require expert interpretation. We define frontier AI auditing as rigorous third-party verification of frontier AI developers' safety and security claims, and evaluation of their systems and practices against relevant standards, based on deep, secure access to non-public information. To make rigor legible and comparable, we introduce AI Assurance Levels (AAL-1 to AAL-4), ranging from time-bounded system audits to continuous, deception-resilient verification.
|
https://arxiv.org/abs/2601.11699
|
Academic Papers
|
svg
|
26bbe7cdf99af7f9cdc364cc1e0fdefab9ac5d69291c134709741af61350fb73
|
2026-01-23T00:00:00-05:00
|
Beyond Tokens: Concept-Level Training Objectives for LLMs
|
arXiv:2601.11791v2 Announce Type: replace Abstract: The next-token prediction (NTP) objective has been foundational in the development of modern large language models (LLMs), driving advances in fluency and generalization. However, NTP operates at the \textit{token} level, treating deviations from a single reference continuation as errors even when alternative continuations are equally plausible or semantically equivalent (e.g., ``mom'' vs. ``mother''). As a result, token-level loss can penalize valid abstractions, paraphrases, or conceptually correct reasoning paths, biasing models toward surface form rather than underlying meaning. This mismatch between the training signal and semantic correctness motivates learning objectives that operate over higher-level representations. We propose a shift from token-level to concept-level prediction, where concepts group multiple surface forms of the same idea (e.g., ``mom,'' ``mommy,'' ``mother'' $\rightarrow$ \textit{MOTHER}). We introduce several methods for integrating conceptual supervision into LLM training and show that concept-aware models achieve lower perplexity, improved robustness under domain shift, and stronger performance than NTP-based models on diverse NLP benchmarks. This suggests \textit{concept-level supervision} as an improved training signal that better aligns LLMs with human semantic abstractions.
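One plausible reading of concept-level supervision, sketched below: aggregate the predicted probability mass over all surface forms of the target concept before computing the loss, so that any valid surface form counts as correct. The token ids and the logsumexp aggregation are assumptions, not the paper's stated method.

```python
import torch
import torch.nn.functional as F

def concept_nll(logits, concept_token_ids):
    """Minimal sketch of a concept-level objective: sum the predicted
    probability over all surface forms of the target concept and
    minimize the negative log of that total probability."""
    log_probs = F.log_softmax(logits, dim=-1)                    # (vocab,)
    concept_logp = torch.logsumexp(log_probs[concept_token_ids], dim=0)
    return -concept_logp

vocab_size = 100
logits = torch.randn(vocab_size)
# Hypothetical token ids for "mom", "mommy", "mother" -> concept MOTHER.
mother_forms = torch.tensor([7, 19, 42])
print(concept_nll(logits, mother_forms))
```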
|
https://arxiv.org/abs/2601.11791
|
Academic Papers
|
svg
|
d0e501343518f217e03482eb0fcee6df7ef54ca7d9395b0679396fe1887239b2
|
2026-01-23T00:00:00-05:00
|
Codebook-Injected Dialogue Segmentation for Multi-Utterance Constructs Annotation: LLM-Assisted and Gold-Label-Free Evaluation
|
arXiv:2601.12061v2 Announce Type: replace Abstract: Dialogue Act (DA) annotation typically treats communicative or pedagogical intent as localized to individual utterances or turns. This leads annotators to agree on the underlying action while disagreeing on segment boundaries, reducing apparent reliability. We propose codebook-injected segmentation, which conditions boundary decisions on downstream annotation criteria, and evaluate LLM-based segmenters against standard and retrieval-augmented baselines. To assess these segmenters without gold labels, we introduce evaluation metrics for span consistency, distinctiveness, and human-AI distributional agreement. We find that DA-awareness produces segments that are internally more consistent than text-only baselines. While LLMs excel at creating construct-consistent spans, coherence-based baselines remain superior at detecting global shifts in dialogue flow. Across two datasets, no single segmenter dominates. Improvements in within-segment coherence frequently trade off against boundary distinctiveness and human-AI distributional agreement. These results highlight segmentation as a consequential design choice that should be optimized for downstream objectives rather than a single performance score.
|
https://arxiv.org/abs/2601.12061
|
Academic Papers
|
svg
|
934975ec82044f6b2a78d444a14ece7964ef5dfe29fc981749f01d6af3c4ca98
|
2026-01-23T00:00:00-05:00
|
GutenOCR: A Grounded Vision-Language Front-End for Documents
|
arXiv:2601.14490v2 Announce Type: replace Abstract: GutenOCR is a family of grounded OCR front-ends obtained by fine-tuning Qwen2.5-VL-3B and Qwen2.5-VL-7B. The resulting single-checkpoint vision-language models expose reading, detection, and grounding through a unified, prompt-based interface. Trained on business documents, scientific articles, and synthetic grounding data, the models support full-page and localized reading with line- and paragraph-level bounding boxes and conditional ``where is x?'' queries. We introduce a grounded OCR evaluation protocol and show that GutenOCR-7B more than doubles the composite grounded OCR score of its Qwen2.5-VL-7B backbone on 10.5K held-out business and scientific pages (0.40 to 0.82). On Fox and OmniDocBench v1.5, our approach substantially improves region- and line-level OCR as well as text-detection recall, but reveals trade-offs in page-level linearization, color-guided OCR, and formula-heavy layouts.
|
https://arxiv.org/abs/2601.14490
|
Academic Papers
|
svg
|
eebb493d88ceefcb57824347faedb951b66280a5309af6c3ad767a67e966ae23
|
2026-01-23T00:00:00-05:00
|
Structured Image-based Coding for Efficient Gaussian Splatting Compression
|
arXiv:2601.14510v2 Announce Type: replace Abstract: Gaussian Splatting (GS) has recently emerged as a state-of-the-art representation for radiance fields, combining real-time rendering with high visual fidelity. However, GS models require storing millions of parameters, leading to large file sizes that impair their use in practical multimedia systems. To address this limitation, this paper introduces GS Image-based Compression (GSICO), a novel GS codec that efficiently compresses pre-trained GS models while preserving perceptual fidelity. The core contribution lies in a mapping procedure that arranges GS parameters into structured images, guided by a novel algorithm that enhances spatial coherence. These GS parameter images are then encoded using a conventional image codec. Experimental evaluations on Tanks and Temples, Deep Blending, and Mip-NeRF360 datasets show that GSICO achieves average compression factors of 20.2x with minimal loss in visual quality, as measured by PSNR, SSIM, and LPIPS. Compared with state-of-the-art GS compression methods, the proposed codec consistently yields superior rate-distortion (RD) trade-offs.
|
https://arxiv.org/abs/2601.14510
|
Academic Papers
|
svg
|
283c34f5f802fb55dfa1f76df23a5b07e6a7d7280b7503014e0008458ae013ba
|
2026-01-23T00:00:00-05:00
|
Report for NSF Workshop on AI for Electronic Design Automation
|
arXiv:2601.14541v2 Announce Type: replace Abstract: This report distills the discussions and recommendations from the NSF Workshop on AI for Electronic Design Automation (EDA), held on December 10, 2024 in Vancouver alongside NeurIPS 2024. Bringing together experts across machine learning and EDA, the workshop examined how AI, spanning large language models (LLMs), graph neural networks (GNNs), reinforcement learning (RL), neurosymbolic methods, and more, can facilitate EDA and shorten design turnaround. The workshop included four themes: (1) AI for physical synthesis and design for manufacturing (DFM), discussing challenges in the physical manufacturing process and potential AI applications; (2) AI for high-level and logic-level synthesis (HLS/LLS), covering pragma insertion, program transformation, RTL code generation, etc.; (3) AI toolbox for optimization and design, discussing frontier AI developments that could potentially be applied to EDA tasks; and (4) AI for test and verification, including LLM-assisted verification tools, ML-augmented SAT solving, and security/reliability challenges. The report recommends that NSF foster AI/EDA collaboration, invest in foundational AI for EDA, develop robust data infrastructures, promote scalable compute infrastructure, and invest in workforce development to democratize hardware design and enable next-generation hardware systems. The workshop information can be found on the website https://ai4eda-workshop.github.io/.
|
https://arxiv.org/abs/2601.14541
|
Academic Papers
|
svg
|
ec828e165d8cd177769d3401c7b694fedd0a87f92651ce78d7d6c7cdbbb38ea9
|
2026-01-23T00:00:00-05:00
|
Scribble-Supervised Medical Image Segmentation with Dynamic Teacher Switching and Hierarchical Consistency
|
arXiv:2601.14563v2 Announce Type: replace Abstract: Scribble-supervised methods have emerged to mitigate the prohibitive annotation burden in medical image segmentation. However, the inherent sparsity of these annotations introduces significant ambiguity, which results in noisy pseudo-label propagation and hinders the learning of robust anatomical boundaries. To address this challenge, we propose SDT-Net, a novel dual-teacher, single-student framework designed to maximize supervision quality from these weak signals. Our method features a Dynamic Teacher Switching (DTS) module to adaptively select the most reliable teacher. This selected teacher then guides the student via two synergistic mechanisms: high-confidence pseudo-labels, refined by a Pick Reliable Pixels (PRP) mechanism, and multi-level feature alignment, enforced by a Hierarchical Consistency (HiCo) module. Extensive experiments on the ACDC and MSCMRseg datasets demonstrate that SDT-Net achieves state-of-the-art performance, producing more accurate and anatomically plausible segmentation.
|
https://arxiv.org/abs/2601.14563
|
Academic Papers
|
svg
|
25308161f28c4d64f8cc87944edb171f898ea4c295ba52ac580f5ea04b4b2410
|
2026-01-23T00:00:00-05:00
|
Automatically Tightening Access Control Policies with Restricter
|
arXiv:2601.14582v2 Announce Type: replace Abstract: Robust access control is a cornerstone of secure software, systems, and networks. An access control mechanism is as effective as the policy it enforces. However, authoring effective policies that satisfy desired properties such as the principle of least privilege is a challenging task even for experienced administrators, as evidenced by many real instances of policy misconfiguration. In this paper, we set out to address this pain point by proposing Restricter, which automatically tightens each (permit) policy rule of a policy with respect to an access log, which captures some already exercised access requests and their corresponding access decisions (i.e., allow or deny). Restricter achieves policy tightening by reducing the number of access requests permitted by a policy rule without sacrificing the functionality of the underlying system it is regulating. We implement Restricter for Amazon's Cedar policy language and demonstrate its effectiveness through two realistic case studies.
|
https://arxiv.org/abs/2601.14582
|
Academic Papers
|
svg
|
78af12292445ce622c08b0720effb8d69e3ae862763dc156d6c3ffe9a9acdc7c
|
2026-01-23T00:00:00-05:00
|
An Ion-Intercalation Memristor for Enabling Full Parallel Writing in Crossbar Networks
|
arXiv:2601.14613v2 Announce Type: replace Abstract: Crossbar architectures have long been seen as a promising foundation for in-memory computing, using memristor arrays for high-density, energy-efficient analog computation. However, this conventional architecture suffers from a fundamental limitation: the inability to perform parallel write operations due to the sneak path problem. This arises from the structural overlap of read and write paths, forcing sequential or semi-parallel updates and severely limiting scalability. To address this, we introduce a new memristor design that decouples read and write operations at the device level. This design enables orthogonal conductive paths, and employs a reversible ion doping mechanism, inspired by lithium-ion battery principles, to modulate resistance states independently of computation. Fabricated devices exhibit near-ideal memristive characteristics and stable performance under isolated read/write conditions.
|
https://arxiv.org/abs/2601.14613
|
Academic Papers
|
svg
|
2a234917908e0c5ca9e6d9e71b651ed609c308673dc9fd791253f839d530ddee
|
2026-01-23T00:00:00-05:00
|
Gaming the Judge: Unfaithful Chain-of-Thought Can Undermine Agent Evaluation
|
arXiv:2601.14691v2 Announce Type: replace Abstract: Large language models (LLMs) are increasingly used as judges to evaluate agent performance, particularly in non-verifiable settings where judgments rely on agent trajectories including chain-of-thought (CoT) reasoning. This paradigm implicitly assumes that the agent's CoT faithfully reflects both its internal reasoning and the underlying environment state. We show this assumption is brittle: LLM judges are highly susceptible to manipulation of agent reasoning traces. By systematically rewriting agent CoTs while holding actions and observations fixed, we demonstrate that manipulated reasoning alone can inflate false positive rates of state-of-the-art VLM judges by up to 90% across 800 trajectories spanning diverse web tasks. We study manipulation strategies spanning style-based approaches that alter only the presentation of reasoning and content-based approaches that fabricate signals of task progress, and find that content-based manipulations are consistently more effective. We evaluate prompting-based techniques and scaling judge-time compute, which reduce but do not fully eliminate susceptibility to manipulation. Our findings reveal a fundamental vulnerability in LLM-based evaluation and highlight the need for judging mechanisms that verify reasoning claims against observable evidence.
|
https://arxiv.org/abs/2601.14691
|
Academic Papers
|
svg
|
1fb723767c3a32e5c66f62958c29db663b9c301fa8e77bef4851c5c4e84728a6
|
2026-01-23T00:00:00-05:00
|
Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning
|
arXiv:2601.14750v2 Announce Type: replace Abstract: Chain-of-Thought (CoT) prompting has achieved remarkable success in unlocking the reasoning capabilities of Large Language Models (LLMs). Although CoT prompting enhances reasoning, its verbosity imposes substantial computational overhead. Recent works often focus exclusively on outcome alignment and lack supervision on the intermediate reasoning process. These deficiencies obscure the analyzability of the latent reasoning chain. To address these challenges, we introduce Render-of-Thought (RoT), the first framework to reify the reasoning chain by rendering textual steps into images, making the latent rationale explicit and traceable. Specifically, we leverage the vision encoders of existing Vision Language Models (VLMs) as semantic anchors to align the vision embeddings with the textual space. This design ensures plug-and-play implementation without incurring additional pre-training overhead. Extensive experiments on mathematical and logical reasoning benchmarks demonstrate that our method achieves 3-4x token compression and substantial inference acceleration compared to explicit CoT. Furthermore, it maintains competitive performance against other methods, validating the feasibility of this paradigm. Our code is available at https://github.com/TencentBAC/RoT
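A minimal sketch of the rendering step, assuming a simple white-canvas layout drawn with Pillow; the paper's renderer, font, and resolution choices are not given in the abstract.

```python
from PIL import Image, ImageDraw

def render_cot(steps, width=512, line_height=18):
    """Illustrative rendering of textual reasoning steps onto one image,
    so a VLM's vision encoder can consume the chain as a compact visual
    latent. Layout and font choices here are assumptions."""
    img = Image.new("RGB", (width, line_height * (len(steps) + 1)), "white")
    draw = ImageDraw.Draw(img)
    for i, step in enumerate(steps):
        draw.text((8, 8 + i * line_height), f"{i + 1}. {step}", fill="black")
    return img

img = render_cot(["Let x = 3.", "Then 2x + 1 = 7.", "Answer: 7."])
img.save("cot.png")
```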
|
https://arxiv.org/abs/2601.14750
|
Academic Papers
|
svg
|
54a7e74ab3c50dc7cdc481df571eefccfa94351d728029713907ec0538c6cd68
|
2026-01-23T00:00:00-05:00
|
Mechanism Shift During Post-training from Autoregressive to Masked Diffusion Language Models
|
arXiv:2601.14758v2 Announce Type: replace Abstract: Post-training pretrained Autoregressive models (ARMs) into Masked Diffusion models (MDMs) has emerged as a cost-effective strategy to overcome the limitations of sequential generation. However, the internal algorithmic transformations induced by this paradigm shift remain unexplored, leaving it unclear whether post-trained MDMs acquire genuine bidirectional reasoning capabilities or merely repackage autoregressive heuristics. In this work, we address this question by conducting a comparative circuit analysis of ARMs and their MDM counterparts. Our analysis reveals a systematic "mechanism shift" dependent on the structural nature of the task. Structurally, we observe a distinct divergence: while MDMs largely retain autoregressive circuitry for tasks dominated by local causal dependencies, they abandon initialized pathways for global planning tasks, exhibiting distinct rewiring characterized by increased early-layer processing. Semantically, we identify a transition from sharp, localized specialization in ARMs to distributed integration in MDMs. Through these findings, we conclude that diffusion post-training does not merely adapt model parameters but fundamentally reorganizes internal computation to support non-sequential global planning.
|
https://arxiv.org/abs/2601.14758
|
Academic Papers
|
svg
|
12e8123937bdf2c9fec14df58c8512d7aa53cbd79be89f63091b1a1c3674eabb
|
2026-01-23T00:00:00-05:00
|
Multi-Task Transformer for Explainable Speech Deepfake Detection via Formant Modeling
|
arXiv:2601.14850v2 Announce Type: replace Abstract: In this work, we introduce a multi-task transformer for speech deepfake detection, capable of predicting formant trajectories and voicing patterns over time, ultimately classifying speech as real or fake, and highlighting whether its decisions rely more on voiced or unvoiced regions. Building on a prior speaker-formant transformer architecture, we streamline the model with an improved input segmentation strategy, redesign the decoding process, and integrate built-in explainability. Compared to the baseline, our model requires fewer parameters, trains faster, and provides better interpretability, without sacrificing prediction performance.
|
https://arxiv.org/abs/2601.14850
|
Academic Papers
|
svg
|
88ef91b79c62af583df2776fa41582f12821c58008e9ca88064a9b268ef6c4b5
|
2026-01-23T00:00:00-05:00
|
Adaptive Exponential Integration for Stable Gaussian Mixture Black-Box Variational Inference
|
arXiv:2601.14855v2 Announce Type: replace Abstract: Black-box variational inference (BBVI) with Gaussian mixture families offers a flexible approach for approximating complex posterior distributions without requiring gradients of the target density. However, standard numerical optimization methods often suffer from instability and inefficiency. We develop a stable and efficient framework that combines three key components: (1) affine-invariant preconditioning via natural gradient formulations, (2) an exponential integrator that unconditionally preserves the positive definiteness of covariance matrices, and (3) adaptive time stepping to ensure stability and to accommodate distinct warm-up and convergence phases. The proposed approach has natural connections to manifold optimization and mirror descent. For Gaussian posteriors, we prove exponential convergence in the noise-free setting and almost-sure convergence under Monte Carlo estimation, rigorously justifying the necessity of adaptive time stepping. Numerical experiments on multimodal distributions, Neal's multiscale funnel, and a PDE-based Bayesian inverse problem for Darcy flow demonstrate the effectiveness of the proposed method.
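The covariance-positivity claim can be illustrated with a toy update: propagating a covariance as $C \leftarrow E C E^\top$ with $E = \exp(hA)$ preserves symmetric positive definiteness for any step size $h$, since $E$ is always invertible, whereas a forward-Euler step $C \leftarrow C + h(AC + CA^\top)$ can lose definiteness. This is a generic sketch of the mechanism, not the paper's full scheme.

```python
import numpy as np
from scipy.linalg import expm

def exponential_covariance_step(C, A, h):
    """Propagate a covariance via the matrix exponential: C <- E C E^T
    with E = expm(h A). If C is symmetric positive definite, so is the
    result, unconditionally in h."""
    E = expm(h * A)
    return E @ C @ E.T

C = np.eye(2)
A = np.array([[-5.0, 2.0], [0.0, -3.0]])
C_next = exponential_covariance_step(C, A, 0.5)
print(np.linalg.eigvalsh(C_next))  # strictly positive eigenvalues
```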
|
https://arxiv.org/abs/2601.14855
|
Academic Papers
|
svg
|
6bd138bfa0690dd34a823fcdcaf51713873544e60b9f935a81fc19141b0696bd
|
2026-01-23T00:00:00-05:00
|
The GDN-CC Dataset: Automatic Corpus Clarification for AI-enhanced Democratic Citizen Consultations
|
arXiv:2601.14944v2 Announce Type: replace Abstract: LLMs are ubiquitous in modern NLP, and while their applicability extends to texts produced for democratic activities such as online deliberations or large-scale citizen consultations, ethical questions have been raised for their usage as analysis tools. We continue this line of research with two main goals: (a) to develop resources that can help standardize citizen contributions in public forums at the pragmatic level, and make them easier to use in topic modeling and political analysis; (b) to study how well this standardization can reliably be performed by small, open-weights LLMs, i.e. models that can be run locally and transparently with limited resources. Accordingly, we introduce Corpus Clarification as a preprocessing framework for large-scale consultation data that transforms noisy, multi-topic contributions into structured, self-contained argumentative units ready for downstream analysis. We present GDN-CC, a manually-curated dataset of 1,231 contributions to the French Grand D\'ebat National, comprising 2,285 argumentative units annotated for argumentative structure and manually clarified. We then show that finetuned Small Language Models match or outperform LLMs on reproducing these annotations, and measure their usability for an opinion clustering task. We finally release GDN-CC-large, an automatically annotated corpus of 240k contributions, the largest annotated democratic consultation dataset to date.
|
https://arxiv.org/abs/2601.14944
|
Academic Papers
|
svg
|
664d8e77f5882a3f5d66c3c6e091a38fb24c5ca5527db1d25e18a6e59eed815c
|
2026-01-23T00:00:00-05:00
|
LogicScore: Fine-grained Logic Evaluation of Conciseness, Completeness, and Determinateness in Attributed Question Answering
|
arXiv:2601.15050v2 Announce Type: replace Abstract: Current evaluation methods for Attributed Question Answering (AQA) suffer from \textit{attribution myopia}: they emphasize verification of isolated statements and their attributions but overlook the global logical integrity of long-form answers. Consequently, Large Language Models (LLMs) often produce factually grounded yet logically incoherent responses with elusive deductive gaps. To mitigate this limitation, we present \textsc{LogicScore}, a unified evaluation framework that shifts the paradigm from local assessment to global reasoning scrutiny. Grounded in Horn Rules, our approach integrates a backward verification mechanism to systematically evaluate three key reasoning dimensions: \textit{Completeness} (logically sound deduction), \textit{Conciseness} (non-redundancy), and \textit{Determinateness} (consistent answer entailment). Extensive experiments across three multi-hop QA datasets (HotpotQA, MusiQue, and 2WikiMultiHopQA) and over 20 LLMs (including GPT-5, Gemini-3-Pro, LLaMA3, and task-specific tuned models) reveal a critical capability gap: leading models often achieve high attribution scores (e.g., 92.85\% precision for Gemini-3 Pro) but struggle with global reasoning quality (e.g., 35.11\% Conciseness for Gemini-3 Pro). Our work establishes a robust standard for logical evaluation, highlighting the need to prioritize reasoning coherence alongside factual grounding in LLM development. Codes are available at: https://github.com/zhichaoyan11/LogicScore.
|
https://arxiv.org/abs/2601.15050
|
Academic Papers
|
svg
|
d9cf267eca498e19e0a2ca8c9c07ee72b401e259b2a0f9f7de55a2b0a15168a8
|
2026-01-23T00:00:00-05:00
|
DeLog: An Efficient Log Compression Framework with Pattern Signature Synthesis
|
arXiv:2601.15084v2 Announce Type: replace Abstract: Parser-based log compression, which separates static templates from dynamic variables, is a promising approach to exploit the unique structure of log data. However, its performance on complex production logs is often unsatisfactory. This performance gap coincides with a known degradation in the accuracy of its core log parsing component on such data, motivating our investigation into a foundational yet unverified question: does higher parsing accuracy necessarily lead to a better compression ratio? To answer this, we conduct the first empirical study quantifying this relationship and find that a higher parsing accuracy does not guarantee a better compression ratio. Instead, our findings reveal that compression ratio is dictated by achieving effective pattern-based grouping and encoding, i.e., the partitioning of tokens into low entropy, highly compressible groups. Guided by this insight, we design DeLog, a novel log compressor that implements a Pattern Signature Synthesis mechanism to achieve efficient pattern-based grouping. On 16 public and 10 production datasets, DeLog achieves state-of-the-art compression ratio and speed.
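A minimal sketch of pattern-based grouping, assuming a crude regex signature that masks variable-looking tokens; the paper's Pattern Signature Synthesis mechanism is more sophisticated, but the grouping-then-column-encoding idea is the same.

```python
import re
from collections import defaultdict

def pattern_signature(line):
    """Map a log line to a signature by masking variable-looking tokens
    (hex values, numbers), so lines in one group share structure and
    their token columns have low entropy."""
    return re.sub(r"0x[0-9a-f]+|\d+", "<V>", line)

def group_logs(lines):
    groups = defaultdict(list)
    for line in lines:
        groups[pattern_signature(line)].append(line)
    # Each group can then be column-encoded and compressed together.
    return groups

logs = ["conn 17 opened", "conn 42 opened", "disk 3 full"]
print({sig: len(v) for sig, v in group_logs(logs).items()})
```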
|
https://arxiv.org/abs/2601.15084
|
Academic Papers
|
svg
|
9484f557e9c7862efb5af8e8a0dcd451ec06405279749fad1fbf8eabfeb2c974
|
2026-01-23T00:00:00-05:00
|
WavLink: Compact Audio-Text Embeddings with a Global Whisper Token
|
arXiv:2601.15118v2 Announce Type: replace Abstract: Whisper has become the de facto encoder for extracting general-purpose audio features in large audio-language models, where a 30-second clip is typically represented by 1500 frame features projected into an LLM. In contrast, audio-text embedding models such as CLAP have largely relied on alternative audio encoders (e.g., HTS-AT, PaSST) and have not leveraged Whisper effectively. We present WavLink, a compact audio-text embedding model that augments the Whisper encoder with a learnable global token, trained jointly with a text encoder. Through a systematic study of design choices, including pretrained text encoders, loss functions, training modes, and data mixtures, we identify configurations that yield state-of-the-art retrieval performance. Our two-stage training recipe across three model sizes, combined with Matryoshka-style supervision, improves scalability, enabling 8x smaller embeddings with minimal performance drop. WavLink also demonstrates competitive performance on AIR-Bench with MCQs and zero-shot classification.
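Matryoshka-style supervision can be sketched as the same contrastive loss applied to nested prefixes of the embeddings, so that short prefixes remain usable on their own; the nesting dimensions, temperature, and InfoNCE loss below are assumptions.

```python
import torch
import torch.nn.functional as F

def matryoshka_infonce(audio_emb, text_emb, dims=(64, 128, 256, 512), tau=0.07):
    """Apply a symmetric-in-dimension InfoNCE loss to truncated prefixes
    of the audio and text embeddings (Matryoshka-style supervision)."""
    total = 0.0
    for d in dims:
        a = F.normalize(audio_emb[:, :d], dim=-1)
        t = F.normalize(text_emb[:, :d], dim=-1)
        logits = a @ t.T / tau                 # pairwise similarities
        labels = torch.arange(a.shape[0])      # matched pairs on the diagonal
        total = total + F.cross_entropy(logits, labels)
    return total / len(dims)

a, t = torch.randn(8, 512), torch.randn(8, 512)
print(matryoshka_infonce(a, t))
```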
|
https://arxiv.org/abs/2601.15118
|
Academic Papers
|
svg
|
0d8385fac7b5ed389cab8c28d05cf80b0fb3b275ec3247b8fb039f38508946d1
|
2026-01-23T00:00:00-05:00
|
Emerging from Ground: Addressing Intent Deviation in Tool-Using Agents via Deriving Real Calls into Virtual Trajectories
|
arXiv:2601.15120v2 Announce Type: replace Abstract: LLMs have advanced tool-using agents for real-world applications, yet they often lead to unexpected behaviors or results. Beyond obvious failures, the subtle issue of "intent deviation" severely hinders reliable evaluation and performance improvement. Existing post-training methods generally leverage either real system samples or virtual data simulated by LLMs. However, the former is costly due to reliance on hand-crafted user requests, while the latter suffers from distribution shift from the real tools in the wild. Additionally, both methods lack negative samples tailored to intent deviation scenarios, hindering effective guidance on preference learning. We introduce RISE, a "Real-to-Virtual" method designed to mitigate intent deviation. Anchoring on verified tool primitives, RISE synthesizes virtual trajectories and generates diverse negative samples through mutation on critical parameters. With synthetic data, RISE fine-tunes backbone LLMs via two-stage training for intent alignment. Evaluation results demonstrate that data synthesized by RISE achieve promising results in eight metrics covering user requests, execution trajectories, and agent responses. When integrated with training, RISE achieves an average 35.28% improvement in Acctask (task completion) and 23.27% in Accintent (intent alignment), outperforming SOTA baselines by 1.20--42.09% and 1.17--54.93% respectively.
|
https://arxiv.org/abs/2601.15120
|
Academic Papers
|
svg
|
465524c65bdb1801e246b6238f172f4bd44ee6512b46452f03ed63f17fef711b
|
2026-01-23T00:00:00-05:00
|
BayesianVLA: Bayesian Decomposition of Vision Language Action Models via Latent Action Queries
|
arXiv:2601.15197v2 Announce Type: replace Abstract: Vision-Language-Action (VLA) models have shown promise in robot manipulation but often struggle to generalize to new instructions or complex multi-task scenarios. We identify a critical pathology in current training paradigms where goal-driven data collection creates a dataset bias. In such datasets, language instructions are highly predictable from visual observations alone, causing the conditional mutual information between instructions and actions to vanish, a phenomenon we term Information Collapse. Consequently, models degenerate into vision-only policies that ignore language constraints and fail in out-of-distribution (OOD) settings. To address this, we propose BayesianVLA, a novel framework that enforces instruction following via Bayesian decomposition. By introducing learnable Latent Action Queries, we construct a dual-branch architecture to estimate both a vision-only prior $p(a \mid v)$ and a language-conditioned posterior $\pi(a \mid v, \ell)$. We then optimize the policy to maximize the conditional Pointwise Mutual Information (PMI) between actions and instructions. This objective effectively penalizes the vision shortcut and rewards actions that explicitly explain the language command. Without requiring new data, BayesianVLA significantly improves generalization. Extensive experiments on SimplerEnv and RoboCasa demonstrate substantial gains, including an 11.3% improvement on the challenging OOD SimplerEnv benchmark, validating the ability of our approach to robustly ground language in action.
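The PMI objective admits a compact sketch: score an action by the log-ratio of the language-conditioned posterior to the vision-only prior, $\log \pi(a \mid v, \ell) - \log p(a \mid v)$. The discrete action head and tensor shapes below stand in for the paper's dual-branch architecture and are assumptions.

```python
import torch
import torch.nn.functional as F

def pmi_reward(posterior_logits, prior_logits, action):
    """Score an action by log pi(a|v,l) - log p(a|v): actions the
    instruction makes more likely than vision alone get positive
    reward, penalizing vision-only shortcuts."""
    logp_post = F.log_softmax(posterior_logits, dim=-1)[action]
    logp_prior = F.log_softmax(prior_logits, dim=-1)[action]
    return logp_post - logp_prior

post = torch.tensor([2.0, 0.0, 0.0])   # language-conditioned branch
prior = torch.tensor([0.0, 0.0, 2.0])  # vision-only branch
print(pmi_reward(post, prior, action=0))  # positive: instruction-grounded
```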
|
https://arxiv.org/abs/2601.15197
|
Academic Papers
|
svg
|
aafe279768a7cd099050b29e6fd67803e7efdba8811084b5f4ed5f9fbec84ff0
|
2026-01-23T00:00:00-05:00
|
Deaf and Hard of Hearing Access to Intelligent Personal Assistants: Comparison of Voice-Based Options with an LLM-Powered Touch Interface
|
arXiv:2601.15209v2 Announce Type: replace Abstract: We investigate the accessibility of intelligent personal assistants (IPAs) for deaf and hard of hearing (DHH) people who can use their voice in everyday communication. The inability of IPAs to understand diverse accents, including deaf speech, renders them largely inaccessible to non-signing and speaking DHH individuals. Using an Echo Show, we compare the usability of natural language input via spoken English, both with Alexa's automatic speech recognition and in a Wizard-of-Oz setting with a trained facilitator re-speaking commands, against that of a large language model (LLM)-assisted touch interface in a mixed-methods study. The touch method was navigated through an LLM-powered "task prompter," which integrated the user's history and smart environment to suggest contextually appropriate commands. Quantitative results showed no significant differences across both spoken English conditions vs LLM-assisted touch. Qualitative results showed variability in opinions on the usability of each method. Ultimately, robust recognition of deaf-accented speech will need to be supported natively by IPAs.
|
https://arxiv.org/abs/2601.15209
|
Academic Papers
|
svg
|
33368b7b22b0ff857c27a7f501f12cf4c96fc563246c2fa7021d29e638a0f39b
|
2026-01-23T00:00:00-05:00
|
Recommending Best Paper Awards for ML/AI Conferences via the Isotonic Mechanism
|
arXiv:2601.15249v2 Announce Type: replace Abstract: Machine learning and artificial intelligence conferences such as NeurIPS and ICML now regularly receive tens of thousands of submissions, posing significant challenges to maintaining the quality and consistency of the peer review process. This challenge is particularly acute for best paper awards, which are an important part of the peer review process, yet whose selection has increasingly become a subject of debate in recent years. In this paper, we introduce an author-assisted mechanism to facilitate the selection of best paper awards. Our method employs the Isotonic Mechanism for eliciting authors' assessments of their own submissions in the form of a ranking, which is subsequently utilized to adjust the raw review scores for optimal estimation of the submissions' ground-truth quality. We demonstrate that authors are incentivized to report truthfully when their utility is a convex additive function of the adjusted scores, and we validate this convexity assumption for best paper awards using publicly accessible review data of ICLR from 2019 to 2023 and NeurIPS from 2021 to 2023. Crucially, in the special case where an author has a single quota -- that is, may nominate only one paper -- we prove that truthfulness holds even when the utility function is merely nondecreasing and additive. This finding represents a substantial relaxation of the assumptions required in prior work. For practical implementation, we extend our mechanism to accommodate the common scenario of overlapping authorship. Finally, simulation results demonstrate that our mechanism significantly improves the quality of papers selected for awards.
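The score adjustment at the heart of the Isotonic Mechanism can be sketched with off-the-shelf isotonic regression: project the raw review scores onto the set of score vectors consistent with the author's self-reported ranking (a monotone least-squares problem). This illustrates the core projection only, not the paper's full estimator or incentive analysis.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_adjust(raw_scores, author_ranking):
    """Project raw review scores onto vectors that are nonincreasing in
    the author's claimed ranking (best paper first), via isotonic
    regression over rank positions."""
    order = np.asarray(author_ranking)          # paper indices, best to worst
    iso = IsotonicRegression(increasing=False)  # scores nonincreasing in rank
    adjusted = iso.fit_transform(np.arange(len(order)),
                                 np.asarray(raw_scores, dtype=float)[order])
    out = np.empty_like(adjusted)
    out[order] = adjusted                       # map back to paper indices
    return out

# Author claims paper 2 > paper 0 > paper 1; the raw scores fully violate
# this order, so the projection pools them to a common value.
print(isotonic_adjust([6.0, 6.5, 5.8], author_ranking=[2, 0, 1]))
```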
|
https://arxiv.org/abs/2601.15249
|
Academic Papers
|
svg
|
c95210438db727f7aff0c712dc16e5956eb1de73bb74c953df96077cf1125e22
|
2026-01-23T00:00:00-05:00
|
Control Occupation Kernel Regression for Nonlinear Control-Affine Systems
|
arXiv:2106.00103v2 Announce Type: replace-cross Abstract: This manuscript presents an algorithm for obtaining an approximation of a nonlinear high-order control-affine dynamical system. Controlled trajectories of the system are leveraged as the central unit of information by embedding them in a vector-valued reproducing kernel Hilbert space (vvRKHS). The trajectories are embedded as so-called higher-order control occupation kernels, which represent an operator on the vvRKHS corresponding to iterated integration after multiplication by a given controller. The solution to the system identification problem is then the unique solution of an infinite-dimensional regularized regression problem. The representer theorem is then used to express the solution as a finite linear combination of these occupation kernels, which converts the infinite-dimensional optimization problem into a finite-dimensional one. The vector-valued structure of the Hilbert space allows for simultaneous approximation of the drift and control effectiveness components of the control-affine system. Several experiments are performed to demonstrate the effectiveness of the developed approach.
|
https://arxiv.org/abs/2106.00103
|
Academic Papers
|
svg
|
5522053e6960ed47558c53b29f2e952bcbf732e91a263b8e9cf3fb763a1238f6
|
2026-01-23T00:00:00-05:00
|
From Text to Image: Exploring GPT-4Vision's Potential in Advanced Radiological Analysis across Subspecialties
|
arXiv:2311.14777v2 Announce Type: replace-cross Abstract: The study evaluates and compares GPT-4 and GPT-4Vision for radiological tasks, suggesting GPT-4Vision may recognize radiological features from images, thereby enhancing its diagnostic potential over text-based descriptions.
|
https://arxiv.org/abs/2311.14777
|
Academic Papers
|
svg
|
180c5ae5999601d463e712567d6b75c035ab29721d725816055e633e258eb853
|
2026-01-23T00:00:00-05:00
|
Totally symmetric Grassmannian codes
|
arXiv:2406.19542v2 Announce Type: replace-cross Abstract: We introduce a general technique to construct tight fusion frames with prescribed symmetries. Applying this technique with a prescription for "all the symmetries", we construct a new family of equi-isoclinic tight fusion frames (EITFFs), which consequently form optimal Grassmannian codes. By virtue of their construction, our EITFFs have the remarkable property of total symmetry: any permutation of subspaces can be achieved by an appropriate unitary.
|
https://arxiv.org/abs/2406.19542
|
Academic Papers
|
svg
|
362b7e1f4b2a2b01b74671e3a067718f15dd9c96d9ea69d76615644b773fba51
|
2026-01-23T00:00:00-05:00
|
Integer programs with nearly totally unimodular matrices: the cographic case
|
arXiv:2407.09477v2 Announce Type: replace-cross Abstract: It is a notorious open question whether integer programs (IPs), with an integer coefficient matrix $M$ whose subdeterminants are all bounded by a constant $\Delta$ in absolute value, can be solved in polynomial time. We answer this question in the affirmative if we further require that, by removing a constant number of rows and columns from $M$, one obtains a submatrix $A$ that is the transpose of a network matrix. Our approach focuses on the case where $A$ arises from $M$ after removing $k$ rows only, where $k$ is a constant. We achieve our result in two main steps, the first related to the theory of IPs and the second related to graph minor theory. First, we derive a strong proximity result for the case where $A$ is a general totally unimodular matrix: Given an optimal solution of the linear programming relaxation, an optimal solution to the IP can be obtained by finding a constant number of augmentations by circuits of $[A\; I]$. Second, for the case where $A$ is the transpose of a network matrix, we reformulate the problem as a maximum constrained integer potential problem on a graph $G$. We observe that if $G$ is $2$-connected, then it has no rooted $K_{2,t}$-minor for $t = \Omega(k \Delta)$. We leverage this to obtain a tree-decomposition of $G$ into highly structured graphs for which we can solve the problem locally. This allows us to solve the global problem via dynamic programming.
|
https://arxiv.org/abs/2407.09477
|
Academic Papers
|
svg
|
136d9ab3a9cb6494390cec35039431aac23acae61297caea448093ef1212b3f8
|
2026-01-23T00:00:00-05:00
|
The Software Complexity of Nations
|
arXiv:2407.13880v2 Announce Type: replace-cross Abstract: Despite the growing importance of the digital sector, research on economic complexity and its implications continues to rely mostly on administrative records, e.g. data on exports, patents, and employment, that have blind spots when it comes to the digital economy. In this paper we use data on the geography of programming languages used in open-source software to extend economic complexity ideas to the digital economy. We estimate a country's software economic complexity index (ECI$_{\text{software}}$) and show that it complements the ability of measures of complexity based on trade, patents, and research to account for international differences in GDP per capita, income inequality, and emissions. We also show that open-source software follows the principle of relatedness, meaning that a country's entries and exits in programming languages are partly explained by its current pattern of specialization. Together, these findings help extend economic complexity ideas and their policy implications to the digital economy.
|
https://arxiv.org/abs/2407.13880
|
Academic Papers
|
svg
|
12f3d203cb5a009ad0fca6bac1c07aa5c09d5edeba86589124c1b45786c6ef84
|
2026-01-23T00:00:00-05:00
|
Beyond Fixed Horizons: A Theoretical Framework for Adaptive Denoising Diffusions
|
arXiv:2501.19373v2 Announce Type: replace-cross Abstract: We introduce a new class of generative diffusion models that, unlike conventional denoising diffusion models, achieve a time-homogeneous structure for both the noising and denoising processes, allowing the number of steps to adaptively adjust based on the noise level. This is accomplished by conditioning the forward process using Doob's $h$-transform, which terminates the process at a suitable sampling distribution at a random time. The model is particularly well suited for generating data with lower intrinsic dimensions, as the termination criterion simplifies to a first-hitting rule. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model. These tasks include natural conditioning through appropriate initialisation of the denoising process and classification of noisy data.
|
https://arxiv.org/abs/2501.19373
|
Academic Papers
|
svg
|
5bff0af865fb1eef68c80ad983bfefd9759feabbdef2e78bc313dce5ba39c86e
|
2026-01-23T00:00:00-05:00
|
Myrvold's Results on Orthogonal Triples of $10 \times 10$ Latin Squares: A SAT Investigation
|
arXiv:2503.10504v2 Announce Type: replace-cross Abstract: Ever since E. T. Parker constructed an orthogonal pair of $10\times10$ Latin squares in 1959, an orthogonal triple of $10\times10$ Latin squares has been one of the most sought-after combinatorial designs. Despite extensive work, the existence of such an orthogonal triple remains an open problem, though some negative results are known. In 1999, W. Myrvold derived some highly restrictive constraints in the special case in which one of the Latin squares in the triple contains a $4\times4$ Latin subsquare. In particular, Myrvold showed there were twenty-eight possible cases for an orthogonal pair in such a triple, twenty of which were removed from consideration. We implement a computational approach that quickly verifies all of Myrvold's nonexistence results and in the remaining eight cases finds explicit examples of orthogonal pairs -- thus explaining for the first time why Myrvold's approach left eight cases unsolved. As a consequence, the eight remaining cases cannot be removed by a strategy of focusing on the existence of an orthogonal pair; the third square in the triple must necessarily be considered as well. Our approach uses a Boolean satisfiability (SAT) solver to derive the nonexistence of twenty of the orthogonal pair types and find explicit examples of orthogonal pairs in the eight remaining cases. To reduce the existence problem into Boolean logic we use a duality between the concepts of transversal representation and orthogonal pair and we provide a formulation of this duality in terms of a composition operation on Latin squares. Using our SAT encoding, we find transversal representations (and equivalently orthogonal pairs) in the remaining eight cases in under two hours of computing on a large computing cluster.
|
https://arxiv.org/abs/2503.10504
|
Academic Papers
|
svg
|
f823bfff78cc10c8c86dfe68cbb87d8eae4a8e668f87b6e0c2caff6fb5dd634c
|
2026-01-23T00:00:00-05:00
|
Formalising the Bruhat-Tits Tree
|
arXiv:2505.12933v2 Announce Type: replace-cross Abstract: In this article we describe the formalisation of the Bruhat-Tits tree - an important tool in modern number theory - in the Lean Theorem Prover. Motivated by the goal of connecting to ongoing research, we apply our formalisation to verify a result about harmonic cochains on the tree.
|
https://arxiv.org/abs/2505.12933
|
Academic Papers
|
svg
|
e4ca201baf53054ccfb818f81b92edb4ac9ab623202f121009038667bf7b025f
|
2026-01-23T00:00:00-05:00
|
Numerical Optimization Strategies for the Variational Hamiltonian Ansatz in Noisy Quantum Environments
|
arXiv:2505.22398v4 Announce Type: replace-cross Abstract: The prevalence of variational methods in near-term quantum computing makes optimizer choice critical, yet selection is frequently intuition-based. We therefore present a systematic benchmark of eight classical optimization algorithms for variational quantum chemistry using the truncated Variational Hamiltonian Ansatz. Performance is evaluated on H$_2$, H$_4$, and LiH in both full and active-space representations under noiseless and finite-shot sampling noise. Sampling noise substantially reshapes cost landscapes, induces wandering near minima, and flips optimizer rankings: gradient-based methods perform best in noiseless simulations, whereas population-based optimizers, particularly CMA-ES, show greater robustness under finite-shot noise. Optimizer performance is strongly problem dependent: Hartree-Fock initialization aids small systems, but its advantage diminishes with system size. We also observe that finite-shot sampling frequently violates the lower bound given by the variational principle, a principle that cannot strictly hold in the presence of noise. By exploiting the guaranteed convergence of Evolution Strategies to a steady-state distribution defined by the noise floor, we utilize the symmetry of these violations to achieve energy estimation precision beyond the intrinsic sampling limit.
|
https://arxiv.org/abs/2505.22398
|
Academic Papers
|
svg
|
b6ec9f4fda7564fce1371ee497c26940042170419354252b59395befbed60bbd
|
2026-01-23T00:00:00-05:00
|
Modelling the Effects of Hearing Loss on Neural Coding in the Auditory Midbrain with Variational Conditioning
|
arXiv:2506.03088v2 Announce Type: replace-cross Abstract: The mapping from sound to neural activity that underlies hearing is highly non-linear. The first few stages of this mapping in the cochlea have been modelled successfully, with biophysical models built by hand and, more recently, with DNN models trained on datasets simulated by biophysical models. Modelling the auditory brain has been a challenge because central auditory processing is too complex for models to be built by hand, and datasets for training DNN models directly have not been available. Recent work has taken advantage of large-scale, high-resolution neural recordings from the auditory midbrain to build a DNN model of normal hearing with great success. But this model assumes that auditory processing is the same in all brains, and therefore it cannot capture the widely varying effects of hearing loss. We propose a novel variational-conditional model to learn to encode the space of hearing loss directly from recordings of neural activity in the auditory midbrain of healthy and noise-exposed animals. With hearing loss parametrised by only 6 free parameters per animal, our model accurately predicts 62% of the explainable variance in neural responses from normal hearing animals and 68% for hearing impaired animals, within a few percentage points of state-of-the-art animal-specific models. We demonstrate that the model can be used to simulate realistic activity from out-of-sample animals by fitting only the learned conditioning parameters with Bayesian optimisation, achieving cross-entropy loss within 2% of the optimum in 15-30 iterations. Including more animals in the training data slightly improved the performance on unseen animals. This model will enable future development of parametrised hearing loss compensation models trained to directly restore normal neural coding in hearing impaired brains, which can be quickly fitted for a new user by human-in-the-loop optimisation.
|
https://arxiv.org/abs/2506.03088
|
Academic Papers
|
svg
|
85c87bf05f5fd8a9a127e8e5f617cf786648fa0307c2fb12204327a56bd35d78
|
2026-01-23T00:00:00-05:00
|
Thinning to improve two-sample discrepancy
|
arXiv:2506.20932v2 Announce Type: replace-cross Abstract: The discrepancy between two independent samples \(X_1,\dots,X_n\) and \(Y_1,\dots,Y_n\) drawn from the same distribution on $\mathbb{R}^d$ typically has order \(O(\sqrt{n})\) even in one dimension. We give a simple online algorithm that reduces the discrepancy to \(O(\log^{2d} n)\) by discarding a small fraction of the points.
|
https://arxiv.org/abs/2506.20932
|
Academic Papers
|
svg
|
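The abstract does not spell out the algorithm, so the sketch below is only a naive greedy baseline in one dimension: merge the samples in sorted order and discard any point that would push the running class imbalance past a cap, which bounds the KS-style discrepancy of the kept points. The paper's method and its $O(\log^{2d} n)$ guarantee are considerably more refined.

```python
# Naive greedy baseline (NOT the paper's algorithm): cap the running
# imbalance of the merged, sorted 1-d samples by discarding points.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x, y = rng.normal(size=n), rng.normal(size=n)

def thin(x, y, cap):
    pts = sorted([(v, +1) for v in x] + [(v, -1) for v in y])
    keep_x, keep_y, c, dropped = [], [], 0, 0
    for v, s in pts:
        if abs(c + s) > cap:      # keeping this point would break the cap
            dropped += 1
            continue
        c += s
        (keep_x if s > 0 else keep_y).append(v)
    return np.array(keep_x), np.array(keep_y), dropped

cap = 10
kx, ky, dropped = thin(x, y, cap)
print(f"dropped {dropped}/{2*n} points ({100*dropped/(2*n):.1f}%); "
      f"running imbalance of kept points capped at {cap}")
```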
dfb31af41db6197d6f1d33bf38cc98e99589c6a8e6699418a986e389129e07a3
|
2026-01-23T00:00:00-05:00
|
Distance-Domain Degrees of Freedom in Near-Field Region
|
arXiv:2507.01227v3 Announce Type: replace-cross Abstract: Extremely large aperture arrays operating in the near-field regime unlock additional spatial resources, which can be exploited to simultaneously serve multiple users even when they share the same angular direction. This work investigates the distance-domain degrees of freedom (DoF), defined as the DoF when a user varies only its distance to the base station and not the angle. To obtain the distance-domain DoF, we study a line-of-sight (LoS) channel with a source representing a base station and an observation region representing users, where the source is a large two-dimensional transmit (Tx) array with arbitrary shape and the observation region is an arbitrarily long linear receive (Rx) array with collinearly aligned elements located at different distances from the Tx array. We assume that both the Tx and Rx arrays have continuous apertures with an infinite number of elements and infinitesimal spacing, which establishes an upper bound for the distance-domain DoF in the case of a finite number of elements. First, we analyze an ideal case where the Tx array is a single piece and the Rx array is on the broadside of the Tx array. By reformulating the channel as an integral operator with a Hermitian convolution kernel, we derive a closed-form expression for the distance-domain DoF via the Fourier transform. Our analysis shows that the distance-domain DoF is predominantly determined by the extreme boundaries of both the Tx and Rx arrays rather than their detailed interior structure. We further extend the framework to non-broadside configurations by employing a projection method that converts the problem to an equivalent broadside case. Finally, we extend the analytical framework to modular arrays and show the distance-domain DoF gain over a single-piece array under a fixed total physical length.
|
https://arxiv.org/abs/2507.01227
|
Academic Papers
|
svg
|
15393c2830a86f1a40b8f29f2fab0b2440b13fd8b44c6e794786520bd7306392
|
2026-01-23T00:00:00-05:00
|
Dynamical stability for dense patterns in discrete attractor neural networks
|
arXiv:2507.10383v4 Announce Type: replace-cross Abstract: Neural networks storing multiple discrete attractors are canonical models of biological memory. Previously, the dynamical stability of such networks could only be guaranteed under highly restrictive conditions. Here, we derive a theory of the local stability of discrete fixed points in a broad class of networks with graded neural activities and in the presence of noise. By directly analyzing the bulk and the outliers of the Jacobian spectrum, we show that all fixed points are stable below a critical load that is distinct from the classical \textit{critical capacity} and depends on the statistics of neural activities in the fixed points as well as the single-neuron activation function. Our analysis highlights the computational benefits of threshold-linear activation and sparse-like patterns.
|
https://arxiv.org/abs/2507.10383
|
Academic Papers
|
svg
|
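The object the paper analyzes, the Jacobian spectrum at a fixed point of a graded-activity network, can be sketched numerically. The toy below linearizes a threshold-linear rate network at an assumed activity pattern and inspects the eigenvalue with the largest real part; the random coupling and the pattern are illustrative stand-ins, not the paper's storage rule.

```python
# Illustrative linear-stability check (not the paper's storage rule):
# threshold-linear rate network dr/dt = -r + relu(W r + b), linearized at
# an assumed activity pattern r; J = -I + D W with D = diag(relu'(input)).
import numpy as np

rng = np.random.default_rng(2)
N = 500
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # random coupling
r = np.maximum(rng.normal(size=N), 0.0)              # stand-in fixed point

gain = (r > 0).astype(float)                         # relu derivative 0/1
J = -np.eye(N) + gain[:, None] * W                   # row-scale W by the gain

eig = np.linalg.eigvals(J)
print("max Re(eigenvalue):", eig.real.max())         # < 0 => locally stable
```

The silent units (zero gain) shrink the effective bulk radius of the spectrum, a small-scale echo of the paper's point that the activity statistics at the fixed point set the stability threshold.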
c362df7f024bd36e334382cd25dab01e3d6e49314ff287cc1d4edfac8c5287cd
|
2026-01-23T00:00:00-05:00
|
Likelihood Matching for Diffusion Models
|
arXiv:2508.03636v2 Announce Type: replace-cross Abstract: We propose a Likelihood Matching approach for training diffusion models by first establishing an equivalence between the likelihood of the target data distribution and a likelihood along the sample path of the reverse diffusion. To efficiently compute the reverse sample likelihood, a quasi-likelihood is considered that approximates each reverse transition density by a Gaussian distribution with matched conditional mean and covariance. The score and Hessian functions for the diffusion generation are estimated by maximizing the quasi-likelihood, ensuring consistent matching of the first two transition moments between every pair of time points. A stochastic sampler is introduced to facilitate computation that leverages both the estimated score and Hessian information. We establish consistency of the quasi-maximum likelihood estimation, and provide non-asymptotic convergence guarantees for the proposed sampler, quantifying the rates of the approximation errors due to the score and Hessian estimation, dimensionality, and the number of diffusion steps. Empirical and simulation evaluations demonstrate the effectiveness of the proposed Likelihood Matching and validate the theoretical results.
|
https://arxiv.org/abs/2508.03636
|
Academic Papers
|
svg
|
3186ca3a0bd31174f09bc9fa028e2f1400d3952b6ea7538762b71b86c8b31c44
|
2026-01-23T00:00:00-05:00
|
Implementing Optimal Taxation: A Constrained Optimization Framework for Tax Reform
|
arXiv:2508.03708v3 Announce Type: replace-cross Abstract: While optimal taxation theory provides clear prescriptions for tax design, translating these insights into actual tax codes remains difficult. Existing work largely offers theoretical characterizations of optimal systems, while practical implementation methods are scarce. Bridging this gap involves designing tax rules that meet theoretical goals, while accommodating administrative, distributional, and other practical constraints that arise in real-world reform. We develop a method casting tax reform as a constrained optimization problem by parametrizing the entire income tax code as a set of piecewise linear functions mapping tax-relevant inputs into liabilities and marginal rates. This allows users to impose constraints on marginal rate schedules, limits on income swings, and objectives like revenue neutrality, efficiency, simplicity, or distributional fairness that reflect both theoretical and practical considerations. The framework is computationally tractable for complex tax codes and flexible enough to accommodate diverse constraints, welfare objectives and behavioral responses. Whereas existing tools are typically used for ex-post `what-if' analysis of specific reforms, our framework explicitly incorporates real-world reform constraints and jointly optimizes across the full tax code. We illustrate the framework in several simulated settings, including a detailed reconstruction of the Dutch income tax system. For the Dutch case, we generate a family of reforms that smooth existing spikes in marginal tax rates to any desired cap, reduce the number of rules, and impose hard caps on income losses households can experience from the reform. We also introduce \texttt{TaxSolver}, an open-source package, allowing policymakers and researchers to implement and extend the framework.
|
https://arxiv.org/abs/2508.03708
|
Academic Papers
|
svg
|
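The core framing above, the tax code as a constrained program over piecewise-linear schedules, can be sketched directly with SciPy. The toy below optimizes bracket marginal rates under a revenue-neutrality equality constraint and hard rate caps; the bracket edges, income draw, and "flatness" objective are all illustrative assumptions, and the released \texttt{TaxSolver} package is not used here.

```python
# Hedged sketch: bracket marginal rates as decision variables, revenue
# neutrality as an equality constraint, hard caps as bounds. Brackets,
# incomes, and the smoothness objective are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

incomes = np.random.default_rng(3).lognormal(mean=10.5, sigma=0.6, size=5000)
edges = np.array([0, 20_000, 50_000, 100_000, np.inf])

def liabilities(rates):
    in_bracket = np.clip(incomes[:, None] - edges[:-1], 0,
                         edges[1:] - edges[:-1])      # income taxed per bracket
    return in_bracket @ rates

baseline = np.array([0.10, 0.30, 0.40, 0.50])
target_revenue = liabilities(baseline).sum()          # revenue to preserve

res = minimize(
    lambda r: np.sum(np.diff(r) ** 2),                # example: smooth schedule
    x0=baseline,
    bounds=[(0.0, 0.6)] * 4,                          # caps on marginal rates
    constraints=[{"type": "eq",
                  "fun": lambda r: liabilities(r).sum() - target_revenue}],
)
print("optimized marginal rates:", res.x.round(3))
```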
2af447fbe947a5a35970ca0e4aca95591ea1ccfd0ddfe30df79224efc64cb4f5
|
2026-01-23T00:00:00-05:00
|
Can LLMs Identify Tax Abuse?
|
arXiv:2508.20097v2 Announce Type: replace-cross Abstract: We investigate whether large language models can discover and analyze U.S. tax-minimization strategies. This real-world domain challenges even seasoned human experts, and progress can reduce tax revenue lost from well-advised, wealthy taxpayers. We evaluate the most advanced LLMs on their ability to (1) interpret and verify tax strategies, (2) fill in gaps in partially specified strategies, and (3) generate complete, end-to-end strategies from scratch. This domain should be of particular interest to the LLM reasoning community: unlike synthetic challenge problems or scientific reasoning tasks, U.S. tax law involves navigating hundreds of thousands of pages of statutes, case law, and administrative guidance, all updated regularly. Notably, LLM-based reasoning identified an entirely novel tax strategy, highlighting these models' potential to revolutionize tax agencies' fight against tax abuse.
|
https://arxiv.org/abs/2508.20097
|
Academic Papers
|
svg
|
0c73f77e4a07cf6699fa5f25ac774a52c2d2767e7493ec423851e2006ca6a23a
|
2026-01-23T00:00:00-05:00
|
Attentive AV-FusionNet: Audio-Visual Quality Prediction with Hybrid Attention
|
arXiv:2509.16994v2 Announce Type: replace-cross Abstract: We introduce a novel deep learning-based audio-visual quality (AVQ) prediction model that leverages internal features from state-of-the-art unimodal predictors. Unlike prior approaches that rely on simple fusion strategies, our model employs a hybrid representation that combines learned Generative Machine Listener (GML) audio features with hand-crafted Video Multimethod Assessment Fusion (VMAF) video features. Attention mechanisms capture cross-modal interactions and intra-modal relationships, yielding context-aware quality representations. A modality relevance estimator quantifies each modality's contribution per content, potentially enabling adaptive bitrate allocation. Experiments demonstrate improved AVQ prediction accuracy and robustness across diverse content types.
|
https://arxiv.org/abs/2509.16994
|
Academic Papers
|
svg
|
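The hybrid fusion described above can be sketched with standard attention modules. In the toy model below, projected audio features attend to video features and vice versa before pooling to a scalar quality score; the dimensions, layer sizes, and pooling head are our assumptions, not the paper's architecture.

```python
# Illustrative cross-modal attention fusion for AV quality (dimensions,
# pooling, and head are assumptions, not the paper's architecture).
import torch
import torch.nn as nn

class CrossModalAVQ(nn.Module):
    def __init__(self, d_audio=128, d_video=64, d_model=128, heads=4):
        super().__init__()
        self.proj_a = nn.Linear(d_audio, d_model)  # GML-style audio features
        self.proj_v = nn.Linear(d_video, d_model)  # VMAF-style video features
        self.att_av = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.att_va = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * d_model, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, audio, video):        # (B, T, d_audio), (B, T, d_video)
        a, v = self.proj_a(audio), self.proj_v(video)
        a2, _ = self.att_av(a, v, v)        # audio queries video context
        v2, _ = self.att_va(v, a, a)        # video queries audio context
        fused = torch.cat([a2.mean(1), v2.mean(1)], dim=-1)
        return self.head(fused).squeeze(-1)

model = CrossModalAVQ()
print(model(torch.randn(2, 50, 128), torch.randn(2, 50, 64)).shape)  # [2]
```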
692948b691995d864ea4788a7da3b31f40d075fb184945414aaf2509a213944c
|
2026-01-23T00:00:00-05:00
|
An Efficient Quality Metric for Video Frame Interpolation Based on Motion-Field Divergence
|
arXiv:2510.01361v2 Announce Type: replace-cross Abstract: Video frame interpolation is a fundamental tool for temporal video enhancement, but existing quality metrics struggle to evaluate the perceptual impact of interpolation artefacts effectively. Metrics like PSNR, SSIM and LPIPS ignore temporal coherence. State-of-the-art quality metrics tailored towards video frame interpolation, like FloLPIPS, have been developed but suffer from computational inefficiency that limits their practical application. We present $\text{PSNR}_{\text{DIV}}$, a novel full-reference quality metric that enhances PSNR through motion divergence weighting, a technique adapted from archival film restoration, where it was developed to detect temporal inconsistencies. Our approach highlights singularities in motion fields, which are then used to weight image errors. Evaluation on the BVI-VFI dataset (180 sequences across multiple frame rates, resolutions and interpolation methods) shows $\text{PSNR}_{\text{DIV}}$ achieves statistically significant improvements: +0.09 Pearson Linear Correlation Coefficient over FloLPIPS, while being 2.5$\times$ faster and using 4$\times$ less memory. Performance remains consistent across all content categories and is robust to the motion estimator used. The efficiency and accuracy of $\text{PSNR}_{\text{DIV}}$ enable fast quality evaluation and practical use as a loss function for training neural networks for video frame interpolation tasks. An implementation of our metric is available at www.github.com/conalld/psnr-div.
|
https://arxiv.org/abs/2510.01361
|
Academic Papers
|
svg
|
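A minimal version of the mechanism, weighting pixel errors by the divergence of an estimated motion field, is sketched below. The weighting function and constants are assumptions on our part; the released implementation linked in the abstract defines the actual metric.

```python
# Sketch of divergence-weighted PSNR; the paper's exact weighting and motion
# estimator differ, this only illustrates the mechanism.
import numpy as np

def psnr_div(ref, dist, flow, peak=255.0, alpha=4.0):
    """flow: (H, W, 2) motion field; up-weight errors where |div(flow)| is large."""
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    div = np.abs(du_dx + dv_dy)                     # motion-field divergence
    w = 1.0 + alpha * div / (div.max() + 1e-12)     # assumed weighting scheme
    mse = np.sum(w * (ref - dist) ** 2) / np.sum(w)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(4)
ref = rng.uniform(0, 255, (64, 64))
dist = ref + rng.normal(0, 5, ref.shape)
flow = rng.normal(0, 1, (64, 64, 2))
print(f"PSNR_DIV ~ {psnr_div(ref, dist, flow):.2f} dB")
```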
ffec9c216ed84e811e057bda532642ada6f85a8eff029552397ea92ee05468a7
|
2026-01-23T00:00:00-05:00
|
AtomWorld: A Benchmark for Evaluating Spatial Reasoning in Large Language Models on Crystalline Materials
|
arXiv:2510.04704v3 Announce Type: replace-cross Abstract: Large Language Models (LLMs) excel at textual reasoning and are beginning to develop spatial understanding, prompting the question of whether these abilities can be combined for complex, domain-specific tasks. This question is essential in fields like materials science, where deep understanding of 3D atomic structures is fundamental. While initial studies have successfully applied LLMs to tasks involving pure crystal generation or coordinate understanding, a standardized benchmark to systematically evaluate their core reasoning abilities across diverse atomic structures has been notably absent. To address this gap, we introduce the AtomWorld benchmark to evaluate LLMs on tasks based on Crystallographic Information Files (CIFs), a standard structure representation format. These tasks, including structural editing, CIF perception, and property-guided modeling, reveal a critical limitation: current models, despite establishing promising baselines, consistently fail in structural understanding and spatial reasoning. Our experiments show that these models make frequent errors on structure modification tasks, and even in basic understanding of the CIF format, potentially leading to cumulative errors in subsequent analysis and materials insights. By defining these standardized tasks, AtomWorld lays the groundwork for advancing LLMs toward robust atomic-scale modeling, crucial for accelerating materials research and automating scientific workflows.
|
https://arxiv.org/abs/2510.04704
|
Academic Papers
|
svg
|
7688e998fa57e9a60243c5621f2caca631e3a15d000d490cc567276216095a09
|
2026-01-23T00:00:00-05:00
|
Quantum matrix arithmetics with Hamiltonian evolution
|
arXiv:2510.06316v2 Announce Type: replace-cross Abstract: The efficient implementation of matrix arithmetic operations underpins the speedups of many quantum algorithms. We develop a suite of methods to perform matrix arithmetics -- with the result encoded in the off-diagonal blocks of a Hamiltonian -- using Hamiltonian evolutions of input operators. We show how to maintain this $\textit{Hamiltonian block encoding}$, so that matrix operations can be composed one after another, and the entire quantum computation takes $\leq 2$ ancilla qubits. We achieve this for matrix multiplication, matrix addition, matrix inversion, Hermitian conjugation, fractional scaling, integer scaling, complex phase scaling, as well as singular value transformation for both odd and even polynomials. We also present an overlap estimation algorithm to extract classical properties of Hamiltonian block encoded operators, analogous to the well-known Hadamard test, at no extra qubit cost. Our Hamiltonian matrix multiplication uses the Lie group commutator product formula and its higher-order generalizations due to Childs and Wiebe. Our Hamiltonian singular value transformation employs a dominated polynomial approximation, where the approximation holds within the domain of interest, while the constructed polynomial is upper bounded by the target function over the entire unit interval. We describe a circuit for simulating a class of sum-of-squares Hamiltonians, attaining a commutator scaling in step count, while leveraging the power of matrix arithmetics to reduce the cost of each simulation step. In particular, we apply this to the doubly factorized tensor hypercontracted Hamiltonians from recent studies of quantum chemistry, obtaining further improvements for initial states with a fixed number of particles. We achieve this with $1$ ancilla qubit.
|
https://arxiv.org/abs/2510.06316
|
Academic Papers
|
svg
|
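The Lie group commutator identity underlying the Hamiltonian multiplication step can be stated at lowest order as follows (the higher-order Childs--Wiebe formulas sharpen the error term):
\[
e^{-iAt}\,e^{-iBt}\,e^{iAt}\,e^{iBt} \;=\; e^{-i\,t^{2}\,(-i[A,B])} + O(t^{3}),
\]
so a short cycle of forward and backward evolutions under Hermitian $A$ and $B$ approximates evolution under the Hermitian operator $-i[A,B]$ for an effective time $t^{2}$.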
9b4d629740b96f08ad39df496317e65ba20a09860d75060e75245f6b6240f778
|
2026-01-23T00:00:00-05:00
|
Quantization-Based Score Calibration for Few-Shot Keyword Spotting with Dynamic Time Warping in Noisy Environments
|
arXiv:2510.15432v2 Announce Type: replace-cross Abstract: Detecting occurrences of keywords with keyword spotting (KWS) systems requires thresholding continuous detection scores. Selecting appropriate thresholds is a non-trivial task, typically relying on optimizing performance on a validation dataset. However, such greedy threshold selection often leads to suboptimal performance on unseen data, particularly in varying or noisy acoustic environments or few-shot settings. In this work, we investigate detection threshold estimation for template-based open-set few-shot KWS using dynamic time warping (DTW) on noisy speech data. To mitigate the performance degradation caused by suboptimal thresholds, we propose a score calibration approach that operates at the embedding level by quantizing learned representations and applying quantization error-based normalization prior to DTW-based scoring and thresholding. Experiments on KWS-DailyTalk with simulated high-frequency radio channels show that the proposed calibration approach simplifies the selection of robust detection thresholds and significantly improves the resulting performance.
|
https://arxiv.org/abs/2510.15432
|
Academic Papers
|
svg
|
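A hedged sketch of the pipeline shape: quantize the embedding frames, keep the quantization error, and use it to normalize the DTW score before thresholding. The quantizer, error statistic, and normalization form below are our assumptions; the paper's calibration differs in detail.

```python
# Sketch only: uniform quantization of embedding frames plus an error-based
# score normalization before DTW; the paper's calibration differs in detail.
import numpy as np

def quantize(E, levels=16):
    lo, hi = E.min(), E.max()
    q = np.round((E - lo) / (hi - lo) * (levels - 1)) / (levels - 1)
    q = q * (hi - lo) + lo
    return q, np.abs(E - q).mean()           # mean quantization error

def dtw(A, B):
    """Plain DTW over frame sequences A (m,d), B (n,d), Euclidean cost."""
    m, n = len(A), len(B)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n] / (m + n)                 # path-length normalized score

rng = np.random.default_rng(5)
template, query = rng.normal(size=(20, 32)), rng.normal(size=(24, 32))
qt, e1 = quantize(template)
qq, e2 = quantize(query)
raw = dtw(template, query)
calibrated = dtw(qt, qq) / (1.0 + e1 + e2)   # assumed normalization form
print(f"raw {raw:.3f} -> calibrated {calibrated:.3f}")
```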
26e1e059aaa0a38eb701e5659a08f7f77d4a02fa56a2cbcab916c30457e935e7
|
2026-01-23T00:00:00-05:00
|
The Sleeping Beauty Problem: Sleeping Kelly is a Thirder
|
arXiv:2510.15911v2 Announce Type: replace-cross Abstract: The Sleeping Beauty problem is a problem of imperfect recall that has received considerable attention. One approach to solving the Sleeping Beauty problem is to allow Sleeping Beauty to make decisions based on her beliefs, and then characterize what it takes for her decisions to be "rational". In particular, she can be allowed to make monetary bets based on her beliefs, with the assumption that she wants to gain wealth rather than lose it. However, this approach is often coupled with the assumption that Sleeping Beauty should maximize the expected value of her bets. Here, we show that Sleeping Beauty maximizes the expected growth rate of her wealth as a "thirder" sizing bets using the Kelly Criterion under multiplicative dynamics. Furthermore, this position is shown to be impervious to Dutch books. By contrast, the "halfer" position is shown to be vulnerable to Dutch books under similar circumstances.
|
https://arxiv.org/abs/2510.15911
|
Academic Papers
|
svg
|
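The thirder claim has a compact numerical check under one concrete protocol (our assumption: an even-odds bet on Tails at every awakening, wealth compounding multiplicatively). Heads yields one losing awakening and Tails two winning ones, so the per-awakening fraction maximizing expected log-growth is the thirder's Kelly bet $f^{*} = 2\cdot\tfrac{2}{3} - 1 = \tfrac{1}{3}$, not the halfer's $f = 0$.

```python
# Assumed protocol: even-odds bet on Tails at every awakening, wealth
# multiplicative across awakenings. Heads -> one losing awakening;
# Tails -> two winning awakenings.
import numpy as np

def log_growth(f, trials=200_000, seed=6):
    rng = np.random.default_rng(seed)
    heads = rng.random(trials) < 0.5
    g = np.where(heads, np.log1p(-f), 2.0 * np.log1p(f))
    return g.mean()

fs = np.linspace(0.0, 0.9, 91)
best = fs[np.argmax([log_growth(f) for f in fs])]
print(f"empirical optimum f ~ {best:.2f}  (thirder Kelly 1/3, halfer 0)")
```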
871518f0555c12f734bbb9c7e5616238ad3c44b63d689c06d4852c8ac0f4db5f
|
2026-01-23T00:00:00-05:00
|
Comparing the latent features of universal machine-learning interatomic potentials
|
arXiv:2512.05717v2 Announce Type: replace-cross Abstract: The past few years have seen the development of ``universal'' machine-learning interatomic potentials (uMLIPs) capable of approximating the ground-state potential energy surface across a wide range of chemical structures and compositions with reasonable accuracy. While these models differ in the architecture and the dataset used, they share the ability to compress a staggering amount of chemical information into descriptive latent features. Herein, we systematically analyze what the different uMLIPs have learned by quantitatively assessing the relative information content of their latent features with feature reconstruction errors, and observing how the trends are affected by the choice of training set and training protocol. We find that uMLIPs encode the chemical space in significantly distinct ways, with substantial cross-model feature reconstruction errors. When variants of the same model architecture are considered, trends become dependent on the dataset, target, and training protocol of choice. We also observe that fine-tuning of a uMLIP retains a strong pre-training bias in the latent features. Finally, we discuss how atom-level features, which are directly output by MLIPs, can be compressed into global structure-level features via concatenation of progressive cumulants, each adding significantly new information about the variability across the atomic environments within a given system.
|
https://arxiv.org/abs/2512.05717
|
Academic Papers
|
svg
|
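The feature-reconstruction-error idea can be sketched as a ridge regression from one model's latents to another's. The paper's measure and its train/test protocol are more careful, so treat the following only as the shape of the computation, on synthetic stand-in features.

```python
# Sketch: ridge-regress model B's latent features from model A's and report
# the relative residual; illustrative shape of the computation only.
import numpy as np

def reconstruction_error(FA, FB, lam=1e-6):
    """FA, FB: (n_environments, d) latent features from two uMLIPs."""
    FA = (FA - FA.mean(0)) / (FA.std(0) + 1e-12)
    FB = (FB - FB.mean(0)) / (FB.std(0) + 1e-12)
    W = np.linalg.solve(FA.T @ FA + lam * np.eye(FA.shape[1]), FA.T @ FB)
    return np.linalg.norm(FB - FA @ W) / np.linalg.norm(FB)

rng = np.random.default_rng(7)
FA = rng.normal(size=(2000, 64))
FB = np.tanh(FA @ rng.normal(size=(64, 64)) / np.sqrt(64))  # synthetic "model B"
print("A -> B error:", reconstruction_error(FA, FB).round(3))
print("A -> A error:", reconstruction_error(FA, FA).round(3))  # ~ 0
```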
df6a88299edaa5fa7d9c6a05c76a1f838e71054dfac2aeacb1b9f1c7c914eaf8
|
2026-01-23T00:00:00-05:00
|
Conformal Blindness: A Note on $A$-Cryptic change-points
|
arXiv:2601.01147v2 Announce Type: replace-cross Abstract: Conformal Test Martingales (CTMs) are a standard method within the Conformal Prediction framework for testing the crucial assumption of data exchangeability by monitoring deviations from uniformity in the p-value sequence. Although exchangeability implies uniform p-values, the converse does not hold. This raises the question of whether a significant break in exchangeability can occur, such that the p-values remain uniform, rendering CTMs blind. We answer this affirmatively, demonstrating the phenomenon of \emph{conformal blindness}. Through explicit construction, for the theoretically ideal ``predictive oracle'' conformity measure (given by the true conditional density), we demonstrate the possibility of an \emph{$A$-cryptic change-point} (where $A$ refers to the conformity measure). Using bivariate Gaussian distributions, we identify a line along which a change in the marginal means does not alter the distribution of the conformity scores, thereby producing perfectly uniform p-values. Simulations confirm that even a massive distribution shift can be perfectly cryptic to the CTM, highlighting a fundamental limitation and emphasising the critical role of the alignment of the conformity measure with potential shifts. By contrasting the predictive oracle with recent results on detection-optimal scores, we emphasise that validity monitoring in safety-critical systems requires careful separation of predictive and diagnostic goals.
|
https://arxiv.org/abs/2601.01147
|
Academic Papers
|
svg
|
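For readers unfamiliar with the machinery being stress-tested, the sketch below builds smoothed conformal p-values from a conformity score and feeds them to a simple betting martingale; under exchangeability the p-values are uniform and the martingale stays small. The cryptic change-point itself requires the paper's specific bivariate construction and is not reproduced here.

```python
# Minimal CTM machinery: smoothed conformal p-values from a conformity
# score, fed to a betting martingale; the cryptic change-point construction
# itself needs the paper's specific bivariate geometry.
import numpy as np

rng = np.random.default_rng(8)

def conformal_pvalues(z, score):
    p = []
    for n in range(1, len(z)):
        s = score(z[: n + 1])                    # scores of the first n+1 points
        ties = np.sum(s[:-1] == s[-1]) + 1       # include the new point itself
        p.append((np.sum(s[:-1] > s[-1]) + rng.random() * ties) / (n + 1))
    return np.array(p)

def betting_martingale(p, eps=0.1):
    return np.cumprod(eps * p ** (eps - 1.0))    # calibrator integrates to 1

z = rng.normal(size=(500, 2))                    # exchangeable bivariate data
pvals = conformal_pvalues(
    z, score=lambda w: np.linalg.norm(w - w.mean(0), axis=1))
M = betting_martingale(pvals)
print("final martingale value:", round(float(M[-1]), 4))
```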
694548e29357817d39dc29429bf0b170cbedd57c8aae24d9539bf5197ed36534
|
2026-01-23T00:00:00-05:00
|
Where do We Poop? City-Wide Simulation of Defecation Behavior for Wastewater-Based Epidemiology
|
arXiv:2601.04231v2 Announce Type: replace-cross Abstract: Wastewater surveillance, which regularly examines pathogen biomarkers in wastewater samples, is a valuable tool for monitoring infectious diseases circulating in communities. Yet most wastewater-based epidemiology methods, which use wastewater surveillance results for disease inference, implicitly assume that individuals excrete only at their residential locations and that the population contributing to wastewater samples is static. These simplifying assumptions ignore daily mobility, social interactions, and heterogeneous toilet-use behavior patterns, which can lead to biased interpretation of wastewater results, especially at upstream sampling locations such as neighborhoods, institutions, or buildings. Here, we introduce an agent-based geospatial simulation framework: building on an established Patterns of Life model, we simulate daily human activities, mobility, and social contacts within a realistic urban environment and extend this agent-based framework with a physiologically motivated defecation cycle and toilet usage patterns. We couple this behavioral model with an infectious disease model to simulate transmission through spatial and social interactions. When a defecation event occurs for an infected agent, we use a pathogen shedding model to determine the amount of pathogen shed in the feces. Such a framework, integrating population mobility, disease transmission, toilet-use behavior, and pathogen shedding models, can simulate the spatio-temporal dynamics of wastewater signals for a city. Using a case study of 10,000 simulated agents in Fulton County, Georgia, we examine how varying infection rates alter epidemic trajectories, pathogen loads in wastewater, and the spatial distribution of contamination across time.
|
https://arxiv.org/abs/2601.04231
|
Academic Papers
|
svg
|
8f27c479dfc043bc9f958101a1c76b8fa5d4a2cc396fde756e97bf5198a2d342
|
2026-01-23T00:00:00-05:00
|
Generalization to Political Beliefs from Fine-Tuning on Sports Team Preferences
|
arXiv:2601.04369v3 Announce Type: replace-cross Abstract: Fine-tuned LLMs often exhibit unexpected behavior as a result of generalizing beyond the data they're shown. We present results in which an LLM fine-tuned to prefer either coastal sports teams or Southern sports teams adopts political beliefs that diverge significantly from those of the base model. While we hypothesized that the coastal model would become more liberal and the Southern model would become more conservative, we find that their responses are usually similar to each other, without a clear-cut liberal or conservative bias. In addition to asking the models for numerical ratings of agreement with relevant political statements, we ask them to elaborate on their more radical answers, finding varying degrees of willingness to justify themselves. Further work is needed to understand the mechanisms by which fine-tuning on simple, narrow datasets leads to seemingly unrelated changes in model behavior.
|
https://arxiv.org/abs/2601.04369
|
Academic Papers
|
svg
|
75560840c234e5485164e6320ec766fbfc656f58b192f718ae3b2d0da6df464d
|
2026-01-23T00:00:00-05:00
|
Finite-Sample Inference for Sparsely Permuted Linear Regression
|
arXiv:2601.14872v2 Announce Type: replace-cross Abstract: We study a linear observation model with an unknown permutation called \textit{permuted/shuffled linear regression}, where responses and covariates are mismatched and the permutation forms a discrete, factorial-size parameter. The permutation is a key component of the data-generating process, yet its statistical investigation remains challenging due to its discrete nature. We develop a general statistical inference framework on the permutation and regression coefficients. First, we introduce a localization step that reduces the permutation space to a small candidate set building on recent advances in the repro samples method, whose miscoverage decays polynomially with the number of Monte Carlo samples. Then, based on this localized set, we provide statistical inference procedures: a conditional Monte Carlo test of permutation structures with valid finite-sample Type-I error control. We also develop coefficient inference that remains valid under alignment uncertainty of permutations. For computational purposes, we develop a linear assignment formulation computable in polynomial time and demonstrate that, with high probability, its solution coincides with that of the conventional least-squares approach, which carries a far larger computational cost. Extensions to partially permuted designs and ridge regularization are further discussed. Extensive simulations and an application to air-quality data corroborate finite-sample validity, strong power to detect mismatches, and practical scalability.
|
https://arxiv.org/abs/2601.14872
|
Academic Papers
|
svg
|
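The linear-assignment step is easy to illustrate: given fitted values, matching them to the shuffled responses by squared error is a polynomial-time assignment problem. The sketch below cheats by using the true coefficients as an oracle fit, whereas the paper must estimate them jointly.

```python
# Oracle illustration of the assignment step: match shuffled responses to
# fitted values with the Hungarian algorithm (polynomial time). The paper
# estimates the coefficients too; here beta is given for clarity.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(9)
n, d = 200, 3
X = rng.normal(size=(n, d))
beta = np.array([1.0, -2.0, 0.5])
pi_true = rng.permutation(n)
y = (X @ beta)[pi_true] + 0.1 * rng.normal(size=n)   # mismatched responses

cost = (y[:, None] - (X @ beta)[None, :]) ** 2       # cost[i, j] = (y_i - xb_j)^2
_, col = linear_sum_assignment(cost)                 # optimal matching
print("fraction of permutation recovered:", np.mean(col == pi_true).round(3))
```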
dabb003f65872065990a83bd979a019e94e920103ba286ac06ddbcf658591944
|
2026-01-23T00:00:00-05:00
|
Low-magnitude seismic activity between the Kamchatka July 20 and July 29, 2025, earthquakes. Spatio-temporal evolution recovered using waveform cross-correlation
|
arXiv:2601.15302v1 Announce Type: new Abstract: The M8.8 Kamchatka earthquake on July 29, 2025 was one of the largest in the first quarter of the 21st century. It deserves thorough analysis, including of the preparation process. A smaller M7.4 earthquake occurred on July 20 with its epicenter within the confidence ellipse for the July 29 event. The aftershock sequence of the July 20 earthquake and the evolution of seismicity within the Kamchatka Peninsula region during the 10-day period before the July 29 event may provide important information on the earthquake preparation and initiation processes. The CTBTO's International Monitoring System (IMS) is one of the most sensitive global seismic networks, comprising high-resolution array stations with enhanced sensitivity relative to three-component stations at the same locations. The International Data Centre (IDC) of the CTBTO processes IMS data automatically and interactively to create a Reviewed Event Bulletin (REB), which serves as a source of information for the International Seismological Centre. Waveform cross-correlation (WCC) adds detection capability to IMS data and IDC processing when repeated seismicity is analyzed. The aftershock sequence of the July 20 earthquake is recovered using WCC-based detection and phase association techniques applied to the IMS data in order to accurately describe the spatio-temporal evolution of the seismic process just before the July 29 event. With the reduced detection threshold, smaller events are found in zones where the REB has no located sources. This finding opens up the possibility of a more detailed study of seismic and mechanical processes before the July 29 mainshock.
|
https://arxiv.org/abs/2601.15302
|
Academic Papers
|
svg
|
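The WCC detection principle is a sliding normalized cross-correlation of a master-event template against the continuous trace. The sketch below plants a scaled template in synthetic noise and flags threshold exceedances; station geometry, phase association, and the IMS data themselves are of course absent.

```python
# Toy WCC detector: sliding normalized cross-correlation of a template
# against a synthetic trace; no station geometry or phase association.
import numpy as np

def ncc(trace, template):
    m = len(template)
    t0 = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(trace) - m + 1)
    for i in range(len(out)):
        w = trace[i : i + m]
        out[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), t0) / m
    return out

rng = np.random.default_rng(10)
template = np.sin(np.linspace(0, 12 * np.pi, 300)) * np.hanning(300)
trace = rng.normal(size=10_000)
trace[4000:4300] += 2.0 * template        # buried repeat of the master event
cc = ncc(trace, template)
hits = np.flatnonzero(cc > 0.4)
print("first detections near sample:", hits[:3], "| peak cc:", cc.max().round(2))
```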
4193532283a20032e4a5464e67f7ead4d1337450aa1e080ba434b976f83ac2c7
|
2026-01-23T00:00:00-05:00
|
High-Frequency Switching in Superparamagnetic Magnetic Tunnel Junctions by Enhancing Damping
|
arXiv:2601.15315v1 Announce Type: new Abstract: Superparamagnetic magnetic tunnel junctions (sMTJs) are promising components for true random number generation and probabilistic computing. Achieving high-frequency fluctuation while maintaining reliable control over output level is critical for applications. In this work, we systematically investigate the role of magnetic damping in regulating thermal switching rates using macrospin simulations. We show that enhanced damping accelerates the switching rate by increasing the escape rate over the energy barrier. We further compare two control mechanisms: spin-transfer torque (STT) and voltage-controlled exchange coupling (VCEC). Our results reveal that STT-based switching is strongly suppressed under high damping, whereas VCEC, by reshaping the energy landscape without relying on torque-driven dynamics, retains high control efficiency. These findings suggest that enhanced damping not only enables faster stochastic switching in sMTJs but also makes VCEC inherently better suited than STT for high-frequency applications.
|
https://arxiv.org/abs/2601.15315
|
Academic Papers
|
svg
|
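A hedged way to summarize the damping dependence reported above: in the Néel--Brown picture the thermally activated switching rate is
\[
\Gamma(\alpha, T) \;=\; f_0(\alpha)\,\exp\!\left(-\frac{\Delta E}{k_B T}\right),
\qquad f_0 \;\propto\; \frac{\alpha}{1+\alpha^{2}},
\]
so the attempt rate grows with $\alpha$ in the low-damping regime relevant to sMTJs. This prefactor form is only the standard asymptotic intuition for uniaxial macrospins; the paper's stochastic macrospin simulations resolve the dependence directly.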
473e6cd82dd3a9c6a629a3cc02cdfa5f70e107a3e6b46d58a536d73b7eaa756b
|
2026-01-23T00:00:00-05:00
|
Z(3) Metastable Bubbles and Chiral Dynamics Across a Dark-QCD Deconfinement Transition
|
arXiv:2601.15342v1 Announce Type: new Abstract: We present a self-contained theoretical analysis of a dark-QCD chiral transition in which the Polyakov-loop sector retains an explicit $Z(3)$ structure and couples consistently to the chiral order parameter. Working within a coupled chiral--Polyakov effective theory, we map the homogeneous vacuum landscape and identify a metastability window bounded by spinodal loss of stability. We then construct $Z(3)$ domain-wall solutions including chiral backreaction, extracting temperature-dependent wall profiles and surface tension. Finally, we connect homogeneous metastability and wall microphysics to thermal bubble nucleation by evaluating the critical radius $R_c(T)$ and the nucleation exponent $S_3(T)/T$ in the thin-wall regime, providing a compact set of reproducible diagnostics for the decay of the metastable phase. Our results establish a coherent pipeline from vacuum structure to nonperturbative interfaces and nucleation barriers, suitable for systematic extensions to full multi-field bounce calculations and dark-sector cosmological applications.
|
https://arxiv.org/abs/2601.15342
|
Academic Papers
|
svg
|
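For reference, the thin-wall quantities the abstract evaluates follow from the free energy of a spherical bubble with surface tension $\sigma(T)$ and bulk free-energy gain $\Delta F(T)$:
\[
F(R) = 4\pi R^{2}\sigma - \frac{4}{3}\pi R^{3}\Delta F,
\qquad
R_c = \frac{2\sigma}{\Delta F},
\qquad
\frac{S_3}{T} = \frac{16\pi}{3}\,\frac{\sigma^{3}}{(\Delta F)^{2}\,T}.
\]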
67e7f3deb77f70dfafb81dedbe539f6d5d773b5b53791c152092230c6cd00a9b
|
2026-01-23T00:00:00-05:00
|
Metastable Transitions and $\Gamma$--Convergent Eyring--Kramers Asymptotics in Landau--QCD Gradient Systems
|
arXiv:2601.15343v1 Announce Type: new Abstract: We develop a rigorous analytical framework for metastable stochastic transitions in Landau--type gradient systems inspired by QCD phenomenology. The functional $F(\sigma;u)=\int_\Omega [\frac{\kappa}{2}|\nabla\sigma|^2+V(\sigma;u)]\,dx$, depending smoothly on a control parameter $u\in\mathcal U$, is analyzed through the Euler--Lagrange map $\mathcal{E}(\sigma;u)=-\kappa\Delta\sigma+V'(\sigma;u)$ and its Hessian $\mathcal{L}_{\sigma,u}=-\kappa\Delta+V''(\sigma;u)$. By combining variational methods, $\Gamma$-- and Mosco convergence, and spectral perturbation theory, we establish the persistence and stability of local minima and index--one saddles under parameter deformations and variational discretizations. The associated mountain--pass solutions form Cerf--continuous branches away from the discriminant set $\mathcal D=\{u:\det\mathcal L_{\sigma,u}=0\}$, whose crossings produce only fold or cusp catastrophes in generic one-- and two--parameter slices. The $\Gamma$--limit is taken with respect to the $L^2(\Omega)$ topology, ensuring compactness, convergence of gradient flows, and spectral continuity of $\mathcal L_{\sigma,u}$. As a consequence, the Eyring--Kramers formula for the mean transition time between metastable wells retains quantitative validity under both parameter deformations and discretization refinement, with convergent free--energy barriers, unstable eigenvalues, and zeta--regularized determinant ratios. This construction unifies the classical intuition of Eyring, Kramers, and Langer with modern variational and spectral analysis, providing a mathematically consistent and physically transparent foundation for metastable decay and phase conversion in Landau--QCD--type systems.
|
https://arxiv.org/abs/2601.15343
|
Academic Papers
|
svg
|
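In the paper's notation, the Eyring--Kramers asymptotics being stabilized take the schematic form, for escape at noise strength $\varepsilon$ from a minimum $\sigma_{\min}$ through an index-one saddle $\sigma^{*}$:
\[
\mathbb{E}[\tau] \;=\; \frac{2\pi}{|\lambda_{-}(\sigma^{*})|}\,
\sqrt{\frac{|\det \mathcal{L}_{\sigma^{*},u}|}{\det \mathcal{L}_{\sigma_{\min},u}}}\;
\exp\!\left(\frac{F(\sigma^{*};u)-F(\sigma_{\min};u)}{\varepsilon}\right)\,(1+o(1)),
\]
where $\lambda_{-}(\sigma^{*})$ is the unique negative eigenvalue of $\mathcal{L}_{\sigma^{*},u}$ and the determinant ratio is zeta-regularized in infinite dimensions; these are precisely the quantities whose convergence under deformation and discretization the abstract asserts.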
115f6c7e131d2ebd78bf84862378b87849c1814cc18679d685a94a349360de24
|
2026-01-23T00:00:00-05:00
|
Phantom model for intracranial pressure
|
arXiv:2601.15357v1 Announce Type: new Abstract: This report presents the MODÈFONE project, whose objective is to develop a simplified experimental model of the cerebrospinal system in order to investigate fluid-structure interactions and physiological adaptations under altered gravity conditions, with a particular focus on microgravity. The experimental setup is based on a pulsatile hydraulic circuit reproducing systolic and diastolic dynamics, coupled with deformable elements simulating vascular compliance and a cranial compartment immersed in a fluid representing cerebrospinal fluid. This model enables the analysis of cranial and spinal pressures as well as their pulsatility. The purpose of this report is to describe the design and the results of the experimental setup.
|
https://arxiv.org/abs/2601.15357
|
Academic Papers
|
svg
|
649656c35ab5ca574e71bcac3beeb557a2d6091761d09aa9e38e81ee7eaa48ef
|
2026-01-23T00:00:00-05:00
|
On the quantum separability of qubit registers
|
arXiv:2601.15364v1 Announce Type: new Abstract: We show that the bipartite separability of a pure qubit state hinges critically on the combinatorial structure of its computational-basis support. Using Boolean cube geometry, we introduce a taxonomy that distinguishes support-guaranteed separability from cases in which entanglement depends on probability amplitudes. We provide closed-form support counts, identify forbidden configurations that enforce multipartite entanglement, and show how these results can enable fast entanglement diagnostics in quantum circuits. The framework offers immediate utility in classical simulation, entanglement-aware circuit design, and quantum error-correcting code analysis. This establishes support geometry as a practical and scalable tool for understanding entanglement in quantum information processing.
|
https://arxiv.org/abs/2601.15364
|
Academic Papers
|
svg
|
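A standard computational companion to the abstract's question: a pure state is separable across a bipartition iff the reshaped amplitude matrix has Schmidt rank 1, which an SVD checks directly. The support-based taxonomy asks when the pattern of nonzero amplitudes alone already forces rank $> 1$.

```python
# Separability check for a pure state across a bipartition: Schmidt rank 1
# iff separable; the support taxonomy asks when the zero pattern alone
# forces rank > 1.
import numpy as np

def schmidt_rank(psi, dims, tol=1e-10):
    dA, dB = dims
    s = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False)
    return int(np.sum(s > tol))

plus0 = np.kron([1, 1], [1, 0]) / np.sqrt(2)   # |+>|0>, product state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
print(schmidt_rank(plus0, (2, 2)))             # 1 -> separable
print(schmidt_rank(bell, (2, 2)))              # 2 -> entangled
```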
ebfedd5e3c03643f8ae9be9de03561c2d71aebaff2703a522fa733c59160c09c
|
2026-01-23T00:00:00-05:00
|
Asymptotic scaling theory of electrostatic turbulent transport in magnetised fusion plasmas
|
arXiv:2601.15391v1 Announce Type: new Abstract: Turbulent transport remains one of the principal obstacles to achieving efficient magnetic confinement in fusion devices. Two of the dominant drivers of the turbulence are microscale instabilities fuelled by electron- and ion-temperature gradients (ETG and ITG), whose nonlinear saturation determines the cross-field transport of particles and energy. Despite decades of study, predictive modelling of this turbulence has been limited either to expensive gyrokinetic simulations or to reduced models calibrated by fitting to numerical or experimental data, restricting their utility for reactor design. Here we present a simple asymptotic scaling theory that unifies ETG- and ITG-driven turbulence within a common framework. By balancing the fundamental time scales of linear growth, nonlinear decorrelation, and parallel propagation, the theory isolates the dependence of the heat flux on equilibrium parameters to two key quantities: the parallel system scale and the outer-scale aspect ratio. We show that these quantities encapsulate the essential physics of saturation, leading to distinct predictions for ETG and ITG transport: a cubic scaling with the temperature gradient in the electron channel, and a linear scaling in the ion channel. Extensive nonlinear gyrokinetic simulations confirm that these theoretical predictions hold irrespective of the magnetic geometry (slab, tokamak, or stellarator), including the first numerical confirmation of the cubic ETG scaling anticipated by earlier theory. By isolating the dependence on just the parallel system scale and the outer-scale aspect ratio, our framework provides a physics-based foundation for fast, geometry-aware transport models, offering a pathway toward reactor optimisation in both tokamaks and stellarators.
|
https://arxiv.org/abs/2601.15391
|
Academic Papers
|
svg
|
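The headline scalings, written with the customary normalized gradients (a notational choice on our part; the abstract states only cubic versus linear scaling with the respective temperature gradients):
\[
Q_e^{\mathrm{ETG}} \;\propto\; \left(\frac{R}{L_{T_e}}\right)^{3},
\qquad
Q_i^{\mathrm{ITG}} \;\propto\; \frac{R}{L_{T_i}}.
\]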
930372875afe0a1e7670a0ac214a8f23fe258acd9b007897f3b7af373f614402
|
2026-01-23T00:00:00-05:00
|
Bayesian identification of fibrous insulation thermal conductivity towards design of spacecraft thermal protection systems
|
arXiv:2601.15427v1 Announce Type: new Abstract: The design of spacecraft thermal protection systems (TPS) requires accurate knowledge of thermal transport properties across wide ranges of temperature and pressure. For fibrous insulation, conventional measurement techniques in laboratory settings are typically limited to temperatures much lower than what is reached in atmosphere entry scenarios. Moreover, it is often the case that only temperature measurements are available, meaning that the thermal conductivity of the insulation must be indirectly inferred as an inverse problem. We propose a Bayesian framework using information field theory (IFT) to reconstruct the thermal conductivity of high-temperature fibrous insulation from sparse experimental data. Under IFT, the conductivity is represented as a Gaussian process, and the physics is enforced via a physics-informed prior over the temperature derived from the heat equation. Bayes's rule produces an infinite-dimensional posterior distribution that quantifies uncertainty about the conductivity which can be evaluated in extrapolation regimes. We apply the method to Opacified Fibrous Insulation with both synthetic and experimental data to reconstruct the thermal conductivity beyond the experimental regime. The inferred conductivities are validated against reference data and then propagated into high-fidelity digital twins of flexible TPS performance under Mars and Earth entry trajectories. The results show that IFT yields accurate predictions with quantified uncertainty, enabling robust TPS sizing in regimes inaccessible to direct measurement.
|
https://arxiv.org/abs/2601.15427
|
Academic Papers
|
svg
|
f800b46a2f175a97d6d0aebf9d2a5a695011fe5c9c8b03e48cf348ad93a5e458
|
2026-01-23T00:00:00-05:00
|
Attosecond-timing millimeter waves via Kerr optical frequency division
|
arXiv:2601.15456v1 Announce Type: new Abstract: Millimeter-wave oscillators underpin key applications in communication, spectroscopy, radar, and astronomy, yet their achievable spectral purity remains limited. Approaches that directly generate millimeter-wave carriers are fundamentally limited by quantum and thermal phase-noise processes. Here we show that these limits can be overcome by combining Kerr-induced optical frequency division in a chip-scale microresonator with a large-spacing dual-wavelength Brillouin laser. This 3.3 THz optical reference injection-locks a Kerr soliton microcomb, with a repetition rate that becomes a coherently divided 300 GHz carrier with phase noise below the quantum limit of a corresponding 300 GHz dual-wavelength Brillouin laser and far below the thermo-refractive noise of a microring resonator. Cross-correlation phase-noise measurements were developed to show that the resulting oscillator reaches a phase-noise floor of -152 dBc/Hz at 1 MHz offset, consistent with photodetection shot noise. Integration of the measured spectrum yields an RMS timing jitter of 135 as from 1 kHz to 1 MHz. These results establish optical frequency division as a generic method for generation of sub-terahertz carriers with coherence no longer constrained by direct-generation limits.
|
https://arxiv.org/abs/2601.15456
|
Academic Papers
|
svg
|
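For context, ideal coherent division of the 3.3 THz Brillouin reference down to the 300 GHz repetition rate is division by $N = 11$, which lowers the phase-noise power spectral density by
\[
20\log_{10} N \;=\; 20\log_{10} 11 \;\approx\; 20.8\ \mathrm{dB},
\]
assuming a noiseless divider; in practice, excess noise in the comb and in photodetection sets the floor actually measured.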