| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| cfe8b8392644351f481cd798316f71e6e4051f3705f9c82d199063d1b4e3ada3 | 2026-02-02T00:00:00-05:00 | Compressed Set Representations based on Set Difference | arXiv:2601.23240v1 Announce Type: new Abstract: We introduce a compressed representation of sets of sets that exploits how much they differ from each other. Our representation supports access, membership, predecessor and successor queries on the sets within logarithmic time. In addition, we give a new MST-based construction algorithm for the representation that outperforms standard ones. | https://arxiv.org/abs/2601.23240 | Academic Papers | svg |
| 3866de8181e4476264b8203b157f622437115afccbcb96265fbe9f81bc747051 | 2026-02-02T00:00:00-05:00 | A Primal-Dual Level Set Method for Computing Geodesic Distances | arXiv:2601.23244v1 Announce Type: new Abstract: The numerical computation of shortest paths or geodesics on surfaces, along with the associated geodesic distance, has a wide range of applications. Compared to Euclidean distance computation, these tasks are more complex due to the influence of surface geometry on the behavior of shortest paths. This paper introduces a primal-dual level set method for computing geodesic distances. A key insight is that the underlying surface can be implicitly represented as a zero level set, allowing us to formulate a constraint minimization problem. We employ the primal-dual methodology, along with regularization and acceleration techniques, to develop our algorithm. This approach is robust, efficient, and easy to implement. We establish a convergence result for the high-resolution PDE system, and numerical evidence suggests that the method converges to a geodesic in the limit of refinement. | https://arxiv.org/abs/2601.23244 | Academic Papers | svg |
| 9d0ee7d868242ad2dbbbc9bb2597990ff160e1e99712afb6374e94e48567f52b | 2026-02-02T00:00:00-05:00 | The Iterated Local Model for tournaments | arXiv:2601.23246v1 Announce Type: new Abstract: Transitivity is a central, generative principle in social and other complex networks, capturing the tendency for two nodes with a common neighbor to form a direct connection. We propose a new model for highly dense, complex networks based on transitivity, called the Iterated Local Model Tournament (ILMT). In ILMT, we iteratively apply transitivity to form new tournaments by cloning nodes and their adjacencies, and either preserving or reversing the orientation of existing arcs between clones. The resulting model generates tournaments with small diameters and high connectivity as observed in real-world complex networks. We analyze subtournaments or motifs in the ILMT model and their universality properties. For many parameter choices, the model generates sequences of quasirandom tournaments. We also study the graph-theoretic properties of ILMT tournaments, including their cop number, domination number, and chromatic number. We finish with a set of open problems and variants of the ILMT model for oriented graphs. | https://arxiv.org/abs/2601.23246 | Academic Papers | svg |
| de633812189a0f49165412966ad6c1293ab00b7b4782a51b7b957731f151eaa0 | 2026-02-02T00:00:00-05:00 | (Doubly) Exponential Lower Bounds for Follow the Regularized Leader in Potential Games | arXiv:2601.23248v1 Announce Type: new Abstract: Follow the regularized leader (FTRL) is the premier algorithm for online optimization. However, despite decades of research on its convergence in constrained optimization -- and potential games in particular -- its behavior has hitherto remained poorly understood. In this paper, we establish that FTRL can take exponential time to converge to a Nash equilibrium in two-player potential games for any (permutation-invariant) regularizer and potentially vanishing learning rate. By known equivalences, this translates to an exponential lower bound for certain mirror descent counterparts, most notably multiplicative weights update. On the positive side, we establish the potential property for FTRL and obtain an exponential upper bound $\exp(O_{\epsilon}(1/\epsilon^2))$ for any no-regret dynamics executed in a lazy, alternating fashion, matching our lower bound up to factors in the exponent. Finally, in multi-player potential games, we show that fictitious play -- the extreme version of FTRL -- can take doubly exponential time to reach a Nash equilibrium. This constitutes an exponentially stronger lower bound for the foundational learning algorithm in games. | https://arxiv.org/abs/2601.23248 | Academic Papers | svg |
| fb5ea75b7cb479fbe8fac8b078b0ba9e8b39630ca05148d8a44dce5048ec4bdb | 2026-02-02T00:00:00-05:00 | Structured Over Scale: Learning Spatial Reasoning from Educational Video | arXiv:2601.23251v1 Announce Type: new Abstract: Vision-language models (VLMs) demonstrate impressive performance on standard video understanding benchmarks yet fail systematically on simple reasoning tasks that preschool children can solve, including counting, spatial reasoning, and compositional understanding. We hypothesize that the pedagogically-structured content of educational videos provides an ideal training signal for improving these capabilities. We introduce DoraVQA, a dataset of 5,344 question-answer pairs automatically extracted from 8 seasons of Dora the Explorer with precise timestamp alignment. Each episode follows a consistent \textit{context-question-pause-answer} structure that creates a self-contained learning environment analogous to interactive tutoring. We fine-tune both Qwen2 and Qwen3 using Group Relative Policy Optimization (GRPO), leveraging the clear correctness signals and structured reasoning traces inherent in educational content. Despite training exclusively on 38 hours of children's educational videos, our approach achieves improvements of 8-14 points on DoraVQA and state-of-the-art 86.16\% on CVBench, with strong transfer to Video-MME and NExT-QA, demonstrating effective generalization from narrow pedagogical content to broad multimodal understanding. Through cross-domain benchmarks, we show that VLMs can perform tasks that require robust reasoning learned from structured educational content, suggesting that content structure matters as much as content scale. | https://arxiv.org/abs/2601.23251 | Academic Papers | svg |
| 7beef4990dcfb6c568fe75874cf59c95939c6e649fe947dac303764b57d17ca9 | 2026-02-02T00:00:00-05:00 | Training-Free Test-Time Adaptation with Brownian Distance Covariance in Vision-Language Models | arXiv:2601.23253v1 Announce Type: new Abstract: Vision-language models suffer performance degradation under domain shift, limiting real-world applicability. Existing test-time adaptation methods are computationally intensive, rely on back-propagation, and often focus on single modalities. To address these issues, we propose Training-free Test-Time Adaptation with Brownian Distance Covariance (TaTa). TaTa leverages Brownian Distance Covariance, a powerful statistical measure that captures both linear and nonlinear dependencies via pairwise distances, to dynamically adapt VLMs to new domains without training or back-propagation. This not only improves efficiency but also enhances stability by avoiding disruptive weight updates. TaTa further integrates attribute-enhanced prompting to improve vision-language inference with descriptive visual cues. Combined with dynamic clustering and pseudo-label refinement, it effectively recalibrates the model for novel visual contexts. Experiments across diverse datasets show that TaTa significantly reduces computational cost while achieving state-of-the-art performance in domain and cross-dataset generalization. | https://arxiv.org/abs/2601.23253 | Academic Papers | svg |
| a8e8246f7ca8e6b4c484055e2c2173b9710a7990fbbe0c7e2371b8a819a2dae9 | 2026-02-02T00:00:00-05:00 | GrepRAG: An Empirical Study and Optimization of Grep-Like Retrieval for Code Completion | arXiv:2601.23254v1 Announce Type: new Abstract: Repository-level code completion remains challenging for large language models (LLMs) due to cross-file dependencies and limited context windows. Prior work addresses this challenge using Retrieval-Augmented Generation (RAG) frameworks based on semantic indexing or structure-aware graph analysis, but these approaches incur substantial computational overhead for index construction and maintenance. Motivated by common developer workflows that rely on lightweight search utilities (e.g., ripgrep), we revisit a fundamental yet underexplored question: how far can simple, index-free lexical retrieval support repository-level code completion before more complex retrieval mechanisms become necessary? To answer this question, we systematically investigate lightweight, index-free, intent-aware lexical retrieval through extensive empirical analysis. We first introduce Naive GrepRAG, a baseline framework in which LLMs autonomously generate ripgrep commands to retrieve relevant context. Despite its simplicity, Naive GrepRAG achieves performance comparable to sophisticated graph-based baselines. Further analysis shows that its effectiveness stems from retrieving lexically precise code fragments that are spatially closer to the completion site. We also identify key limitations of lexical retrieval, including sensitivity to noisy matches from high-frequency ambiguous keywords and context fragmentation caused by rigid truncation boundaries. To address these issues, we propose GrepRAG, which augments lexical retrieval with a lightweight post-processing pipeline featuring identifier-weighted re-ranking and structure-aware deduplication. Extensive evaluation on CrossCodeEval and RepoEval-Updated demonstrates that GrepRAG consistently outperforms state-of-the-art (SOTA) methods, achieving 7.04-15.58 percent relative improvement in code exact match (EM) over the best baseline on CrossCodeEval. | https://arxiv.org/abs/2601.23254 | Academic Papers | svg |
| 0d79c20919640bf15bd13adf1b272ad468b499487450bda301c549f853603f90 | 2026-02-02T00:00:00-05:00 | Now You Hear Me: Audio Narrative Attacks Against Large Audio-Language Models | arXiv:2601.23255v1 Announce Type: new Abstract: Large audio-language models increasingly operate on raw speech inputs, enabling more seamless integration across domains such as voice assistants, education, and clinical triage. This transition, however, introduces a distinct class of vulnerabilities that remain largely uncharacterized. We examine the security implications of this modality shift by designing a text-to-audio jailbreak that embeds disallowed directives within a narrative-style audio stream. The attack leverages an advanced instruction-following text-to-speech (TTS) model to exploit structural and acoustic properties, thereby circumventing safety mechanisms primarily calibrated for text. When delivered through synthetic speech, the narrative format elicits restricted outputs from state-of-the-art models, including Gemini 2.0 Flash, achieving a 98.26% success rate that substantially exceeds text-only baselines. These results highlight the need for safety frameworks that jointly reason over linguistic and paralinguistic representations, particularly as speech-based interfaces become more prevalent. | https://arxiv.org/abs/2601.23255 | Academic Papers | svg |
| 0181d726fde38b0c3da44f2a7f4adc841b8a542ec3884c005bee810ad0735b16 | 2026-02-02T00:00:00-05:00 | Outcome-Conditioned Reasoning Distillation for Resolving Software Issues | arXiv:2601.23257v1 Announce Type: new Abstract: Software issue resolution in large repositories is a long-range decision process: choices made during localization shape the space of viable edits, and missteps can compound into incorrect patches. Despite this, many LLM-based repair pipelines still operate in a reset-and-solve manner, producing fresh reasoning for every new issue instead of carrying forward what worked in past fixes. This is wasteful because repositories routinely contain earlier issues with overlapping structure, failure modes, or constraints, where prior repair experience could provide useful guidance. Existing approaches typically harvest this signal through forward-time trial procedures, such as repeated refinement or search, incurring high inference cost while still risking divergence from the eventual correct patch. We present an Outcome-Conditioned Reasoning Distillation (O-CRD) framework that uses resolved in-repository issues with verified patches as supervision. Starting from a historical fix, the method reconstructs a stage-wise repair trace backward from the verified outcome, then reuses the distilled guidance at inference time to steer file/function localization and patch synthesis, without fine-tuning or online search. On SWE-Bench Lite, this approach increases Pass@1 by 10.4% with GPT-4o, 8.6% with DeepSeek-V3, and 10.3% with GPT-5, indicating that outcome-conditioned reuse of verified repairs can replace costly forward exploration for software issue resolution. | https://arxiv.org/abs/2601.23257 | Academic Papers | svg |
| e02c7427248db2f989b2530bcf879e72d377e0b2e92419ce4452b09cc210413c | 2026-02-02T00:00:00-05:00 | Agnostic Language Identification and Generation | arXiv:2601.23258v1 Announce Type: new Abstract: Recent works on language identification and generation have established tight statistical rates at which these tasks can be achieved. These works typically operate under a strong realizability assumption: that the input data is drawn from an unknown distribution necessarily supported on some language in a given collection. In this work, we relax this assumption of realizability entirely, and impose no restrictions on the distribution of the input data. We propose objectives to study both language identification and generation in this more general "agnostic" setup. Across both problems, we obtain novel interesting characterizations and nearly tight rates. | https://arxiv.org/abs/2601.23258 | Academic Papers | svg |
| 19285de5350025b6f0004a2db1ed86e861d28a9e75a86987f6c17595e58ebd65 | 2026-02-02T00:00:00-05:00 | TEON: Tensorized Orthonormalization Beyond Layer-Wise Muon for Large Language Model Pre-Training | arXiv:2601.23261v1 Announce Type: new Abstract: The Muon optimizer has demonstrated strong empirical performance in pre-training large language models by performing matrix-level gradient (or momentum) orthogonalization in each layer independently. In this work, we propose TEON, a principled generalization of Muon that extends orthogonalization beyond individual layers by modeling the gradients of a neural network as a structured higher-order tensor. We present TEON's improved convergence guarantee over layer-wise Muon, and further develop a practical instantiation of TEON based on the theoretical analysis with corresponding ablation. We evaluate our approach on two widely adopted architectures: GPT-style models, ranging from 130M to 774M parameters, and LLaMA-style models, ranging from 60M to 1B parameters. Experimental results show that TEON consistently improves training and validation perplexity across model scales and exhibits strong robustness under various approximate SVD schemes. | https://arxiv.org/abs/2601.23261 | Academic Papers | svg |
| 691c7e7b6d247cd13300b09fa8982c156131f59a7cf6556036540cfad0635675 | 2026-02-02T00:00:00-05:00 | Particle-Guided Diffusion Models for Partial Differential Equations | arXiv:2601.23262v1 Announce Type: new Abstract: We introduce a guided stochastic sampling method that augments sampling from diffusion models with physics-based guidance derived from partial differential equation (PDE) residuals and observational constraints, ensuring generated samples remain physically admissible. We embed this sampling procedure within a new Sequential Monte Carlo (SMC) framework, yielding a scalable generative PDE solver. Across multiple benchmark PDE systems as well as multiphysics and interacting PDE systems, our method produces solution fields with lower numerical error than existing state-of-the-art generative methods. | https://arxiv.org/abs/2601.23262 | Academic Papers | svg |
| 0a0c322f4e97d44cd6c13e01440005ce2b4ce0f70c1f1d1e15db1cf401005bc8 | 2026-02-02T00:00:00-05:00 | PaperBanana: Automating Academic Illustration for AI Scientists | arXiv:2601.23265v1 Announce Type: new Abstract: Despite rapid advances in autonomous AI scientists powered by language models, generating publication-ready illustrations remains a labor-intensive bottleneck in the research workflow. To lift this burden, we introduce PaperBanana, an agentic framework for automated generation of publication-ready academic illustrations. Powered by state-of-the-art VLMs and image generation models, PaperBanana orchestrates specialized agents to retrieve references, plan content and style, render images, and iteratively refine via self-critique. To rigorously evaluate our framework, we introduce PaperBananaBench, comprising 292 test cases for methodology diagrams curated from NeurIPS 2025 publications, covering diverse research domains and illustration styles. Comprehensive experiments demonstrate that PaperBanana consistently outperforms leading baselines in faithfulness, conciseness, readability, and aesthetics. We further show that our method effectively extends to the generation of high-quality statistical plots. Collectively, PaperBanana paves the way for the automated generation of publication-ready illustrations. | https://arxiv.org/abs/2601.23265 | Academic Papers | svg |
| 68f0b19baf3bc7f9edf264789a0b74877edf6a8420e9ac4383aedc05b81334b3 | 2026-02-02T00:00:00-05:00 | IRL-DAL: Safe and Adaptive Trajectory Planning for Autonomous Driving via Energy-Guided Diffusion Models | arXiv:2601.23266v1 Announce Type: new Abstract: This paper proposes a novel inverse reinforcement learning framework using a diffusion-based adaptive lookahead planner (IRL-DAL) for autonomous vehicles. Training begins with imitation from an expert finite state machine (FSM) controller to provide a stable initialization. Environment terms are combined with an IRL discriminator signal to align with expert goals. Reinforcement learning (RL) is then performed with a hybrid reward that combines diffuse environmental feedback and targeted IRL rewards. A conditional diffusion model, which acts as a safety supervisor, plans safe paths that stay in lane, avoid obstacles, and move smoothly. Then, a learnable adaptive mask (LAM) improves perception by shifting visual attention based on vehicle speed and nearby hazards. After FSM-based imitation, the policy is fine-tuned with Proximal Policy Optimization (PPO). Training is run in the Webots simulator with a two-stage curriculum. A 96\% success rate is reached, and collisions are reduced to 0.05 per 1k steps, marking a new benchmark for safe navigation. By applying the proposed approach, the agent not only drives in lane but also handles unsafe conditions at an expert level, increasing robustness. We make our code publicly available. | https://arxiv.org/abs/2601.23266 | Academic Papers | svg |
| d159b7a4c306bd22656cc2dd88e66ba1b4996b228e9c998ebfda76eddd0b2190 | 2026-02-02T00:00:00-05:00 | TCBench: A Benchmark for Tropical Cyclone Track and Intensity Forecasting at the Global Scale | arXiv:2601.23268v1 Announce Type: new Abstract: TCBench is a benchmark for evaluating global, short to medium-range (1-5 days) forecasts of tropical cyclone (TC) track and intensity. To allow a fair and model-agnostic comparison, TCBench builds on the IBTrACS observational dataset and formulates TC forecasting as predicting the time evolution of an existing tropical system conditioned on its initial position and intensity. TCBench includes state-of-the-art dynamical (TIGGE) and neural weather models (AIFS, Pangu-Weather, FourCastNet v2, GenCast). If not readily available, baseline tracks are consistently derived from model outputs using the TempestExtremes library. For evaluation, TCBench provides deterministic and probabilistic storm-following metrics. On 2023 test cases, neural weather models skillfully forecast TC tracks, while skillful intensity forecasts require additional steps such as post-processing. Designed for accessibility, TCBench helps AI practitioners tackle domain-relevant TC challenges and equips tropical meteorologists with data-driven tools and workflows to improve prediction and TC process understanding. By lowering barriers to reproducible, process-aware evaluation of extreme events, TCBench aims to democratize data-driven TC forecasting. | https://arxiv.org/abs/2601.23268 | Academic Papers | svg |
| 50f1b86c34c3295347834043f7fc15d129ce1a33f2d8f20ee2b24f11c60c0f8c | 2026-02-02T00:00:00-05:00 | Rank Reduction AutoEncoders for Mechanical Design: Advancing Novel and Efficient Data-Driven Topology Optimization | arXiv:2601.23269v1 Announce Type: new Abstract: This work presents a data-driven framework for fast forward and inverse analysis in topology optimization (TO) by combining Rank Reduction Autoencoders (RRAEs) with neural latent-space mappings. The methodology targets the efficient approximation of the relationship between optimized geometries and their corresponding mechanical responses or Quantity of Interest (QoI), with a particular focus on compliance-minimized linear elastic structures. High-dimensional TO results are first compressed using RRAEs, which encode the data into a low-rank approximation via Singular Value Decomposition (SVD), obtaining in this way the most important features that approximate the data. Separate RRAE models are trained for geometry and for different types of QoIs, including scalar metrics, one-dimensional stress fields, and full two-dimensional von Mises stress distributions. The resulting low-dimensional latent coefficients are then related through multilayer perceptrons to address both direct problems -- predicting structural responses from geometry -- and inverse problems -- recovering geometries from prescribed performance targets. The proposed approach is demonstrated on a benchmark TO problem based on a half MBB beam, using datasets generated via density-based Solid Isotropic Material with Penalization (SIMP) optimization. Numerical results show that the framework enables accurate and computationally efficient surrogate models, with increasing robustness and fidelity as richer QoIs are considered. The methodology also provides a foundation for generative mechanical design by enabling the synthesis of new geometries and responses through latent-space exploration. | https://arxiv.org/abs/2601.23269 | Academic Papers | svg |
| 51de9f834f1c72bc36dad89a70554868f9eff712c4961c851e9beb358e0add92 | 2026-02-02T00:00:00-05:00 | UPA: Unsupervised Prompt Agent via Tree-Based Search and Selection | arXiv:2601.23273v1 Announce Type: new Abstract: Prompt agents have recently emerged as a promising paradigm for automated prompt optimization, framing refinement as a sequential decision-making problem over a structured prompt space. While this formulation enables the use of advanced planning algorithms, these methods typically assume access to supervised reward signals, which are often unavailable in practical scenarios. In this work, we propose UPA, an Unsupervised Prompt Agent that realizes structured search and selection without relying on supervised feedback. Specifically, during search, UPA iteratively constructs an evolving tree structure to navigate the prompt space, guided by fine-grained and order-invariant pairwise comparisons from Large Language Models (LLMs). Crucially, as these local comparisons do not inherently yield a consistent global scale, we decouple systematic prompt exploration from final selection, introducing a two-stage framework grounded in the Bradley-Terry-Luce (BTL) model. This framework first performs path-wise Bayesian aggregation of local comparisons to filter candidates under uncertainty, followed by global tournament-style comparisons to infer latent prompt quality and identify the optimal prompt. Experiments across multiple tasks demonstrate that UPA consistently outperforms existing prompt optimization methods, showing that agent-style optimization remains highly effective even in fully unsupervised settings. | https://arxiv.org/abs/2601.23273 | Academic Papers | svg |
| 08accf25c5bcc689d66879228d692b0a6c83db842aab46dda3875e5e938a3a59 | 2026-02-02T00:00:00-05:00 | FOCUS: DLLMs Know How to Tame Their Compute Bound | arXiv:2601.23278v1 Announce Type: new Abstract: Diffusion Large Language Models (DLLMs) offer a compelling alternative to Auto-Regressive models, but their deployment is constrained by high decoding cost. In this work, we identify a key inefficiency in DLLM decoding: while computation is parallelized over token blocks, only a small subset of tokens is decodable at each diffusion step, causing most compute to be wasted on non-decodable tokens. We further observe a strong correlation between attention-derived token importance and token-wise decoding probability. Based on this insight, we propose FOCUS -- an inference system designed for DLLMs. By dynamically focusing computation on decodable tokens and evicting non-decodable ones on-the-fly, FOCUS increases the effective batch size, alleviating compute limitations and enabling scalable throughput. Empirical evaluations demonstrate that FOCUS achieves up to 3.52$\times$ throughput improvement over the production-grade engine LMDeploy, while preserving or improving generation quality across multiple benchmarks. The FOCUS system is publicly available on GitHub: https://github.com/sands-lab/FOCUS. | https://arxiv.org/abs/2601.23278 | Academic Papers | svg |
| 31fa2355ce6e55c774bdf4d3b0dd459ed9d7817bd930ddf53e292b9ab4a20b45 | 2026-02-02T00:00:00-05:00 | Decoupled Diffusion Sampling for Inverse Problems on Function Spaces | arXiv:2601.23280v1 Announce Type: new Abstract: We propose a data-efficient, physics-aware generative framework in function space for inverse PDE problems. Existing plug-and-play diffusion posterior samplers represent physics implicitly through joint coefficient-solution modeling, requiring substantial paired supervision. In contrast, our Decoupled Diffusion Inverse Solver (DDIS) employs a decoupled design: an unconditional diffusion learns the coefficient prior, while a neural operator explicitly models the forward PDE for guidance. This decoupling enables superior data efficiency and effective physics-informed learning, while naturally supporting Decoupled Annealing Posterior Sampling (DAPS) to avoid over-smoothing in Diffusion Posterior Sampling (DPS). Theoretically, we prove that DDIS avoids the guidance attenuation failure of joint models when training data is scarce. Empirically, DDIS achieves state-of-the-art performance under sparse observation, improving $l_2$ error by 11% and spectral error by 54% on average; when data is limited to 1%, DDIS maintains accuracy with 40% advantage in $l_2$ error compared to joint models. | https://arxiv.org/abs/2601.23280 | Academic Papers | svg |
| 7dd5b1dcc1c49180886b18c726bf9bc7106466385acd934822f2c332ee186128 | 2026-02-02T00:00:00-05:00 | User Prompting Strategies and Prompt Enhancement Methods for Open-Set Object Detection in XR Environments | arXiv:2601.23281v1 Announce Type: new Abstract: Open-set object detection (OSOD) localizes objects while identifying and rejecting unknown classes at inference. While recent OSOD models perform well on benchmarks, their behavior under realistic user prompting remains underexplored. In interactive XR settings, user-generated prompts are often ambiguous, underspecified, or overly detailed. To study prompt-conditioned robustness, we evaluate two OSOD models, GroundingDINO and YOLO-E, on real-world XR images and simulate diverse user prompting behaviors using vision-language models. We consider four prompt types: standard, underdetailed, overdetailed, and pragmatically ambiguous, and examine the impact of two enhancement strategies on these prompts. Results show that both models exhibit stable performance under underdetailed and standard prompts, while they suffer degradation under ambiguous prompts. Overdetailed prompts primarily affect GroundingDINO. Prompt enhancement substantially improves robustness under ambiguity, yielding gains exceeding 55% mIoU and 41% average confidence. Based on the findings, we propose several prompting strategies and prompt enhancement methods for OSOD models in XR environments. | https://arxiv.org/abs/2601.23281 | Academic Papers | svg |
| 427b6b178cb70d409513e191fc0123ee8375d56ba73493c68b620af850eabad8 | 2026-02-02T00:00:00-05:00 | End-to-end Optimization of Belief and Policy Learning in Shared Autonomy Paradigms | arXiv:2601.23285v1 Announce Type: new Abstract: Shared autonomy systems require principled methods for inferring user intent and determining appropriate assistance levels. This is a central challenge in human-robot interaction, where systems must be successful while being mindful of user agency. Previous approaches relied on static blending ratios or separated goal inference from assistance arbitration, leading to suboptimal performance in unstructured environments. We introduce BRACE (Bayesian Reinforcement Assistance with Context Encoding), a novel framework that fine-tunes Bayesian intent inference and context-adaptive assistance through an architecture enabling end-to-end gradient flow between intent inference and assistance arbitration. Our pipeline conditions collaborative control policies on environmental context and complete goal probability distributions. We provide analysis showing (1) optimal assistance levels should decrease with goal uncertainty and increase with environmental constraint severity, and (2) integrating belief information into policy learning yields a quadratic expected regret advantage over sequential approaches. We validated our algorithm against SOTA methods (IDA, DQN) using a three-part evaluation progressively isolating distinct challenges of end-effector control: (1) core human-interaction dynamics in a 2D human-in-the-loop cursor task, (2) non-linear dynamics of a robotic arm, and (3) integrated manipulation under goal ambiguity and environmental constraints. We demonstrate improvements over SOTA, achieving 6.3% higher success rates and 41% increased path efficiency, as well as a 36.3% success-rate and 87% path-efficiency improvement over unassisted control. Our results confirmed that integrated optimization is most beneficial in complex, goal-ambiguous scenarios, and is generalizable across robotic domains requiring goal-directed assistance, advancing the SOTA for adaptive shared autonomy. | https://arxiv.org/abs/2601.23285 | Academic Papers | svg |
| b4612cb41a8007f5292ae225da280529a7636a505b47f0bdfed2db33615f11b9 | 2026-02-02T00:00:00-05:00 | VideoGPA: Distilling Geometry Priors for 3D-Consistent Video Generation | arXiv:2601.23286v1 Announce Type: new Abstract: While recent video diffusion models (VDMs) produce visually impressive results, they fundamentally struggle to maintain 3D structural consistency, often resulting in object deformation or spatial drift. We hypothesize that these failures arise because standard denoising objectives lack explicit incentives for geometric coherence. To address this, we introduce VideoGPA (Video Geometric Preference Alignment), a data-efficient self-supervised framework that leverages a geometry foundation model to automatically derive dense preference signals that guide VDMs via Direct Preference Optimization (DPO). This approach effectively steers the generative distribution toward inherent 3D consistency without requiring human annotations. VideoGPA significantly enhances temporal stability, physical plausibility, and motion coherence using minimal preference pairs, consistently outperforming state-of-the-art baselines in extensive experiments. | https://arxiv.org/abs/2601.23286 | Academic Papers | svg |
2bf1247eb48e36797ef254a3df09d023521d03d5f8f6897a3ebf8acc88f11249
|
2026-02-02T00:00:00-05:00
|
Smart Routing with Precise Link Estimation: DSEE-Based Anypath Routing for Reliable Wireless Networking
|
arXiv:2405.10377v1 Announce Type: cross Abstract: In dynamic and resource-constrained environments, such as multi-hop wireless mesh networks, traditional routing protocols often falter by relying on predetermined paths that prove ineffective in unpredictable link conditions. Shortest Anypath routing offers a solution by adapting routing decisions based on real-time link conditions. However, the effectiveness of such routing is fundamentally dependent on the quality and reliability of the available links, and predicting these variables with certainty is challenging. This paper introduces a novel approach that leverages the Deterministic Sequencing of Exploration and Exploitation (DSEE), a multi-armed bandit algorithm, to address the need for accurate and real-time estimation of link delivery probabilities. This approach augments the reliability and resilience of the Shortest Anypath routing in the face of fluctuating link conditions. By coupling DSEE with Anypath routing, this algorithm continuously learns and ensures accurate delivery probability estimation and selects the most suitable way to efficiently route packets while maintaining a provable near-logarithmic regret bound. We also theoretically prove that our proposed scheme offers better regret scaling with respect to the network size than the previously proposed Thompson Sampling-based Opportunistic Routing (TSOR).
|
https://arxiv.org/abs/2405.10377
|
Academic Papers
|
svg
|
2cb8cdeec265911ed570153c2633c783622a768a91276e6d578fbebe21243782
|
2026-02-02T00:00:00-05:00
|
Deep Lightweight Unrolled Network for High Dynamic Range Modulo Imaging
|
arXiv:2601.12526v1 Announce Type: cross Abstract: Modulo-Imaging (MI) offers a promising alternative for expanding the dynamic range of images by resetting the signal intensity when it reaches the saturation level. Consequently, high-dynamic range (HDR) modulo imaging requires a recovery process to obtain the HDR image. MI is a non-convex and ill-posed problem where recent recovery networks suffer in high-noise scenarios. In this work, we formulate the HDR reconstruction task as an optimization problem that incorporates a deep prior and subsequently unrolls it into an optimization-inspired deep neural network. The network employs a lightweight convolutional denoiser for fast inference with minimal computational overhead, effectively recovering intensity values while mitigating noise. Moreover, we introduce the Scaling Equivariance term that facilitates self-supervised fine-tuning, thereby enabling the model to adapt to new modulo images that fall outside the original training distribution. Extensive evaluations demonstrate the superiority of our method compared to state-of-the-art recovery algorithms in terms of performance and quality.
|
https://arxiv.org/abs/2601.12526
|
Academic Papers
|
svg
|
d0e735909ca49f1cb7724acba3acfc14c3e64e0c225f3f73ebe04ae82d423f05
|
2026-02-02T00:00:00-05:00
|
Formalization of non-Archimedean functional analysis 1: spherically complete spaces
|
arXiv:2601.21734v1 Announce Type: cross Abstract: In this article, we present a formalization of spherically complete spaces, which is a fundamental notion in non-archimedean functional analysis. This work includes the equivalent definitions of spherically complete spaces, their basic properties, examples and non-examples such as the field $\mathbf{C}_p$ of $p$-adic complex numbers. As applications, we formalize the Birkhoff-James orthogonality, Hahn-Banach extension theorem and the spherical completion for non-archimedean Banach spaces. Code available at https://github.com/YijunYuan/SphericalCompleteness
|
https://arxiv.org/abs/2601.21734
|
Academic Papers
|
svg
|
dacb6051b5f7b700bcba29afdffd288519917ecee381c13df6df4d7c21e40e3a
|
2026-02-02T00:00:00-05:00
|
UniFinEval: Towards Unified Evaluation of Financial Multimodal Models across Text, Images and Videos
|
arXiv:2601.22162v1 Announce Type: cross Abstract: Multimodal large language models are playing an increasingly significant role in empowering the financial domain; however, the challenges they face, such as multimodal and high-density information and cross-modal multi-hop reasoning, go beyond the evaluation scope of existing multimodal benchmarks. To address this gap, we propose UniFinEval, the first unified multimodal benchmark designed for high-information-density financial environments, covering text, images, and videos. UniFinEval systematically constructs five core financial scenarios grounded in real-world financial systems: Financial Statement Auditing, Company Fundamental Reasoning, Industry Trend Insights, Financial Risk Sensing, and Asset Allocation Analysis. We manually construct a high-quality dataset consisting of 3,767 question-answer pairs in both Chinese and English and systematically evaluate 10 mainstream MLLMs under Zero-Shot and CoT settings. Results show that Gemini-3-pro-preview achieves the best overall performance, yet still exhibits a substantial gap compared to financial experts. Further error analysis reveals systematic deficiencies in current models. UniFinEval aims to provide a systematic assessment of MLLMs' capabilities in fine-grained, high-information-density financial environments, thereby enhancing the robustness of MLLM applications in real-world financial scenarios. Data and code are available at https://github.com/aifinlab/UniFinEval.
|
https://arxiv.org/abs/2601.22162
|
Academic Papers
|
svg
|
83c50ffe8b13bca287fbe4b8e2c846fddc268d6fe1e3b9d83214bd6e957bc93b
|
2026-02-02T00:00:00-05:00
|
Stablecoin Design with Adversarial-Robust Multi-Agent Systems via Trust-Weighted Signal Aggregation
|
arXiv:2601.22168v1 Announce Type: cross Abstract: Algorithmic stablecoins promise decentralized monetary stability by maintaining a target peg through programmatic reserve management. Yet, their reserve controllers remain vulnerable to regime-blind optimization, calibrating risk parameters on fair-weather data while ignoring tail events that precipitate cascading failures. The March 2020 Black Thursday collapse, wherein MakerDAO's collateral auctions yielded $8.3M in losses and a 15% peg deviation, exposed a critical gap: existing models like SAS systematically omit extreme volatility regimes from covariance estimates, producing allocations optimal in expectation but catastrophic under adversarial stress. We present MVF-Composer, a trust-weighted Mean-Variance Frontier reserve controller incorporating a novel Stress Harness for risk-state estimation. Our key insight is deploying multi-agent simulations as adversarial stress-testers: heterogeneous agents (traders, liquidity providers, attackers) execute protocol actions under crisis scenarios, exposing reserve vulnerabilities before they manifest on-chain. We formalize a trust-scoring mechanism T: A -> [0,1] that down-weights signals from agents exhibiting manipulative behavior, ensuring the risk-state estimator remains robust to signal injection and Sybil attacks. Across 1,200 randomized scenarios with injected Black-Swan shocks (10% collateral drawdown, 50% sentiment collapse, coordinated redemption attacks), MVF-Composer reduces peak peg deviation by 57% and mean recovery time by 3.1x relative to SAS baselines. Ablation studies confirm the trust layer accounts for 23% of stability gains under adversarial conditions, achieving 72% adversarial agent detection. Our system runs on commodity hardware, requires no on-chain oracles beyond standard price feeds, and provides a reproducible framework for stress-testing DeFi reserve policies.
|
https://arxiv.org/abs/2601.22168
|
Academic Papers
|
svg
|
e459246573e6defa0197581e5f5bec659148bf5fb7e05c0c6220b7dbfe07108c
|
2026-02-02T00:00:00-05:00
|
Proliferating series by Jean Barraqu\'e: a study and classification in mathematical terms
|
arXiv:2601.22176v1 Announce Type: cross Abstract: Barraqu\'e's proliferating series give an interesting turn on the concept of classic serialism by creating a new invariant when it comes to constructing the series: rather than the intervals between consecutive notes, what remains unaltered during the construction of the proliferations of the given base series is the permutation of the notes which happens between two consecutive series, that is to say, the transformation of the order of the notes in the series. This presents new possibilities for composers interested in the serial method, given the fact that the variety of intervals obtained by this method is far greater than that of classic serialism. In this manuscript, we will study some unexplored possibilities that the proliferating series offer from a mathematical point of view, which will allow composers to gain much more familiarity with them and potentially result in the creation of pieces that take serialism to the next level.
|
https://arxiv.org/abs/2601.22176
|
Academic Papers
|
svg
|
c04f3c682cc220ea660c53724139c9ab303845a1dc386e8aee942d7ef3a67219
|
2026-02-02T00:00:00-05:00
|
SCENE: Semantic-aware Codec Enhancement with Neural Embeddings
|
arXiv:2601.22189v1 Announce Type: cross Abstract: Compression artifacts from standard video codecs often degrade perceptual quality. We propose a lightweight, semantic-aware pre-processing framework that enhances perceptual fidelity by selectively addressing these distortions. Our method integrates semantic embeddings from a vision-language model into an efficient convolutional architecture, prioritizing the preservation of perceptually significant structures. The model is trained end-to-end with a differentiable codec proxy, enabling it to mitigate artifacts from various standard codecs without modifying the existing video pipeline. During inference, the codec proxy is discarded, and SCENE operates as a standalone pre-processor, enabling real-time performance. Experiments on high-resolution benchmarks show improved performance over baselines in both objective (MS-SSIM) and perceptual (VMAF) metrics, with notable gains in preserving detailed textures within salient regions. Our results show that semantic-guided, codec-aware pre-processing is an effective approach for enhancing compressed video streams.
|
https://arxiv.org/abs/2601.22189
|
Academic Papers
|
svg
|
313055fe2bfe7822f04cf78bbd091e92d88378ef80b91906b86f7f0b3e334bc0
|
2026-02-02T00:00:00-05:00
|
Practical Evaluation of Quantum Kernel Methods for Radar Micro-Doppler Classification on Noisy Intermediate-Scale Quantum (NISQ) Hardware
|
arXiv:2601.22194v1 Announce Type: cross Abstract: This paper examines the application of a Quantum Support Vector Machine (QSVM) for radar-based aerial target classification using micro-Doppler signatures. Classical features are extracted and reduced via Principal Component Analysis (PCA) to enable efficient quantum encoding. The reduced feature vectors are embedded into a quantum kernel-induced feature space using a fully entangled ZZFeatureMap and classified using a kernel-based QSVM. Performance is first evaluated on a quantum simulator and subsequently validated on NISQ-era superconducting quantum hardware, specifically the IBM Torino (133-qubit) and IBM Fez (156-qubit) processors. Experimental results demonstrate that the QSVM achieves competitive classification performance relative to classical SVM baselines while operating on substantially reduced feature dimensionality. Hardware experiments reveal the impact of noise, decoherence, and measurement shot count on quantum kernel estimation, and further show improved stability and fidelity on the newer Heron r2 architecture. This study provides a systematic comparison between simulator-based and hardware-based QSVM implementations and highlights both the feasibility and current limitations of deploying quantum kernel methods for practical radar signal classification tasks.
|
https://arxiv.org/abs/2601.22194
|
Academic Papers
|
svg
|
48f56311c7cdde359f3d00ffa6ccbd8d03fd3cfc7762690a033bd5a7f86e3276
|
2026-02-02T00:00:00-05:00
|
Adaptive Benign Overfitting (ABO): Overparameterized RLS for Online Learning in Non-stationary Time-series
|
arXiv:2601.22200v1 Announce Type: cross Abstract: Overparameterized models have recently challenged conventional learning theory by exhibiting improved generalization beyond the interpolation limit, a phenomenon known as benign overfitting. This work introduces Adaptive Benign Overfitting (ABO), extending the recursive least-squares (RLS) framework to this regime through a numerically stable formulation based on orthogonal-triangular updates. A QR-based exponentially weighted RLS (QR-EWRLS) algorithm is introduced, combining random Fourier feature mappings with forgetting-factor regularization to enable online adaptation under non-stationary conditions. The orthogonal decomposition prevents the numerical divergence associated with covariance-form RLS while retaining adaptability to evolving data distributions. Experiments on nonlinear synthetic time series confirm that the proposed approach maintains bounded residuals and stable condition numbers while reproducing the double-descent behavior characteristic of overparameterized models. Applications to forecasting foreign exchange and electricity demand show that ABO is highly accurate (comparable to baseline kernel methods) while achieving speed improvements of between 20 and 40 percent. The results provide a unified view linking adaptive filtering, kernel approximation, and benign overfitting within a stable online learning framework.
|
https://arxiv.org/abs/2601.22200
|
Academic Papers
|
svg
|
f9d95c3ed97f0dacfda938692084cad2c22d19c73278efd0214e64fb95a2a4ba
|
2026-02-02T00:00:00-05:00
|
A Survey on Semantic Communication for Vision: Categories, Frameworks, Enabling Techniques, and Applications
|
arXiv:2601.22202v1 Announce Type: cross Abstract: Semantic communication (SemCom) emerges as a transformative paradigm for traffic-intensive visual data transmission, shifting focus from raw data to meaningful content transmission and relieving the increasing pressure on communication resources. However, to achieve SemCom, challenges are faced in accurate semantic quantization for visual data, robust semantic extraction and reconstruction under diverse tasks and goals, transceiver coordination with effective knowledge utilization, and adaptation to unpredictable wireless communication environments. In this paper, we present a systematic review of SemCom for visual data transmission (SemCom-Vision), wherein an interdisciplinary analysis integrating computer vision (CV) and communication engineering is conducted to provide comprehensive guidelines for the machine learning (ML)-empowered SemCom-Vision design. Specifically, this survey first elucidates the basics and key concepts of SemCom. Then, we introduce a novel classification perspective to categorize existing SemCom-Vision approaches as semantic preservation communication (SPC), semantic expansion communication (SEC), and semantic refinement communication (SRC) based on communication goals interpreted through semantic quantization schemes. Moreover, this survey articulates the ML-based encoder-decoder models and training algorithms for each SemCom-Vision category, followed by knowledge structure and utilization strategies. Finally, we discuss potential SemCom-Vision applications.
|
https://arxiv.org/abs/2601.22202
|
Academic Papers
|
svg
|
14a961628ede23bd0ebf5795e5ba3b72ff00b58f3337ab9c52944ab0d4e43cb7
|
2026-02-02T00:00:00-05:00
|
Beyond Conditional Computation: Retrieval-Augmented Genomic Foundation Models with Gengram
|
arXiv:2601.22203v1 Announce Type: cross Abstract: Current genomic foundation models (GFMs) rely on extensive neural computation to implicitly approximate conserved biological motifs from single-nucleotide inputs. We propose Gengram, a conditional memory module that introduces an explicit and highly efficient lookup primitive for multi-base motifs via a genomic-specific hashing scheme, establishing genomic "syntax". Integrated into the backbone of state-of-the-art GFMs, Gengram achieves substantial gains (up to 14%) across several functional genomics tasks. The module demonstrates robust architectural generalization, while further inspection of Gengram's latent space reveals the emergence of meaningful representations that align closely with fundamental biological knowledge. By establishing structured motif memory as a modeling primitive, Gengram simultaneously boosts empirical performance and mechanistic interpretability, providing a scalable and biology-aligned pathway for the next generation of GFMs. The code is available at https://github.com/zhejianglab/Genos, and the model checkpoint is available at https://huggingface.co/ZhejiangLab/Gengram.
|
https://arxiv.org/abs/2601.22203
|
Academic Papers
|
svg
|
ef23ada01575e03cb45a7efffaf8033b892fe8a1ea260e942b3c4f25a07a571b
|
2026-02-02T00:00:00-05:00
|
Transitive Sets of Mutually Orthogonal Latin Squares
|
arXiv:2601.22205v1 Announce Type: cross Abstract: We investigate MacNeish's conjecture (known to be false in general) in the setting of what we call "transitive" Mutually Orthogonal Latin Squares (MOLS). When we restrict our attention to "simply transitive" MOLS, we find that the conjecture holds. We provide some partial results towards the transitive case, as well as the outcome of a computer search, which introduces a new construction of MOLS. In particular, we were unable to find any transitive large (conjecture-violating) sets of MOLS in the literature.
|
https://arxiv.org/abs/2601.22205
|
Academic Papers
|
svg
|
0194d9c46043e8ece845b6123973bbadb719696a545879e9259b277c34d26629
|
2026-02-02T00:00:00-05:00
|
Forecasting in the presence of scale-free noise
|
arXiv:2601.22294v1 Announce Type: cross Abstract: The extraction of signals from noise is a common problem in all areas of science and engineering. A particularly useful version is that of forecasting: determining a causal filter that estimates a future value of a hidden process from past observations. Current techniques for deriving the filter require that the noise be well described by rational power spectra. However, scale-free noises, whose spectra scale as a non-integer power of frequency, are ubiquitous in practice. We establish a method, together with performance guarantees, that solves the forecasting problem in the presence of scale-free noise. Via the duality between estimation and control, our technique can be used to design control for distributed systems. These results will have wide-ranging applications in neuroscience, finance, fluid dynamics, and quantum measurements.
|
https://arxiv.org/abs/2601.22294
|
Academic Papers
|
svg
|
2c140e4ec178f20ba53c459690a64d7ec9607ae8e9dcadbb422a80e6bbda96a7
|
2026-02-02T00:00:00-05:00
|
Sylber 2.0: A Universal Syllable Embedding
|
arXiv:2601.22306v1 Announce Type: cross Abstract: Scaling spoken language modeling requires speech tokens that are both efficient and universal. Recent work has proposed syllables as promising speech tokens at low temporal resolution, but existing models are constrained to English and fail to capture sufficient acoustic detail. To address this gap, we present Sylber 2.0, a self-supervised framework for coding speech at the syllable level that enables efficient temporal compression and high-fidelity reconstruction. Sylber 2.0 achieves a very low token frequency around 5 Hz, while retaining both linguistic and acoustic detail across multiple languages and expressive styles. Experiments show that it performs on par with previous models operating on high-frequency baselines. Furthermore, Sylber 2.0 enables efficient TTS modeling, generating speech with intelligibility and quality competitive with SOTA models using only 72M parameters. Moreover, the universality of Sylber 2.0 provides more effective features for low-resource ASR than previous speech coding frameworks. In sum, we establish an effective syllable-level abstraction for general spoken language.
|
https://arxiv.org/abs/2601.22306
|
Academic Papers
|
svg
|
61d02f0e9ffafccae37ec69afc56506fe2c420542ea714e292ec99aeb0ad482e
|
2026-02-02T00:00:00-05:00
|
Dependence-Aware Label Aggregation for LLM-as-a-Judge via Ising Models
|
arXiv:2601.22336v1 Announce Type: cross Abstract: Large-scale AI evaluation increasingly relies on aggregating binary judgments from $K$ annotators, including LLMs used as judges. Most classical methods, e.g., Dawid-Skene or (weighted) majority voting, assume annotators are conditionally independent given the true label $Y\in\{0,1\}$, an assumption often violated by LLM judges due to shared data, architectures, prompts, and failure modes. Ignoring such dependencies can yield miscalibrated posteriors and even confidently incorrect predictions. We study label aggregation through a hierarchy of dependence-aware models based on Ising graphical models and latent factors. For class-dependent Ising models, the Bayes log-odds is generally quadratic in votes; for class-independent couplings, it reduces to a linear weighted vote with correlation-adjusted parameters. We present finite-$K$ examples showing that methods based on conditional independence can flip the Bayes label despite matching per-annotator marginals. We prove separation results demonstrating that these methods remain strictly suboptimal as the number of judges grows, incurring nonvanishing excess risk under latent factors. Finally, we evaluate the proposed method on three real-world datasets, demonstrating improved performance over the classical baselines.
|
https://arxiv.org/abs/2601.22336
|
Academic Papers
|
svg
|
457a144c9faca7284410a69fad19f1226a4f146e972db9a457132cc29d2a8f3e
|
2026-02-02T00:00:00-05:00
|
Quaternionic Perfect Sequences and Hadamard Matrices
|
arXiv:2601.22337v1 Announce Type: cross Abstract: A finite sequence of numbers is perfect if it has zero periodic autocorrelation after a nontrivial cyclic shift. In this work, we study quaternionic perfect sequences having a one-to-one correspondence with the binary sequences arising in Williamson's construction of quaternion-type Hadamard matrices. Using this correspondence, we devise an enumeration algorithm that is significantly faster than previously used algorithms and does not require the sequences to be symmetric. We implement our algorithm and use it to enumerate all circulant and possibly non-symmetric Williamson-type matrices of orders up to 21; previously, the largest order exhaustively enumerated was 13. We prove that when the blocks of a quaternion-type Hadamard matrix are circulant, the blocks are necessarily pairwise amicable. This dramatically improves the filtering power of our algorithm: in order 20, the number of block pairs needing consideration is reduced by a factor of over 25,000. We use our results to construct quaternionic Hadamard matrices of interest in quantum communication and prove they are not equivalent to those constructed by other means. We also study the properties of quaternionic Hadamard matrices analytically, and demonstrate the feasibility of characterizing quaternionic Hadamard matrices with a fixed pattern of entries. These results indicate a richer set of properties and suggest an abundance of quaternionic Hadamard matrices for sufficiently large orders.
|
https://arxiv.org/abs/2601.22337
|
Academic Papers
|
svg
|
2814f401c145d7fa98e7b61326dab6d43537378216698882604c6bece4d44dbc
|
2026-02-02T00:00:00-05:00
|
Amortized Simulation-Based Inference in Generalized Bayes via Neural Posterior Estimation
|
arXiv:2601.22367v1 Announce Type: cross Abstract: Generalized Bayesian Inference (GBI) tempers a loss with a temperature $\beta>0$ to mitigate overconfidence and improve robustness under model misspecification, but existing GBI methods typically rely on costly MCMC or SDE-based samplers and must be re-run for each new dataset and each $\beta$ value. We give the first fully amortized variational approximation to the tempered posterior family $p_\beta(\theta \mid x) \propto \pi(\theta)\,p(x \mid \theta)^\beta$ by training a single $(x,\beta)$-conditioned neural posterior estimator $q_\phi(\theta \mid x,\beta)$ that enables sampling in a single forward pass, without simulator calls or inference-time MCMC. We introduce two complementary training routes: (i) synthesize off-manifold samples $(\theta,x) \sim \pi(\theta)\,p(x \mid \theta)^\beta$ and (ii) reweight a fixed base dataset $\pi(\theta)\,p(x \mid \theta)$ using self-normalized importance sampling (SNIS). We show that the SNIS-weighted objective provides a consistent forward-KL fit to the tempered posterior with finite weight variance. Across four standard simulation-based inference (SBI) benchmarks, including the chaotic Lorenz-96 system, our $\beta$-amortized estimator achieves competitive posterior approximations in standard two-sample metrics, matching non-amortized MCMC-based power-posterior samplers over a wide range of temperatures.
|
https://arxiv.org/abs/2601.22367
|
Academic Papers
|
svg
|
59288755ed2207518b7e44fda46efc82f7851fe5edc453d9836be7181c4879b9
|
2026-02-02T00:00:00-05:00
|
It's all the (Exponential) Family: An Equivalence between Maximum Likelihood Estimation and Control Variates for Sketching Algorithms
|
arXiv:2601.22378v1 Announce Type: cross Abstract: Maximum likelihood estimators (MLE) and control variate estimators (CVE) have been used in conjunction with known information across sketching algorithms and applications in machine learning. We prove that under certain conditions in an exponential family, an optimal CVE will achieve the same asymptotic variance as the MLE, giving an Expectation-Maximization (EM) algorithm for the MLE. Experiments show the EM algorithm is faster and numerically stable compared to other root finding algorithms for the MLE for the bivariate Normal distribution, and we expect this to hold across distributions satisfying these conditions. We show how the EM algorithm leads to reproducibility for algorithms using MLE / CVE, and demonstrate how the EM algorithm leads to finding the MLE when the CV weights are known.
|
https://arxiv.org/abs/2601.22378
|
Academic Papers
|
svg
|
2a5170a05374dccc340b2c1881cad011d3071565b80c1b94074fdc3aaa11ccde
|
2026-02-02T00:00:00-05:00
|
Spectral Filtering for Learning Quantum Dynamics
|
arXiv:2601.22400v1 Announce Type: cross Abstract: Learning high-dimensional quantum systems is a fundamental challenge that notoriously suffers from the curse of dimensionality. We formulate the task of predicting quantum evolution in the linear response regime as a specific instance of learning a Complex-Valued Linear Dynamical System (CLDS) with sector-bounded eigenvalues -- a setting that also encompasses modern Structured State Space Models (SSMs). While traditional system identification attempts to reconstruct full system matrices (incurring exponential cost in the Hilbert dimension), we propose Quantum Spectral Filtering, a method that shifts the goal to improper dynamic learning. Leveraging the optimal concentration properties of the Slepian basis, we prove that the learnability of such systems is governed strictly by an effective quantum dimension $k^*$, determined by the spectral bandwidth and memory horizon. This result establishes that complex-valued LDSs can be learned with sample and computational complexity independent of the ambient state dimension, provided their spectrum is bounded.
|
https://arxiv.org/abs/2601.22400
|
Academic Papers
|
svg
|
1c2defeceb41fa8a4ee2cac19f5fd3e35a37377eb74fa47f3c2fd9a5d31a80c5
|
2026-02-02T00:00:00-05:00
|
Minimal-Action Discrete Schr\"odinger Bridge Matching for Peptide Sequence Design
|
arXiv:2601.22408v1 Announce Type: cross Abstract: Generative modeling of peptide sequences requires navigating a discrete and highly constrained space in which many intermediate states are chemically implausible or unstable. Existing discrete diffusion and flow-based methods rely on reversing fixed corruption processes or following prescribed probability paths, which can force generation through low-likelihood regions and require countless sampling steps. We introduce Minimal-action discrete Schr\"odinger Bridge Matching (MadSBM), a rate-based generative framework for peptide design that formulates generation as a controlled continuous-time Markov process on the amino-acid edit graph. To yield probability trajectories that remain near high-likelihood sequence neighborhoods throughout generation, MadSBM 1) defines generation relative to a biologically informed reference process derived from pre-trained protein language model logits and 2) learns a time-dependent control field that biases transition rates to produce low-action transport paths from a masked prior to the data distribution. We finally introduce guidance to the MadSBM sampling procedure towards a specific functional objective, expanding the design space of therapeutic peptides; to our knowledge, this represents the first-ever application of discrete classifier guidance to Schr\"odinger bridge-based generative models.
|
https://arxiv.org/abs/2601.22408
|
Academic Papers
|
svg
|
15f37743799b579a015244e30464bcc7790ad38661c5f88bb727ca2ccd7f43fe
|
2026-02-02T00:00:00-05:00
|
On the computability of cofinal Fra\"iss\'e limits
|
arXiv:2601.22435v1 Announce Type: cross Abstract: For any collection of finite structures closed under isomorphism (i.e., an age) which has the Hereditary Property (HP), the Joint Embedding Property (JEP), and the Cofinal Amalgamation Property (CAP), there is a unique (up to isomorphism) countable structure which is cofinally ultrahomogeneous with the given age. Such a structure is called the cofinal Fra\"iss\'e limit of the age. In this paper, we consider the computational strength needed to construct the cofinal Fra\"iss\'e limit of a computable age. We show that this construction can always be done using the oracle 0''', and that there are ages that require 0''. In contrast, we show that if one assumes the strengthening of (CAP) known as the Amalgamation Property (AP), then the resulting limit, called the Fra\"iss\'e limit, can be constructed from the age using 0'. Our results therefore show that the more general case of cofinal Fra\"iss\'e limits requires greater computational strength than Fra\"iss\'e limits.
|
https://arxiv.org/abs/2601.22435
|
Academic Papers
|
svg
|
d59b9cf9a5e63cefa9313c79f5bbdfaafb436191cd9376fb2277fbe9fcac1b2c
|
2026-02-02T00:00:00-05:00
|
Simulation-based Bayesian inference with ameliorative learned summary statistics -- Part I
|
arXiv:2601.22441v1 Announce Type: cross Abstract: This paper, which is Part 1 of a two-part paper series, considers simulation-based inference with learned summary statistics, in which such a learned summary statistic serves as an empirical likelihood with ameliorative effects in the Bayesian setting, when the exact likelihood function associated with the observation data and the simulation model is difficult to obtain in a closed form or computationally intractable. In particular, a transformation technique which leverages the Cressie-Read discrepancy criterion under moment restrictions is used for summarizing the learned statistics between the observation data and the simulation outputs, while preserving the statistical power of the inference. Here, such a transformation of data-to-learned summary statistics also allows the simulation outputs to be conditioned on the observation data, so that the inference task can be performed over certain sample sets of the observation data that are considered empirically relevant or believed to be of particular importance. Moreover, the simulation-based inference framework discussed in this paper can be extended further to handle weakly dependent observation data. Finally, we remark that such an inference framework is suitable for implementation in distributed computing, i.e., computational tasks involving both the data-to-learned summary statistics and the Bayesian inferencing problem can be posed as a unified distributed inference problem that will exploit distributed optimization and MCMC algorithms for supporting large datasets associated with complex simulation models.
|
https://arxiv.org/abs/2601.22441
|
Academic Papers
|
svg
|
0dfd44c350583bdf0e33c44c2382b49fb33140fbd3a1fbd2462d4b40fe9eda3e
|
2026-02-02T00:00:00-05:00
|
AI Decodes Historical Chinese Archives to Reveal Lost Climate History
|
arXiv:2601.22458v1 Announce Type: cross Abstract: Historical archives contain qualitative descriptions of climate events, yet converting these into quantitative records has remained a fundamental challenge. Here we introduce a paradigm shift: a generative AI framework that inverts the logic of historical chroniclers by inferring the quantitative climate patterns associated with documented events. Applied to historical Chinese archives, it produces the sub-annual precipitation reconstruction for southeastern China over the period 1368-1911 AD. Our reconstruction not only quantifies iconic extremes like the Ming Dynasty's Great Drought but also, crucially, maps the full spatial and seasonal structure of El Ni$\~n$o influence on precipitation in this region over five centuries, revealing dynamics inaccessible in shorter modern records. Our methodology and high-resolution climate dataset are directly applicable to climate science and have broader implications for the historical and social sciences.
|
https://arxiv.org/abs/2601.22458
|
Academic Papers
|
svg
|
6910b62d6d002945d6a54521b98ee2fd7d48abd42ae91bab1e084b9f1f6d4f15
|
2026-02-02T00:00:00-05:00
|
On the undecidability of quantum channel capacities
|
arXiv:2601.22471v1 Announce Type: cross Abstract: An important distinction in our understanding of capacities of classical versus quantum channels is marked by the following question: is there an algorithm which can compute (or even efficiently compute) the capacity? While there is overwhelming evidence suggesting that quantum channel capacities may be uncomputable, a formal proof of any such statement is elusive. We initiate the study of the hardness of computing quantum channel capacities. We show that, for a general quantum channel, it is QMA-hard to compute its quantum capacity, and that the maximal-entanglement-assisted zero-error one-shot classical capacity is uncomputable.
|
https://arxiv.org/abs/2601.22471
|
Academic Papers
|
svg
|
62a96861b76261c2bfe367787e8a5e4bd74a4fe9d5b678542688fda1b34fe9e1
|
2026-02-02T00:00:00-05:00
|
Structural Conditions for Native CCZ Magic-State Fountains in qLDPC Codes
|
arXiv:2601.22489v1 Announce Type: cross Abstract: Quantum low-density parity-check (qLDPC) codes promise constant-rate, linear-distance families with bounded-weight checks, and recent work has realized transversal or constant-depth non-Clifford gates on various (often non-LDPC) codes. However, no explicit \emph{qubit} qLDPC family is known that simultaneously has constant rate, linear distance, bounded stabilizer weight, and a native \emph{magic-state fountain} that prepares many non-Clifford resource states in constant depth. We take a structural approach and identify coding-theoretic conditions under which a CSS qLDPC family necessarily supports a constant-depth $\CCZ$ magic-state fountain. The key ingredients are: (i) an algebraic notion of \emph{magic-friendly triples} of $X$-type logical operators, defined by pairwise orthogonality and a triple-overlap form controlling diagonal $\CCZ$ phases, and (ii) a 3-uniform hypergraph model of physical $\CCZ$ circuits combined with a packing lemma that turns large collections of such triples with bounded overlaps into bounded-degree hypergraphs. Our main theorem shows that if a CSS code family on $n$ qubits admits $\Omega(n^{1+\gamma})$ magic-friendly triples whose supports have bounded per-qubit participation, then there exists a constant-depth circuit of physical $\CCZ$ gates implementing $\Omega(n^{\gamma})$ logical $\CCZ$ gates in parallel while preserving distance up to a constant factor. For asymptotically good qLDPC families such as quantum Tanner codes, this reduces the existence of a native $\CCZ$ magic-state fountain to a concrete combinatorial problem about counting and distributing magic-friendly triples in the logical $X$ space.
|
https://arxiv.org/abs/2601.22489
|
Academic Papers
|
svg
|
0abc53fa2b7ded14ffdc32d346f8be7f176cc90745a2d109ce0118d8dcbe838b
|
2026-02-02T00:00:00-05:00
|
Corrected Samplers for Discrete Flow Models
|
arXiv:2601.22519v1 Announce Type: cross Abstract: Discrete flow models (DFMs) have been proposed to learn the data distribution on a finite state space, offering a flexible framework as an alternative to discrete diffusion models. A line of recent work has studied samplers for discrete diffusion models, such as tau-leaping and the Euler solver. However, these samplers require a large number of iterations to control discretization error, since the transition rates are frozen in time and evaluated at the initial state within each time interval. Moreover, theoretical results for these samplers often require boundedness conditions on the transition rate or focus on a specific type of source distribution. To address these limitations, we establish non-asymptotic discretization error bounds for these samplers, under the framework of discrete flow models, without any restriction on transition rates or source distributions. Furthermore, by analyzing a one-step lower bound of the Euler sampler, we propose two corrected samplers: \textit{time-corrected sampler} and \textit{location-corrected sampler}, which can reduce the discretization error of tau-leaping and the Euler solver with almost no additional computational cost. We rigorously show that the location-corrected sampler has a lower iteration complexity than existing parallel samplers. We validate the effectiveness of the proposed method by demonstrating improved generation quality and reduced inference time on both simulation and text-to-image generation tasks. Code can be found in https://github.com/WanZhengyan/Corrected-Samplers-for-Discrete-Flow-Models.
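To make the "frozen rates" point concrete, here is a minimal Euler sampler for a finite-state continuous-time Markov chain, where within each interval the rates are evaluated once at the left endpoint and the current state. This is an illustrative toy, not the paper's corrected samplers:

```python
import numpy as np

def euler_sampler(rate_fn, x0, t_grid, rng):
    """Euler sampler for a CTMC on a finite state space.

    Within each interval [t_k, t_{k+1}] the transition rates are frozen at the
    left endpoint and the current state, which is exactly the source of
    discretization error discussed in the abstract. rate_fn(t) returns an
    (S, S) matrix with nonnegative off-diagonal rates.
    """
    x = x0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        rates = rate_fn(t0)[x].copy()   # outgoing rates, frozen at (t0, x)
        rates[x] = 0.0
        probs = h * rates               # first-order jump probabilities
        probs[x] = max(0.0, 1.0 - probs.sum())
        probs /= probs.sum()
        x = int(rng.choice(len(probs), p=probs))
    return x

rng = np.random.default_rng(0)
R = np.array([[0.0, 2.0], [1.0, 0.0]])  # toy 2-state chain
x_final = euler_sampler(lambda t: R, 0, np.linspace(0.0, 1.0, 21), rng)
```

Shrinking the grid spacing reduces the error from freezing the rates, at the cost of more steps, which is the trade-off the corrected samplers aim to improve.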
|
https://arxiv.org/abs/2601.22519
|
Academic Papers
|
svg
|
0aad305ecb6cdf6584b4381c2e769429364ca4f00dc7a3958efe3170d3c3a95b
|
2026-02-02T00:00:00-05:00
|
EndoCaver: Handling Fog, Blur and Glare in Endoscopic Images via Joint Deblurring-Segmentation
|
arXiv:2601.22537v1 Announce Type: cross Abstract: Endoscopic image analysis is vital for colorectal cancer screening, yet real-world conditions often suffer from lens fogging, motion blur, and specular highlights, which severely compromise automated polyp detection. We propose EndoCaver, a lightweight transformer with a unidirectional-guided dual-decoder architecture, enabling joint multi-task capability for image deblurring and segmentation while significantly reducing computational complexity and model parameters. Specifically, it integrates a Global Attention Module (GAM) for cross-scale aggregation, a Deblurring-Segmentation Aligner (DSA) to transfer restoration cues, and a cosine-based scheduler (LoCoS) for stable multi-task optimisation. Experiments on the Kvasir-SEG dataset show that EndoCaver achieves 0.922 Dice on clean data and 0.889 under severe image degradation, surpassing state-of-the-art methods while reducing model parameters by 90%. These results demonstrate its efficiency and robustness, making it well-suited for on-device clinical deployment. Code is available at https://github.com/ReaganWu/EndoCaver.
|
https://arxiv.org/abs/2601.22537
|
Academic Papers
|
svg
|
cdc576812d85bbf7de3e7559ce58b3ffae2d21427558530d2936d464a835eb0e
|
2026-02-02T00:00:00-05:00
|
Bonnet: Ultra-fast whole-body bone segmentation from CT scans
|
arXiv:2601.22576v1 Announce Type: cross Abstract: This work proposes Bonnet, an ultra-fast sparse-volume pipeline for whole-body bone segmentation from CT scans. Accurate bone segmentation is important for surgical planning and anatomical analysis, but existing 3D voxel-based models such as nnU-Net and STU-Net require heavy computation and often take several minutes per scan, which limits time-critical use. The proposed Bonnet addresses this by integrating a series of novel framework components including HU-based bone thresholding, patch-wise inference with a sparse spconv-based U-Net, and multi-window fusion into a full-volume prediction. Trained on TotalSegmentator and evaluated without additional tuning on RibSeg, CT-Pelvic1K, and CT-Spine1K, Bonnet achieves high Dice across ribs, pelvis, and spine while running in only 2.69 seconds per scan on an RTX A6000. Compared to strong voxel baselines, Bonnet attains a similar accuracy but reduces inference time by roughly 25x on the same hardware and tiling setup. The toolkit and pre-trained models will be released at https://github.com/HINTLab/Bonnet.
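The HU-based thresholding step that feeds the sparse network can be sketched as follows; the threshold value and array shapes here are illustrative assumptions, not Bonnet's actual settings:

```python
import numpy as np

def bone_candidate_voxels(ct_hu, threshold_hu=200.0):
    """Extract sparse candidate bone voxels from a CT volume by HU thresholding.

    Cortical bone typically exceeds a few hundred Hounsfield units, so a simple
    threshold yields a sparse set of coordinates plus intensity features, the
    kind of input an spconv-style sparse U-Net consumes.
    """
    mask = ct_hu >= threshold_hu
    coords = np.argwhere(mask)             # (N, 3) voxel indices
    feats = ct_hu[mask].reshape(-1, 1)     # (N, 1) HU features
    return coords, feats

vol = np.full((4, 4, 4), -1000.0)          # air everywhere ...
vol[1:3, 1:3, 1:3] = 800.0                 # ... with a small "bone" cube
coords, feats = bone_candidate_voxels(vol)
```

Because most of a whole-body CT is soft tissue or air, restricting computation to the thresholded voxels is what enables the large speedup over dense voxel models.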
|
https://arxiv.org/abs/2601.22576
|
Academic Papers
|
svg
|
e3c1a5ef7dceabed0af068d5184ce283fe2082fee680c946e146a2dec9154650
|
2026-02-02T00:00:00-05:00
|
An Efficient Algorithm for Thresholding Monte Carlo Tree Search
|
arXiv:2601.22600v1 Announce Type: cross Abstract: We introduce the Thresholding Monte Carlo Tree Search problem, in which, given a tree $\mathcal{T}$ and a threshold $\theta$, a player must answer whether the root node value of $\mathcal{T}$ is at least $\theta$ or not. In the given tree, `MAX' or `MIN' is labeled on each internal node, and the value of a `MAX'-labeled (`MIN'-labeled) internal node is the maximum (minimum) of its child values. The value of a leaf node is the mean reward of an unknown distribution, from which the player can sample rewards. For this problem, we develop a $\delta$-correct sequential sampling algorithm based on the Track-and-Stop strategy that has asymptotically optimal sample complexity. We show that a ratio-based modification of the D-Tracking arm-pulling strategy leads to a substantial improvement in empirical sample complexity, as well as reducing the per-round computational cost from linear to logarithmic in the number of arms.
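The tree-value recursion the problem is built on can be sketched directly; with known leaf means the root value is exact, and the sampling algorithm's job is to decide the threshold question from noisy leaf rewards instead:

```python
def root_value(node):
    """Exact root value of a MAX/MIN tree with known leaf means.

    node is either a float (a leaf's mean reward) or a tuple
    ('MAX' | 'MIN', [children]). The thresholding problem asks whether this
    value is at least theta.
    """
    if isinstance(node, (int, float)):
        return float(node)
    label, children = node
    vals = [root_value(c) for c in children]
    return max(vals) if label == 'MAX' else min(vals)

tree = ('MAX', [('MIN', [0.3, 0.9]), ('MIN', [0.6, 0.7])])
v = root_value(tree)  # max(min(0.3, 0.9), min(0.6, 0.7)) = 0.6
```

For this toy tree, answering "is the root value at least theta = 0.5?" is yes, and a good sequential algorithm should certify that while sampling the uninformative leaves as little as possible.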
|
https://arxiv.org/abs/2601.22600
|
Academic Papers
|
svg
|
4493c88f29313e76a544213832cfe1b8a3745ac810fe7f8478bb47757409ed54
|
2026-02-02T00:00:00-05:00
|
RPWithPrior: Label Differential Privacy in Regression
|
arXiv:2601.22625v1 Announce Type: cross Abstract: With the wide application of machine learning techniques in practice, privacy preservation has gained increasing attention. Protecting user privacy with minimal accuracy loss is a fundamental task in the data analysis and mining community. In this paper, we focus on regression tasks under $\epsilon$-label differential privacy guarantees. Some existing methods for regression with $\epsilon$-label differential privacy, such as the RR-On-Bins mechanism, discretize the output space into finite bins and then apply the randomized response (RR) algorithm. To efficiently determine these bins, the authors rounded the original responses down to integer values. However, such an operation does not align well with real-world scenarios. To overcome these limitations, we model both original and randomized responses as continuous random variables, avoiding discretization entirely. Our novel approach estimates an optimal interval for randomized responses and introduces new algorithms designed for scenarios where a prior is either known or unknown. Additionally, we prove that our algorithm, RPWithPrior, guarantees $\epsilon$-label differential privacy. Numerical results demonstrate that our approach outperforms the Gaussian, Laplace, Staircase, RR-On-Bins, and Unbiased mechanisms on the Communities and Crime, Criteo Sponsored Search Conversion Log, and California Housing datasets.
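As background on the baseline the paper improves on, here is a minimal sketch of K-ary randomized response over discretized labels (the RR-On-Bins idea): report the true bin with probability $e^\epsilon / (e^\epsilon + K - 1)$, otherwise a uniformly random other bin, which yields $\epsilon$-label differential privacy.

```python
import math
import random

def rr_on_bins(true_bin, num_bins, epsilon, rng):
    """K-ary randomized response over label bins.

    Reports the true bin with probability e^eps / (e^eps + K - 1) and any
    other bin with probability 1 / (e^eps + K - 1), so the likelihood ratio
    between any two inputs for any output is at most e^eps.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_bins - 1)
    if rng.random() < p_true:
        return true_bin
    other = rng.randrange(num_bins - 1)   # uniform over the K-1 other bins
    return other if other < true_bin else other + 1

rng = random.Random(0)
K, eps = 8, 1.0
out = rr_on_bins(3, K, eps, rng)
p_true = math.exp(eps) / (math.exp(eps) + K - 1)
p_other = 1.0 / (math.exp(eps) + K - 1)
```

The paper's objection is that forming the bins by rounding continuous responses to integers is ill-suited to real data, motivating its fully continuous treatment.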
|
https://arxiv.org/abs/2601.22625
|
Academic Papers
|
svg
|
3b4f718957b2f6505210a653268876c53fbdf6d2a7dbd54f015b619912d41939
|
2026-02-02T00:00:00-05:00
|
Training Beyond Convergence: Grokking nnU-Net for Glioma Segmentation in Sub-Saharan MRI
|
arXiv:2601.22637v1 Announce Type: cross Abstract: Gliomas are placing an increasing clinical burden on Sub-Saharan Africa (SSA). In the region, the median survival for patients remains under two years, and access to diagnostic imaging is extremely limited. These constraints highlight an urgent need for automated tools that can extract the maximum possible information from each available scan: tools trained specifically on local data, rather than adapted from high-income settings where conditions are vastly different. We utilize the Brain Tumor Segmentation (BraTS) Africa 2025 Challenge dataset, an expert-annotated collection of glioma MRIs. Our objectives are: (i) to establish a strong baseline with nnU-Net on this dataset, and (ii) to explore whether the celebrated "grokking" phenomenon, an abrupt, late-training jump from memorization to superior generalization, can be triggered to push performance without extra labels. We evaluate two training regimes. The first is a fast, budget-conscious approach that limits optimization to just a few epochs, reflecting the constrained GPU resources typically available in African institutions. Despite this limitation, nnU-Net achieves strong Dice scores: 92.3% for whole tumor (WT), 86.6% for tumor core (TC), and 86.3% for enhancing tumor (ET). The second regime extends training well beyond the point of convergence, aiming to trigger a grokking-driven performance leap. With this approach, we achieved grokking and improved our results, reaching Dice scores of 92.2% for whole tumor (WT), 90.1% for tumor core (TC), and 90.2% for enhancing tumor (ET).
|
https://arxiv.org/abs/2601.22637
|
Academic Papers
|
svg
|
dee62dee29312765cccdd35b66ec5bec00dbede3e6be13ab4e1eedfe4ebcc017
|
2026-02-02T00:00:00-05:00
|
Generative and Nonparametric Approaches for Conditional Distribution Estimation: Methods, Perspectives, and Comparative Evaluations
|
arXiv:2601.22650v1 Announce Type: cross Abstract: The inference of conditional distributions is a fundamental problem in statistics, essential for prediction, uncertainty quantification, and probabilistic modeling. A wide range of methodologies have been developed for this task. This article reviews and compares several representative approaches spanning classical nonparametric methods and modern generative models. We begin with the single-index method of Hall and Yao (2005), which estimates the conditional distribution through a dimension-reducing index and nonparametric smoothing of the resulting one-dimensional cumulative conditional distribution function. We then examine the basis-expansion approaches, including FlexCode (Izbicki and Lee, 2017) and DeepCDE (Dalmasso et al., 2020), which convert conditional density estimation into a set of nonparametric regression problems. In addition, we discuss two recent generative simulation-based methods that leverage modern deep generative architectures: the generative conditional distribution sampler (Zhou et al., 2023) and the conditional denoising diffusion probabilistic model (Fu et al., 2024; Yang et al., 2025). A systematic numerical comparison of these approaches is provided using a unified evaluation framework that ensures fairness and reproducibility. The performance metrics used for the estimated conditional distribution include the mean-squared errors of conditional mean and standard deviation, as well as the Wasserstein distance. We also discuss their flexibility and computational costs, highlighting the distinct advantages and limitations of each approach.
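One of the evaluation metrics the review uses, the Wasserstein distance, has a closed form for one-dimensional empirical measures: sort both samples and average the absolute differences. A minimal sketch:

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    For empirical measures on the line, the optimal coupling matches sorted
    order, so the distance is the mean absolute difference of the sorted
    samples.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "this sketch assumes equal sample sizes"
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 1000)
w_shift = wasserstein1_1d(a, a + 2.0)  # a pure shift moves mass by exactly 2
```

Because a location shift leaves the sort order unchanged, the distance between a sample and its shifted copy equals the shift, which makes this metric easy to sanity-check in conditional-distribution benchmarks.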
|
https://arxiv.org/abs/2601.22650
|
Academic Papers
|
svg
|
5aca1d63da40f3aa4b841f11ed188c4e0298484525a4d835e7f3f46a038ae06f
|
2026-02-02T00:00:00-05:00
|
Spectral Gradient Descent Mitigates Anisotropy-Driven Misalignment: A Case Study in Phase Retrieval
|
arXiv:2601.22652v1 Announce Type: cross Abstract: Spectral gradient methods, such as the Muon optimizer, modify gradient updates by preserving directional information while discarding scale, and have shown strong empirical performance in deep learning. We investigate the mechanisms underlying these gains through a dynamical analysis of a nonlinear phase retrieval model with anisotropic Gaussian inputs, equivalent to training a two-layer neural network with the quadratic activation and fixed second-layer weights. Focusing on a spiked covariance setting where the dominant variance direction is orthogonal to the signal, we show that gradient descent (GD) suffers from a variance-induced misalignment: during the early escaping stage, the high-variance but uninformative spike direction is multiplicatively amplified, degrading alignment with the true signal under strong anisotropy. In contrast, spectral gradient descent (SpecGD) removes this spike amplification effect, leading to stable alignment and accelerated noise contraction. Numerical experiments confirm the theory and show that these phenomena persist under broader anisotropic covariances.
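The "preserve direction, discard scale" operation behind spectral gradient methods can be sketched via the SVD: replace all singular values of the gradient matrix by one, mapping $G = U S V^\top$ to $U V^\top$. A minimal sketch (Muon in practice approximates this orthogonalization iteratively rather than with an exact SVD):

```python
import numpy as np

def spectral_grad(G):
    """Spectral gradient update direction: G = U S V^T -> U V^T.

    All singular values become 1, so the high-variance spike direction can no
    longer be multiplicatively amplified relative to the signal direction.
    """
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
G = rng.normal(size=(5, 3))
M = spectral_grad(G)
s = np.linalg.svd(M, compute_uv=False)   # all ones by construction
```

Note that `M` still has positive inner product with `G` (their alignment equals the sum of `G`'s singular values), so the update remains a descent direction while the anisotropic scaling is removed.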
|
https://arxiv.org/abs/2601.22652
|
Academic Papers
|
svg
|
db2feb5235eef27fefd19798c73611bb7b523aaa2b574c160cbcbfcbb4bc8974
|
2026-02-02T00:00:00-05:00
|
Parametric vector flows for registration fields in bounded domains with applications to nonlinear interpolation of shock-dominated flows
|
arXiv:2601.22712v1 Announce Type: cross Abstract: We present a registration procedure for parametric model order reduction (MOR) in two- and three-dimensional bounded domains. In the MOR framework, registration methods exploit solution snapshots to identify a parametric coordinate transformation that improves the approximation of the solution set through linear subspaces. For each training parameter, optimization-based (or variational) registration methods minimize a target function that measures the alignment of the coherent structures of interest (e.g., shocks, shear layers, cracks) for different parameter values, over a family of bijections of the computational domain $\Omega$. We consider diffeomorphisms $\Phi$ that are vector flows of given velocity fields $v$ with vanishing normal component on $\partial \Omega$; we rely on a sensor to extract appropriate point clouds from the solution snapshots and we develop an expectation-maximization procedure to simultaneously solve the point cloud matching problem and to determine the velocity $v$ (and thus the bijection $\Phi$); finally, we combine our registration method with the nonlinear interpolation technique of [Iollo, Taddei, J. Comput. Phys., 2022] to perform accurate interpolations of fluid dynamic fields in the presence of shocks. Numerical results for a two-dimensional inviscid transonic flow past a NACA airfoil and a three-dimensional viscous transonic flow past an ONERA M6 wing illustrate the many elements of the methodology and demonstrate the effectiveness of nonlinear interpolation for shock-dominated fields.
|
https://arxiv.org/abs/2601.22712
|
Academic Papers
|
svg
|
848fa613df4c55304c8909c546b9b0b628b06345849e323f91a82b6cbb97ec6b
|
2026-02-02T00:00:00-05:00
|
Profunctorial algebras
|
arXiv:2601.22721v1 Announce Type: cross Abstract: We provide a bicategorical generalization of Barr's landmark 1970 paper, in which he describes how to extend Set-monads to relations and uses this to characterize topological spaces as the relational algebras of the ultrafilter monad. With two-sided discrete fibrations playing the role of relations in a bicategory, we first characterize, in terms of exact squares, when pseudomonads on a bicategory extend to its bicategory of two-sided discrete fibrations. As a wide class of examples, we show that every Set-monad induces a pseudomonad on the 2-category of categories satisfying our criterion and thus extending to profunctors. Among these, we then focus on the ultracompletion pseudomonad, whose pseudoalgebras are ultracategories: we characterize the normalized lax algebras of its profunctorial extension as ultraconvergence spaces, a recently-introduced categorification of topological spaces.
|
https://arxiv.org/abs/2601.22721
|
Academic Papers
|
svg
|
d34530e13838b8541ffc64c205edf4dc1a4e85a75c43f1abe7257168ce97aa70
|
2026-02-02T00:00:00-05:00
|
A Cross-Domain Graph Learning Protocol for Single-Step Molecular Geometry Refinement
|
arXiv:2601.22723v1 Announce Type: cross Abstract: Accurate molecular geometries are a prerequisite for reliable quantum-chemical predictions, yet density functional theory (DFT) optimization remains a major bottleneck for high-throughput molecular screening. Here we present GeoOpt-Net, a multi-branch SE(3)-equivariant geometry refinement network that predicts DFT-quality structures at the B3LYP/TZVP level of theory in a single forward pass starting from inexpensive initial conformers generated at a low-cost force-field level. GeoOpt-Net is trained using a two-stage strategy in which a broadly pretrained geometric representation is subsequently fine-tuned to approach B3LYP/TZVP-level accuracy, with theory- and basis-set-aware calibration enabled by a fidelity-aware feature modulation (FAFM) mechanism. Benchmarking against representative approaches spanning classical conformer generation (RDKit), semiempirical quantum methods (xTB), data-driven geometry refinement pipelines (Auto3D), and machine-learning interatomic potentials (UMA) on external drug-like molecules demonstrates that GeoOpt-Net achieves sub-milli-\AA{} all-atom RMSD with near-zero B3LYP/TZVP single-point energy deviations, indicating DFT-ready geometries that closely reproduce both structural and energetic references. Beyond geometric metrics, GeoOpt-Net generates initial guesses intrinsically compatible with DFT convergence criteria, yielding nonzero ``All-YES'' convergence rates (65.0\% under loose and 33.4\% under default thresholds), and substantially reducing re-optimization steps and wall-clock time. GeoOpt-Net further exhibits smooth and predictable energy scaling with molecular complexity while preserving key electronic observables such as dipole moments. Collectively, these results establish GeoOpt-Net as a scalable, physically consistent geometry refinement framework that enables efficient acceleration of DFT-based quantum-chemical workflows.
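The headline geometric metric, all-atom RMSD, is simple to state; a minimal sketch (shown without the optimal superposition, i.e. Kabsch alignment, that benchmark pipelines usually apply first):

```python
import numpy as np

def rmsd(a, b):
    """All-atom RMSD between two conformers with matched atom ordering:
    the root of the mean squared per-atom displacement, in the same units
    as the input coordinates (typically Angstroms)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # toy 2-atom "molecule"
pred = ref + np.array([0.0, 0.1, 0.0])               # uniform 0.1 A shift
r = rmsd(ref, pred)
```

Sub-milli-Angstrom RMSD, as reported in the abstract, means this quantity is below 0.001 A, i.e. the predicted geometry is essentially indistinguishable from the DFT-optimized one at the coordinate level.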
|
https://arxiv.org/abs/2601.22723
|
Academic Papers
|
svg
|
90cb8f507f01204238230176ee2de88c19a9a95365ed109c21fa96525b373916
|
2026-02-02T00:00:00-05:00
|
Active Learning-Driven Lightweight YOLOv9: Enhancing Efficiency in Smart Agriculture
|
arXiv:2601.22732v1 Announce Type: cross Abstract: This study addresses the demand for real-time detection of tomatoes and tomato flowers by agricultural robots deployed on edge devices in greenhouse environments. Under practical imaging conditions, object detection systems often face challenges such as large scale variations caused by varying camera distances, severe occlusion from plant structures, and highly imbalanced class distributions. These factors make conventional object detection approaches that rely on fully annotated datasets difficult to simultaneously achieve high detection accuracy and deployment efficiency. To overcome these limitations, this research proposes an active learning driven lightweight object detection framework, integrating data analysis, model design, and training strategy. First, the size distribution of objects in raw agricultural images is analyzed to redefine an operational target range, thereby improving learning stability under real-world conditions. Second, an efficient feature extraction module is incorporated to reduce computational cost, while a lightweight attention mechanism is introduced to enhance feature representation under multi-scale and occluded scenarios. Finally, an active learning strategy is employed to iteratively select high-information samples for annotation and training under a limited labeling budget, effectively improving the recognition performance of minority and small-object categories. Experimental results demonstrate that, while maintaining a low parameter count and inference cost suitable for edge-device deployment, the proposed method effectively improves the detection performance of tomatoes and tomato flowers in raw images. Under limited annotation conditions, the framework achieves an overall detection accuracy of 67.8% mAP, validating its practicality and feasibility for intelligent agricultural applications.
|
https://arxiv.org/abs/2601.22732
|
Academic Papers
|
svg
|
0326e189b22a879ace1a8b8af5f6f04f566145b374ab8add05903b6c4b04705f
|
2026-02-02T00:00:00-05:00
|
Synthetic Abundance Maps for Unsupervised Super-Resolution of Hyperspectral Remote Sensing Images
|
arXiv:2601.22755v1 Announce Type: cross Abstract: Hyperspectral single image super-resolution (HS-SISR) aims to enhance the spatial resolution of hyperspectral images to fully exploit their spectral information. While considerable progress has been made in this field, most existing methods are supervised and require ground-truth data for training, data that is often unavailable in practice. To overcome this limitation, we propose a novel unsupervised training framework for HS-SISR, based on synthetic abundance data. The approach begins by unmixing the hyperspectral image into endmembers and abundances. A neural network is then trained to perform abundance super-resolution using synthetic abundances only. These synthetic abundance maps are generated from a dead leaves model whose characteristics are inherited from the low-resolution image to be super-resolved. This trained network is subsequently used to enhance the spatial resolution of the original image's abundances, and the final super-resolved hyperspectral image is reconstructed by combining them with the endmembers. Experimental results demonstrate both the training value of the synthetic data and the effectiveness of the proposed method.
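The final reconstruction step rests on the linear mixing model: each pixel's spectrum is an abundance-weighted combination of the endmember spectra. A minimal sketch with illustrative shapes (the pipeline's actual unmixing and network are not shown):

```python
import numpy as np

def reconstruct_hsi(abundances, endmembers):
    """Linear mixing model: pixel spectrum = abundances @ endmember spectra.

    abundances: (H, W, P), nonnegative and summing to one per pixel;
    endmembers: (P, B) spectra over B bands. Returns an (H*W, B) matrix.
    """
    H, W, P = abundances.shape
    return abundances.reshape(-1, P) @ endmembers

rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(3), size=(4, 4))   # (4, 4, 3) abundance maps
E = rng.uniform(0.0, 1.0, size=(3, 31))      # 3 endmembers, 31 bands
cube = reconstruct_hsi(A, E).reshape(4, 4, 31)
```

Because each reconstructed spectrum is a convex combination of the endmembers, the output stays within the endmembers' value range, one reason super-resolving the low-dimensional abundances (rather than all bands) is attractive.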
|
https://arxiv.org/abs/2601.22755
|
Academic Papers
|
svg
|
828e1b1b30f7f93e5809a470154f78a70e4fbf9788350a600a195bc3f911849d
|
2026-02-02T00:00:00-05:00
|
Bayesian Matrix Completion Under Geometric Constraints
|
arXiv:2601.22765v1 Announce Type: cross Abstract: The completion of a Euclidean distance matrix (EDM) from sparse and noisy observations is a fundamental challenge in signal processing, with applications in sensor network localization, acoustic room reconstruction, molecular conformation, and manifold learning. Traditional approaches, such as rank-constrained optimization and semidefinite programming, enforce geometric constraints but often struggle under sparse or noisy conditions. This paper introduces a hierarchical Bayesian framework that places structured priors directly on the latent point set generating the EDM, naturally embedding geometric constraints. By incorporating a hierarchical prior on the latent point set, the model enables automatic regularization and robust noise handling. Posterior inference is performed using a Metropolis-Hastings-within-Gibbs sampler to handle the coupled latent-point posterior. Experiments on synthetic data demonstrate improved reconstruction accuracy compared to deterministic baselines in sparse regimes.
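The geometric structure that modeling the latent point set buys for free can be sketched directly: an EDM built from d-dimensional points is automatically symmetric, zero-diagonal, and its doubly centered Gram matrix has rank at most d, which is exactly what rank-constrained methods must enforce explicitly.

```python
import numpy as np

def edm(points):
    """Squared Euclidean distance matrix from a latent point set:
    D_ij = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>."""
    sq = np.sum(points ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * points @ points.T

rng = np.random.default_rng(0)
n, d = 6, 2
X = rng.normal(size=(n, d))                  # six latent points in the plane
D = edm(X)
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
G = -0.5 * J @ D @ J                         # doubly centered Gram matrix
rank = int(np.linalg.matrix_rank(G, tol=1e-8))
```

A prior on `X` therefore constrains `D` to the EDM manifold by construction, rather than through an explicit rank penalty.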
|
https://arxiv.org/abs/2601.22765
|
Academic Papers
|
svg
|
8382e5034d9456d99402350ef0bcc8641586044248498a9902085ebedb105321
|
2026-02-02T00:00:00-05:00
|
GRANITE: A Generalized Regional Framework for Identifying Agreement in Feature-Based Explanations
|
arXiv:2601.22771v1 Announce Type: cross Abstract: Feature-based explanation methods aim to quantify how features influence the model's behavior, either locally or globally, but different methods often disagree, producing conflicting explanations. This disagreement arises primarily from two sources: how feature interactions are handled and how feature dependencies are incorporated. We propose GRANITE, a generalized regional explanation framework that partitions the feature space into regions where interaction and distribution influences are minimized. This approach aligns different explanation methods, yielding more consistent and interpretable explanations. GRANITE unifies existing regional approaches, extends them to feature groups, and introduces a recursive partitioning algorithm to estimate such regions. We demonstrate its effectiveness on real-world datasets, providing a practical tool for consistent and interpretable feature explanations.
|
https://arxiv.org/abs/2601.22771
|
Academic Papers
|
svg
|
307e4e6156c30a95e76c008fb1807e1083c1e06d82697580213c1837e4ae7078
|
2026-02-02T00:00:00-05:00
|
Streaming Speech Recognition with Decoder-Only Large Language Models and Latency Optimization
|
arXiv:2601.22779v1 Announce Type: cross Abstract: Recent advances have demonstrated the potential of decoder-only large language models (LLMs) for automatic speech recognition (ASR). However, enabling streaming recognition within this framework remains a challenge. In this work, we propose a novel streaming ASR approach that integrates a read/write policy network with monotonic chunkwise attention (MoChA) to dynamically segment speech embeddings. These segments are interleaved with label sequences during training, enabling seamless integration with the LLM. During inference, the audio stream is buffered until the MoChA module triggers a read signal, at which point the buffered segment together with the previous token is fed into the LLM for the next-token prediction. We also introduce a minimal-latency training objective to guide the policy network toward accurate segmentation boundaries. Furthermore, we adopt a joint training strategy in which a non-streaming LLM-ASR model and our streaming model share parameters. Experiments on the AISHELL-1 and AISHELL-2 Mandarin benchmarks demonstrate that our method consistently outperforms recent streaming ASR baselines, achieving character error rates of 5.1% and 5.5%, respectively. The latency optimization results in a 62.5% reduction in average token generation delay with negligible impact on recognition accuracy.
|
https://arxiv.org/abs/2601.22779
|
Academic Papers
|
svg
|
a056a6d23c3bed20f6a855e6d23aa95fb882820368bde44d947ff3bbe370d9a5
|
2026-02-02T00:00:00-05:00
|
Approximating $f$-Divergences with Rank Statistics
|
arXiv:2601.22784v1 Announce Type: cross Abstract: We introduce a rank-statistic approximation of $f$-divergences that avoids explicit density-ratio estimation by working directly with the distribution of ranks. For a resolution parameter $K$, we map the mismatch between two univariate distributions $\mu$ and $\nu$ to a rank histogram on $\{ 0, \ldots, K\}$ and measure its deviation from uniformity via a discrete $f$-divergence, yielding a rank-statistic divergence estimator. We prove that the resulting estimator of the divergence is monotone in $K$, is always a lower bound of the true $f$-divergence, and we establish quantitative convergence rates for $K\to\infty$ under mild regularity of the quantile-domain density ratio. To handle high-dimensional data, we define the sliced rank-statistic $f$-divergence by averaging the univariate construction over random projections, and we provide convergence results for the sliced limit as well. We also derive finite-sample deviation bounds along with asymptotic normality results for the estimator. Finally, we empirically validate the approach by benchmarking against neural baselines and illustrating its use as a learning objective in generative modelling experiments.
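The univariate construction can be sketched in a few lines: for each sample from $\mu$, its rank among $K$ fresh samples from $\nu$ lies in $\{0,\ldots,K\}$; under $\mu = \nu$ the ranks are uniform, so the divergence of the rank histogram from uniform serves as a lower-bound-style surrogate. This is a sketch of the construction with $f = t\log t$ (KL), not the paper's estimator:

```python
import numpy as np

def rank_statistic_kl(mu_samples, nu_samples, K, rng):
    """Rank-statistic KL surrogate at resolution K.

    Each mu-sample's rank among K nu-samples falls in {0, ..., K}; the KL
    divergence of the empirical rank histogram from the uniform distribution
    on K+1 bins lower-bounds (up to estimation error) the true divergence.
    """
    counts = np.zeros(K + 1)
    for x in mu_samples:
        ref = rng.choice(nu_samples, size=K, replace=False)
        counts[int(np.sum(ref < x))] += 1
    p = counts / counts.sum()
    q = 1.0 / (K + 1)                        # uniform reference histogram
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q)))

rng = np.random.default_rng(0)
nu = rng.normal(0.0, 1.0, 5000)
d_same = rank_statistic_kl(rng.normal(0.0, 1.0, 2000), nu, K=4, rng=rng)
d_diff = rank_statistic_kl(rng.normal(1.5, 1.0, 2000), nu, K=4, rng=rng)
```

Matching distributions give a near-zero value while a mean-shifted $\mu$ piles ranks into the top bin and inflates the divergence, with larger `K` sharpening the resolution, consistent with the monotonicity result in the abstract.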
|
https://arxiv.org/abs/2601.22784
|
Academic Papers
|
svg
|
c60b9a82705383fb6014d770aa453ffb20e9dfdb11104f97082ae9b1469694cb
|
2026-02-02T00:00:00-05:00
|
CALM: Joint Contextual Acoustic-Linguistic Modeling for Personalization of Multi-Speaker ASR
|
arXiv:2601.22792v1 Announce Type: cross Abstract: We present CALM, a joint Contextual Acoustic-Linguistic Modeling framework for multi-speaker automatic speech recognition (ASR). In personalized AI scenarios, the joint availability of acoustic and linguistic cues naturally motivates the integration of target-speaker conditioning with contextual biasing in overlapping conversations. CALM implements this integration in an end-to-end framework through speaker embedding-driven target-speaker extraction and dynamic vocabulary-based contextual biasing. We evaluate CALM on simulated English (LibriSpeechMix) and Japanese (Corpus of Spontaneous Japanese mixtures, CSJMix). On two-speaker mixtures, CALM reduces biased word error rate (B-WER) from 12.7 to 4.7 on LibriSpeech2Mix and biased character error rate (B-CER) from 16.6 to 8.4 on CSJMix2 (eval3), demonstrating the effectiveness of joint acoustic-linguistic modeling across languages. We additionally report results on the AMI corpus (IHM-mix condition) to validate performance on standardized speech mixtures.
|
https://arxiv.org/abs/2601.22792
|
Academic Papers
|
svg
|
363eff98120f907f36cc0f20a964738ab803ef004c854b935cb59cdade49a58e
|
2026-02-02T00:00:00-05:00
|
EmoShift: Lightweight Activation Steering for Enhanced Emotion-Aware Speech Synthesis
|
arXiv:2601.22873v1 Announce Type: cross Abstract: Achieving precise and controllable emotional expression is crucial for producing natural and context-appropriate speech in text-to-speech (TTS) synthesis. However, many emotion-aware TTS systems, including large language model (LLM)-based designs, rely on scaling fixed emotion embeddings or external guidance, limiting their ability to model emotion-specific latent characteristics. To address this gap, we present EmoShift, a lightweight activation-steering framework incorporating an EmoSteer layer, which learns a steering vector for each target emotion in the output embedding space to capture its latent offset and maintain stable, appropriate expression across utterances and categories. With only 10M trainable parameters, less than 1/30 of full fine-tuning, EmoShift outperforms zero-shot and fully fine-tuned baselines in objective and subjective evaluations, enhancing emotional expressiveness while preserving naturalness and speaker similarity. Further analysis confirms the proposed EmoSteer layer's effectiveness and reveals its potential for controllable emotional intensity in speech synthesis.
|
https://arxiv.org/abs/2601.22873
|
Academic Papers
|
svg
|
5481844a5335d6816db9ae7f628f84f0d11f952c4943d2e1fb1553e09bb58ed6
|
2026-02-02T00:00:00-05:00
|
Development of Domain-Invariant Visual Enhancement and Restoration (DIVER) Approach for Underwater Images
|
arXiv:2601.22878v1 Announce Type: cross Abstract: Underwater images suffer severe degradation due to wavelength-dependent attenuation, scattering, and illumination non-uniformity that vary across water types and depths. We propose an unsupervised Domain-Invariant Visual Enhancement and Restoration (DIVER) framework that integrates empirical correction with physics-guided modeling for robust underwater image enhancement. DIVER first applies either IlluminateNet for adaptive luminance enhancement or a Spectral Equalization Filter for spectral normalization. An Adaptive Optical Correction Module then refines hue and contrast using channel-adaptive filtering, while Hydro-OpticNet employs physics-constrained learning to compensate for backscatter and wavelength-dependent attenuation. The parameters of IlluminateNet and Hydro-OpticNet are optimized via unsupervised learning using a composite loss function. DIVER is evaluated on eight diverse datasets covering shallow, deep, and highly turbid environments, including both naturally low-light and artificially illuminated scenes, using reference and non-reference metrics. While state-of-the-art methods such as WaterNet, UDNet, and Phaseformer perform reasonably in shallow water, their performance degrades in deep, unevenly illuminated, or artificially lit conditions. In contrast, DIVER consistently achieves best or near-best performance across all datasets, demonstrating strong domain-invariant capability. DIVER yields at least a 9% improvement over SOTA methods in UCIQE. On the low-light SeaThru dataset, where color-palette references enable direct evaluation of color restoration, DIVER achieves at least a 4.9% reduction in GPMAE compared to existing methods. Beyond visual quality, DIVER also improves robotic perception by enhancing ORB-based keypoint repeatability and matching performance, confirming its robustness across diverse underwater environments.
|
https://arxiv.org/abs/2601.22878
|
Academic Papers
|
svg
|
1df5dc161377ef0964b10be6c793090bbc495dcb25b8b286cfce40a9b4101041
|
2026-02-02T00:00:00-05:00
|
Persuasive Privacy
|
arXiv:2601.22945v1 Announce Type: cross Abstract: We propose a novel framework for measuring privacy from a Bayesian game-theoretic perspective. This framework enables the creation of new, purpose-driven privacy definitions that are rigorously justified, while also allowing for the assessment of existing privacy guarantees through game theory. We show that pure and probabilistic differential privacy are special cases of our framework, and provide new interpretations of the post-processing inequality in this setting. Further, we demonstrate that privacy guarantees can be established for deterministic algorithms, which are overlooked by current privacy standards.
|
https://arxiv.org/abs/2601.22945
|
Academic Papers
|
svg
|
b6805de8d27f067abaedf7d75314a075df3064ff0d36a8349a29c0f94efa724e
|
2026-02-02T00:00:00-05:00
|
OneFlowSBI: One Model, Many Queries for Simulation-Based Inference
|
arXiv:2601.22951v1 Announce Type: cross Abstract: We introduce \textit{OneFlowSBI}, a unified framework for simulation-based inference that learns a single flow-matching generative model over the joint distribution of parameters and observations. Leveraging a query-aware masking distribution during training, the same model supports multiple inference tasks, including posterior sampling, likelihood estimation, and arbitrary conditional distributions, without task-specific retraining. We evaluate \textit{OneFlowSBI} on ten benchmark inference problems and two high-dimensional real-world inverse problems across multiple simulation budgets. \textit{OneFlowSBI} is shown to deliver competitive performance against state-of-the-art generalized inference solvers and specialized posterior estimators, while enabling efficient sampling with few ODE integration steps and remaining robust under noisy and partially observed data.
|
https://arxiv.org/abs/2601.22951
|
Academic Papers
|
svg
|
6bcbb4f1bee2b4b7cabcc44e0bb4aabb0d82496f98fb8278fbf4ab661f17a196
|
2026-02-02T00:00:00-05:00
|
Neural Backward Filtering Forward Guiding
|
arXiv:2601.23030v1 Announce Type: cross Abstract: Inference in non-linear continuous stochastic processes on trees is challenging, particularly when observations are sparse (leaf-only) and the topology is complex. Exact smoothing via Doob's $h$-transform is intractable for general non-linear dynamics, while particle-based methods degrade in high dimensions. We propose Neural Backward Filtering Forward Guiding (NBFFG), a unified framework for both discrete transitions and continuous diffusions. Our method constructs a variational posterior by leveraging an auxiliary linear-Gaussian process. This auxiliary process yields a closed-form backward filter that serves as a ``guide'', steering the generative path toward high-likelihood regions. We then learn a neural residual--parameterized as a normalizing flow or a controlled SDE--to capture the non-linear discrepancies. This formulation allows for an unbiased path-wise subsampling scheme, reducing the training complexity from tree-size dependent to path-length dependent. Empirical results show that NBFFG outperforms baselines on synthetic benchmarks, and we demonstrate the method on a high-dimensional inference task in phylogenetic analysis with reconstruction of ancestral butterfly wing shapes.
|
https://arxiv.org/abs/2601.23030
|
Academic Papers
|
svg
|
aadc8cbfa4944d2bd6b1711e91fa8359a217d32a1b72a63098b79f6c95312d24
|
2026-02-02T00:00:00-05:00
|
Asymptotic Theory of Iterated Empirical Risk Minimization, with Applications to Active Learning
|
arXiv:2601.23031v1 Announce Type: cross Abstract: We study a class of iterated empirical risk minimization (ERM) procedures in which two successive ERMs are performed on the same dataset, and the predictions of the first estimator enter as an argument in the loss function of the second. This setting, which arises naturally in active learning and reweighting schemes, introduces intricate statistical dependencies across samples and fundamentally distinguishes the problem from classical single-stage ERM analyses. For linear models trained with a broad class of convex losses on Gaussian mixture data, we derive a sharp asymptotic characterization of the test error in the high-dimensional regime where the sample size and ambient dimension scale proportionally. Our results provide explicit, fully asymptotic predictions for the performance of the second-stage estimator despite the reuse of data and the presence of prediction-dependent losses. We apply this theory to revisit a well-studied pool-based active learning problem, removing oracle and sample-splitting assumptions made in prior work. We uncover a fundamental tradeoff in how the labeling budget should be allocated across stages, and demonstrate a double-descent behavior of the test error driven purely by data selection, rather than model size or sample count.
|
https://arxiv.org/abs/2601.23031
|
Academic Papers
|
svg
|
8def742d53b3e6aa5356b0074f454f9809350b1242f4ac2c5b4cf6ae39b3e052
|
2026-02-02T00:00:00-05:00
|
Scale Equivariance Regularization and Feature Lifting in High Dynamic Range Modulo Imaging
|
arXiv:2601.23037v1 Announce Type: cross Abstract: Modulo imaging enables high dynamic range (HDR) acquisition by cyclically wrapping saturated intensities, but accurate reconstruction remains challenging due to ambiguities between natural image edges and artificial wrap discontinuities. This work proposes a learning-based HDR restoration framework that incorporates two key strategies: (i) a scale-equivariant regularization that enforces consistency under exposure variations, and (ii) a feature lifting input design combining the raw modulo image, wrapped finite differences, and a closed-form initialization. Together, these components enhance the network's ability to distinguish true structure from wrapping artifacts, yielding state-of-the-art performance across perceptual and linear HDR quality metrics.
|
https://arxiv.org/abs/2601.23037
|
Academic Papers
|
svg
|
97468d6b0a0bb0829421e5c88f6b0b73fd18e104586a55ceb49e0ce888c5437d
|
2026-02-02T00:00:00-05:00
|
Learning-Based Signal Recovery in Nonlinear Systems with Spectrally Separated Interference
|
arXiv:2601.23076v1 Announce Type: cross Abstract: Upper Mid-Band (FR3, 7-24 GHz) receivers for 6G must operate over wide bandwidths in dense spectral environments, making them particularly vulnerable to strong adjacent-band interference and front-end nonlinearities. While conventional linear receivers can suppress spectrally separated interferers under ideal hardware assumptions, receiver saturation and finite-resolution quantization cause nonlinear spectral leakage that severely degrades performance in practical wideband radios. We study the recovery of a desired signal from nonlinear receiver observations corrupted by a high-power out-of-band interferer. The receiver front-end is modeled as a smooth, memoryless nonlinearity followed by additive noise and optional quantization. To mitigate these nonlinear and quantization-induced distortions, we propose a learned multi-layer Vector Approximate Message Passing (LMLVAMP) algorithm that incorporates spectral priors with neural network based denoising. Simulation results demonstrate significant performance gains over conventional methods, particularly in high-interference regimes representative of FR3 coexistence scenarios.
|
https://arxiv.org/abs/2601.23076
|
Academic Papers
|
svg
|
0902835b4ef711949c20eb31ad2c26d081ffc3f941d22ceb214afc2d58a351b3
|
2026-02-02T00:00:00-05:00
|
Vision-Language Controlled Deep Unfolding for Joint Medical Image Restoration and Segmentation
|
arXiv:2601.23103v1 Announce Type: cross Abstract: We propose VL-DUN, a principled framework for joint All-in-One Medical Image Restoration and Segmentation (AiOMIRS) that bridges the gap between low-level signal recovery and high-level semantic understanding. While standard pipelines treat these tasks in isolation, our core insight is that they are fundamentally synergistic: restoration provides clean anatomical structures to improve segmentation, while semantic priors regularize the restoration process. VL-DUN resolves the sub-optimality of sequential processing through two primary innovations. (1) We formulate AiOMIRS as a unified optimization problem, deriving an interpretable joint unfolding mechanism where restoration and segmentation are mathematically coupled for mutual refinement. (2) We introduce a frequency-aware Mamba mechanism to capture long-range dependencies for global segmentation while preserving the high-frequency textures necessary for restoration. This allows for efficient global context modeling with linear complexity, effectively mitigating the spectral bias of standard architectures. As a pioneering work in the AiOMIRS task, VL-DUN establishes a new state-of-the-art across multi-modal benchmarks, improving PSNR by 0.92 dB and the Dice coefficient by 9.76\%. Our results demonstrate that joint collaborative learning offers a superior, more robust solution for complex clinical workflows compared to isolated task processing. The code is available at https://github.com/cipi666/VLDUN.
|
https://arxiv.org/abs/2601.23103
|
Academic Papers
|
svg
|
937594108ecc0dae8e6d803d81a190b4c55d79a0a3f900c869a09c7b1ad862ca
|
2026-02-02T00:00:00-05:00
|
Interpolation Techniques for Fast Channel Estimation in Ray Tracing
|
arXiv:2601.23119v1 Announce Type: cross Abstract: Ray tracing is increasingly utilized in wireless system simulations to estimate channel paths. In large-scale simulations with complex environments, ray tracing at high resolution can be computationally demanding. To reduce the computation, this paper presents a novel method for conducting ray tracing at a coarse set of reference points and interpolating the channels at other locations. The key insight is to interpolate the images of reflected points. In addition to the computational savings, the method directly captures the spherical nature of each wavefront, enabling fast and accurate computation of channels using line-of-sight MIMO and other wide aperture techniques. Through empirical validation and comparison with exhaustive ray tracing, we demonstrate the efficacy and practicality of our approach in achieving high-fidelity channel predictions with reduced computational resources.
|
https://arxiv.org/abs/2601.23119
|
Academic Papers
|
svg
|
c44b3f88164196a63f2212a56f8e9e2faad420c6eff6286c20be9ae5ff6e28cf
|
2026-02-02T00:00:00-05:00
|
Compressed BC-LISTA via Low-Rank Convolutional Decomposition
|
arXiv:2601.23148v1 Announce Type: cross Abstract: We study Sparse Signal Recovery (SSR) methods for multichannel imaging with compressed forward and backward operators that preserve reconstruction accuracy. We propose a Compressed Block-Convolutional (C-BC) measurement model based on a low-rank Convolutional Neural Network (CNN) decomposition that is analytically initialized from a low-rank factorization of physics-derived forward/backward operators in time delay-based measurements. We use Orthogonal Matching Pursuit (OMP) to select a compact set of basis filters from the analytic model and compute linear mixing coefficients to approximate the full model. We consider the Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) network as a representative example for which the C-BC-LISTA extension is presented. In simulated multichannel ultrasound imaging across multiple Signal-to-Noise Ratios (SNRs), C-BC-LISTA requires substantially fewer parameters and smaller model size than other state-of-the-art (SOTA) methods while improving reconstruction accuracy. In ablations over OMP, Singular Value Decomposition (SVD)-based, and random initializations, OMP-initialized structured compression performs best, yielding the most efficient training and the best performance.
|
https://arxiv.org/abs/2601.23148
|
Academic Papers
|
svg
|
acf4109283bdfe6bb46a8bfd95e51912809c72fecd2c73f366cd9cdacd3cf9d6
|
2026-02-02T00:00:00-05:00
|
Scale-Cascaded Diffusion Models for Super-Resolution in Medical Imaging
|
arXiv:2601.23201v1 Announce Type: cross Abstract: Diffusion models have been increasingly used as strong generative priors for solving inverse problems such as super-resolution in medical imaging. However, these approaches typically utilize a diffusion prior trained at a single scale, ignoring the hierarchical scale structure of image data. In this work, we propose to decompose images into Laplacian pyramid scales and train separate diffusion priors for each frequency band. We then develop an algorithm to perform super-resolution that utilizes these priors to progressively refine reconstructions across different scales. Evaluated on brain, knee, and prostate MRI data, our approach both improves perceptual quality over baselines and reduces inference time through smaller coarse-scale networks. Our framework unifies multiscale reconstruction and diffusion priors for medical image super-resolution.
|
https://arxiv.org/abs/2601.23201
|
Academic Papers
|
svg
|
678bccb9687f587b4ddd7242d7c808e842452371da3c37565c170172cfbfe696
|
2026-02-02T00:00:00-05:00
|
A Random Matrix Theory of Masked Self-Supervised Regression
|
arXiv:2601.23208v1 Announce Type: cross Abstract: In the era of transformer models, masked self-supervised learning (SSL) has become a foundational training paradigm. A defining feature of masked SSL is that training aggregates predictions across many masking patterns, giving rise to a joint, matrix-valued predictor rather than a single vector-valued estimator. This object encodes how coordinates condition on one another and poses new analytical challenges. We develop a precise high-dimensional analysis of masked modeling objectives in the proportional regime where the number of samples scales with the ambient dimension. Our results provide explicit expressions for the generalization error and characterize the spectral structure of the learned predictor, revealing how masked modeling extracts structure from data. For spiked covariance models, we show that the joint predictor undergoes a Baik--Ben Arous--P\'ech\'e (BBP)-type phase transition, identifying when masked SSL begins to recover latent signals. Finally, we identify structured regimes in which masked self-supervised learning provably outperforms PCA, highlighting potential advantages of SSL objectives over classical unsupervised methods.
|
https://arxiv.org/abs/2601.23208
|
Academic Papers
|
svg
|
93e198f033c39bb46442f99161ec81a7b50e6a2b6a21c17d3d3d36b8ac60b518
|
2026-02-02T00:00:00-05:00
|
Disentangling multispecific antibody function with graph neural networks
|
arXiv:2601.23212v1 Announce Type: cross Abstract: Multispecific antibodies offer transformative therapeutic potential by engaging multiple epitopes simultaneously, yet their efficacy is an emergent property governed by complex molecular architectures. Rational design is often bottlenecked by the inability to predict how subtle changes in domain topology influence functional outcomes, a challenge exacerbated by the scarcity of comprehensive experimental data. Here, we introduce a computational framework to address part of this gap. First, we present a generative method for creating large-scale, realistic synthetic functional landscapes that capture non-linear interactions where biological activity depends on domain connectivity. Second, we propose a graph neural network architecture that explicitly encodes these topological constraints, distinguishing between format configurations that appear identical to sequence-only models. We demonstrate that this model, trained on synthetic landscapes, recapitulates complex functional properties and, via transfer learning, has the potential to achieve high predictive accuracy on limited biological datasets. We showcase the model's utility by optimizing trade-offs between efficacy and toxicity in trispecific T-cell engagers and retrieving optimal common light chains. This work provides a robust benchmarking environment for disentangling the combinatorial complexity of multispecifics, accelerating the design of next-generation therapeutics.
|
https://arxiv.org/abs/2601.23212
|
Academic Papers
|
svg
|
d2b904840f1e59a5ed615bab75ff9c6a40a79817aba96bfd038a255545201799
|
2026-02-02T00:00:00-05:00
|
Solving Inverse Problems with Flow-based Models via Model Predictive Control
|
arXiv:2601.23231v1 Announce Type: cross Abstract: Flow-based generative models provide strong unconditional priors for inverse problems, but guiding their dynamics for conditional generation remains challenging. Recent work casts training-free conditional generation in flow models as an optimal control problem; however, solving the resulting trajectory optimisation is computationally and memory intensive, requiring differentiation through the flow dynamics or adjoint solves. We propose MPC-Flow, a model predictive control framework that formulates inverse problem solving with flow-based generative models as a sequence of control sub-problems, enabling practical optimal control-based guidance at inference time. We provide theoretical guarantees linking MPC-Flow to the underlying optimal control objective and show how different algorithmic choices yield a spectrum of guidance algorithms, including regimes that avoid backpropagation through the generative model trajectory. We evaluate MPC-Flow on benchmark image restoration tasks, spanning linear and non-linear settings such as in-painting, deblurring, and super-resolution, and demonstrate strong performance and scalability to massive state-of-the-art architectures via training-free guidance of FLUX.2 (32B) in a quantised setting on consumer hardware.
|
https://arxiv.org/abs/2601.23231
|
Academic Papers
|
svg
|
e66d59a063a8d45c9f93fa711c7fb4b14474791c5d79142006442532b60825d9
|
2026-02-02T00:00:00-05:00
|
Graph Attention Network for Node Regression on Random Geometric Graphs with Erd\H{o}s--R\'enyi contamination
|
arXiv:2601.23239v1 Announce Type: cross Abstract: Graph attention networks (GATs) are widely used and often appear robust to noise in node covariates and edges, yet rigorous statistical guarantees demonstrating a provable advantage of GATs over non-attention graph neural networks~(GNNs) are scarce. We partially address this gap for node regression with graph-based errors-in-variables models under simultaneous covariate and edge corruption: responses are generated from latent node-level covariates, but only noise-perturbed versions of the latent covariates are observed; and the sample graph is a random geometric graph created from the node covariates but contaminated by independent Erd\H{o}s--R\'enyi edges. We propose and analyze a carefully designed, task-specific GAT that constructs denoised proxy features for regression. We prove that regressing the response variables on the proxies achieves lower error asymptotically in (a) estimating the regression coefficient compared to the ordinary least squares (OLS) estimator on the noisy node covariates, and (b) predicting the response for an unlabelled node compared to a vanilla graph convolutional network~(GCN) -- under mild growth conditions. Our analysis leverages high-dimensional geometric tail bounds and concentration for neighbourhood counts and sample covariances. We verify our theoretical findings through experiments on synthetically generated data. We also perform experiments on real-world graphs and demonstrate the effectiveness of the attention mechanism in several node regression tasks.
|
https://arxiv.org/abs/2601.23239
|
Academic Papers
|
svg
|
645c21d3fe7c8713542173f3afc12dd4f8520e60fbe87bbaae4274a903b4724e
|
2026-02-02T00:00:00-05:00
|
Nested Slice Sampling: Vectorized Nested Sampling for GPU-Accelerated Inference
|
arXiv:2601.23252v1 Announce Type: cross Abstract: Model comparison and calibrated uncertainty quantification often require integrating over parameters, but scalable inference can be challenging for complex, multimodal targets. Nested Sampling is a robust alternative to standard MCMC, yet its typically sequential structure and hard constraints make efficient accelerator implementations difficult. This paper introduces Nested Slice Sampling (NSS), a GPU-friendly, vectorized formulation of Nested Sampling that uses Hit-and-Run Slice Sampling for constrained updates. A tuning analysis yields a simple near-optimal rule for setting the slice width, improving high-dimensional behavior and making per-step compute more predictable for parallel execution. Experiments on challenging synthetic targets, high dimensional Bayesian inference, and Gaussian process hyperparameter marginalization show that NSS maintains accurate evidence estimates and high-quality posterior samples, and is particularly robust on difficult multimodal problems where current state-of-the-art methods such as tempered SMC baselines can struggle. An open-source implementation is released to facilitate adoption and reproducibility.
|
https://arxiv.org/abs/2601.23252
|
Academic Papers
|
svg
|
67f8574de0e8a3dbbbb94c763dd6f5690c01148db25d6e567b4a35270f38555e
|
2026-02-02T00:00:00-05:00
|
Denoising the Deep Sky: Physics-Based CCD Noise Formation for Astronomical Imaging
|
arXiv:2601.23276v1 Announce Type: cross Abstract: Astronomical imaging remains noise-limited under practical observing constraints, while standard calibration pipelines mainly remove structured artifacts and leave stochastic noise largely unresolved. Learning-based denoising is promising, yet progress is hindered by scarce paired training data and the need for physically interpretable and reproducible models in scientific workflows. We propose a physics-based noise synthesis framework tailored to CCD noise formation. The pipeline models photon shot noise, photo-response non-uniformity, dark-current noise, readout effects, and localized outliers arising from cosmic-ray hits and hot pixels. To obtain low-noise inputs for synthesis, we average multiple unregistered exposures to produce high-SNR bases. Realistic noisy counterparts synthesized from these bases using our noise model enable the construction of abundant paired datasets for supervised learning. We further introduce a real-world dataset across multi-bands acquired with two twin ground-based telescopes, providing paired raw frames and instrument-pipeline calibrated frames, together with calibration data and stacked high-SNR bases for real-world evaluation.
|
https://arxiv.org/abs/2601.23276
|
Academic Papers
|
svg
|
12fb07a8908345b4ccd4d0fa4d6b13ffd98e3f42968ba6b0b0462625fb225022
|
2026-02-02T00:00:00-05:00
|
Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning
|
arXiv:2302.02662v5 Announce Type: replace Abstract: Recent works successfully leveraged Large Language Models' (LLM) abilities to capture abstract knowledge about the world's physics to solve decision-making problems. Yet, the alignment between LLMs' knowledge and the environment can be wrong and limit functional competence due to a lack of grounding. In this paper, we study an approach (named GLAM) to achieve this alignment through functional grounding: we consider an agent using an LLM as a policy that is progressively updated as the agent interacts with the environment, leveraging online Reinforcement Learning to improve its ability to solve goals. Using an interactive textual environment designed to study higher-level forms of functional grounding, and a set of spatial and navigation tasks, we study several scientific questions: 1) Can LLMs boost sample efficiency for online learning of various RL tasks? 2) How can they boost different forms of generalization? 3) What is the impact of online learning? We study these questions by functionally grounding several variants (size, architecture) of FLAN-T5.
|
https://arxiv.org/abs/2302.02662
|
Academic Papers
|
svg
|
6e8dfd57e8edf65c19266bbf3cb5600b7ff4b8fd3b9a593d8c479e8f19f6e2b0
|
2026-02-02T00:00:00-05:00
|
A Cheeger Inequality for Size-Specific Conductance
|
arXiv:2303.11452v2 Announce Type: replace Abstract: The $\mu$-conductance measure proposed by Lov\'asz and Simonovits is a size-specific conductance score that identifies the set with smallest conductance while disregarding those sets with volume smaller than a $\mu$ fraction of the whole graph. Using $\mu$-conductance enables us to study the network structures in new ways. In this manuscript we study a modified spectral cut for $\mu$-conductance that is a natural relaxation of the integer program of $\mu$-conductance and show that the optimum of this program has a two-sided Cheeger inequality with $\mu$-conductance.
|
https://arxiv.org/abs/2303.11452
|
Academic Papers
|
svg
|
87d0a06dc45dbf55a5bdc3095223d0c84a189be854ebbb7196b5f2e173d51064
|
2026-02-02T00:00:00-05:00
|
On The Relationship Between Continual Learning and Long-Tailed Recognition
|
arXiv:2306.13275v2 Announce Type: replace Abstract: Real-world datasets often exhibit long-tailed distributions, where a few dominant "Head" classes have abundant samples while most "Tail" classes are severely underrepresented, leading to biased learning and poor generalization for the Tail. We present a theoretical framework that reveals a previously undescribed connection between Long-Tailed Recognition (LTR) and Continual Learning (CL), the process of learning sequential tasks without forgetting prior knowledge. Our analysis demonstrates that, for models trained on imbalanced datasets, the weights converge to a bounded neighborhood of those trained exclusively on the Head, with the bound scaling as the inverse square root of the imbalance factor. Leveraging this insight, we introduce Continual Learning for Long-Tailed Recognition (CLTR), a principled approach that employs standard off-the-shelf CL methods to address LTR problems by sequentially learning Head and Tail classes without forgetting the Head. Our theoretical analysis further suggests that CLTR mitigates gradient saturation and improves Tail learning while maintaining strong Head performance. Extensive experiments on CIFAR100-LT, CIFAR10-LT, ImageNet-LT, and Caltech256 validate our theoretical predictions, achieving strong results across various LTR benchmarks. Our work bridges the gap between LTR and CL, providing a principled way to tackle imbalanced data challenges with standard existing CL strategies.
|
https://arxiv.org/abs/2306.13275
|
Academic Papers
|
svg
|
bbe66026f9d9e71189d3febf7e3b9bf385929a6b7973d61a55a3c64a57678e7b
|
2026-02-02T00:00:00-05:00
|
The complexity of solving a system of equations of the same degree
|
arXiv:2309.03855v3 Announce Type: replace Abstract: Many systems of interest in cryptography consist of equations of the same degree. Under the assumption that the degree of regularity is finite, we prove upper bounds on the degree of regularity of a system of equations of the same degree, with or without adding the field equations to the system. The bounds translate into upper bounds on the solving degree of the systems, and hence on the complexity of solving them via Gr\"obner bases methods. Our bounds depend on the number of equations in the system, the number of variables, and the degree of the equations.
|
https://arxiv.org/abs/2309.03855
|
Academic Papers
|
svg
|
6503f1c6934b8be105c55b77db1b6cc21ef74651c03c7fdac11c8310a66d31d3
|
2026-02-02T00:00:00-05:00
|
Exploring and Analyzing the Effect of Avatar's Realism on Anxiety of English as Second Language (ESL) Speakers
|
arXiv:2311.05126v2 Announce Type: replace Abstract: Virtual avatars are increasingly used to support cross-cultural communication, yet their impact on communication anxiety among English as a Second Language (ESL) speakers remains underexplored. This study examines how avatar realism influences anxiety during English interactions between ESL speakers and native speakers. We conducted a controlled laboratory study in which Mandarin-speaking ESL participants engaged in guided one-on-one conversations under three visual representation conditions: live video, cartoon-like avatars, and realistic-like avatars. Anxiety was assessed using self-reported surveys and physiological signals, including electrodermal activity (EDA), electrocardiography (ECG), and photoplethysmography (PPG). The results show that increased visual realism does not correspond to a monotonic change in anxiety. Live video was the most preferred and was associated with the lowest self-reported anxiety. Cartoon-like avatars exhibited physiological anxiety levels comparable to live video and lower than realistic-like avatars, whereas realistic-like avatars elicited elevated anxiety across measures. These findings suggest that an effective avatar design for ESL communication should prioritize clarity of social signaling, reduced perceived social threat, and alignment between visual representation and interaction context, rather than visual realism alone.
|
https://arxiv.org/abs/2311.05126
|
Academic Papers
|
svg
|
e48ebe6418bec6b02aa856b00d29826b5dd3f478682b6d5d2f5b4b483b915908
|
2026-02-02T00:00:00-05:00
|
Symmetry-Enforced Quadratic Degradability Beyond Low Dimensions
|
arXiv:2401.16312v5 Announce Type: replace Abstract: Approximate degradability provides a powerful framework for bounding the quantum and private capacities of noisy quantum channels in regimes where exact degradability fails. While generic low-noise channels exhibit a non-degradability parameter that decays as a fractional power of the noise strength, certain symmetric channels are known to display an enhanced quadratic suppression. In this work, we investigate the structural origin of this phenomenon through a family of high-dimensional, rotationally symmetric noise models constructed from angular momentum operators. We first establish that the pure noise component of these channels is maximally distinguishable from the identity channel in diamond norm, revealing a geometric orthogonality between signal and noise. Building on this structure, we construct an explicit symmetric degrading map and prove that the approximate degradability parameter scales quadratically with the noise parameter for all system dimensions. To clarify the mechanism behind this behavior, we identify algebraic conditions on the noise operators that guarantee the cancellation of leading-order non-degradability terms. These conditions apply not only to the rotationally symmetric model studied here, but also to a distinct family of high-dimensional depolarizing channels based on discrete unitary operator bases. Numerical evaluations of capacity lower bounds further illustrate the practical impact of the quadratic suppression. Together, these results demonstrate that enhanced approximate degradability arises from symmetry-induced orthogonality and invariance properties, rather than from low-dimensional or model-specific effects.
|
https://arxiv.org/abs/2401.16312
|
Academic Papers
|
svg
|
12b16deaa66e367f4a2d31794e8cfb25c683903c5e29bcd92424dce59e9ece5b
|
2026-02-02T00:00:00-05:00
|
Estimating the Decoding Failure Rate of Binary Regular Codes Using Iterative Decoding
|
arXiv:2401.16919v4 Announce Type: replace Abstract: Providing closed-form estimates of the decoding failure rate of iterative decoders for low- and moderate-density binary parity-check codes has attracted significant interest in the research community. Recently, interest in this topic has increased due to the use of iterative decoders in post-quantum cryptosystems, where the desired decoding failure rates (DFRs) are less than or equal to $2^{-128}$ and impossible to estimate via Monte Carlo simulations. We propose a new technique that provides accurate DFR estimates for a two-iteration (parallel) bit-flipping decoder that can be used for cryptographic purposes. We estimate the bit-flipping probabilities at the second decoder iteration and the syndrome weight distribution before and after the first iteration as a function of the code parameters and error weight. We validate our results numerically by comparing the modelled and simulated syndrome weights, the incorrectly guessed error bit distribution at the end of the first iteration, and the DFR after two iterations in both the floor and waterfall regimes. Finally, we apply our method to estimate the DFR of the LEDAcrypt cryptographic system, a post-quantum key encapsulation method that employs a two-iteration bit-flipping decoder. We show that the DFR estimate resulting from the chosen code parameters can be improved by a factor larger than $2^{70}$ with respect to previous estimation techniques, when $128$-bit security is required. This allows for a $20$% reduction in public key and ciphertext sizes at no security loss. We note that our results can be applied to the post-quantum cryptosystem known as Bit Flipping Key Encapsulation (BIKE) replacing the current ``BIKE-flip decoder'' with the two-iteration decoder and consequently endowing BIKE with the property of indistinguishability under an adaptive chosen-ciphertext attack (IND-CCA$2$), provably.
|
https://arxiv.org/abs/2401.16919
|
Academic Papers
|
svg
|
b17454c2fb4c1aacecaf2ecfc2ce9ad4087c8787a133b3f5fd979d5caf6052c1
|
2026-02-02T00:00:00-05:00
|
XAI-CF -- Examining the Role of Explainable Artificial Intelligence in Cyber Forensics
|
arXiv:2402.02452v3 Announce Type: replace Abstract: With the rise of complex cyber devices, Cyber Forensics (CF) is facing many new challenges. For example, there are dozens of systems running on smartphones, each with millions of downloadable applications. Sifting through this large amount of data and making sense of it requires new techniques, such as from the field of Artificial Intelligence (AI). To apply these techniques successfully in CF, we need to justify and explain the results to the stakeholders of CF, such as forensic analysts and members of the court, for them to make an informed decision. If we want to apply AI successfully in CF, there is a need to develop trust in AI systems. Some other factors in accepting the use of AI in CF are to make AI authentic, interpretable, understandable, and interactive. This way, AI systems will be more acceptable to the public and ensure alignment with legal standards. An explainable AI (XAI) system can play this role in CF, and we call such a system XAI-CF. XAI-CF is indispensable and is still in its infancy. In this paper, we explore and make a case for the significance and advantages of XAI-CF. We strongly emphasize the need to build a successful and practical XAI-CF system and discuss some of the main requirements and prerequisites of such a system. We present a formal definition of the terms CF and XAI-CF and a comprehensive literature review of previous works that apply and utilize XAI to build and increase trust in CF. We discuss some challenges facing XAI-CF. We also provide some concrete solutions to these challenges. We identify key insights and future research directions for building XAI applications for CF. This paper is an effort to explore and familiarize the readers with the role of XAI applications in CF, and we believe that our work provides a promising basis for future researchers interested in XAI-CF.
|
https://arxiv.org/abs/2402.02452
|
Academic Papers
|
svg
|
4c39572a6d27690ed36a8f0ce24ea2f1090707a477069bfd78679e3ddb9560d2
|
2026-02-02T00:00:00-05:00
|
TorchCP: A Python Library for Conformal Prediction
|
arXiv:2402.12683v5 Announce Type: replace Abstract: Conformal prediction (CP) is a powerful statistical framework that generates prediction intervals or sets with guaranteed coverage probability. While CP algorithms have evolved beyond traditional classifiers and regressors to sophisticated deep learning models like deep neural networks (DNNs), graph neural networks (GNNs), and large language models (LLMs), existing CP libraries often lack the model support and scalability for large-scale deep learning (DL) scenarios. This paper introduces TorchCP, a PyTorch-native library designed to integrate state-of-the-art CP algorithms into DL techniques, including DNN-based classifiers/regressors, GNNs, and LLMs. Released under the LGPL-3.0 license, TorchCP comprises about 16k lines of code, validated with 100\% unit test coverage and detailed documentation. Notably, TorchCP enables CP-specific training algorithms, online prediction, and GPU-accelerated batch processing, achieving up to 90\% reduction in inference time on large datasets. With its low-coupling design, comprehensive suite of advanced methods, and full GPU scalability, TorchCP empowers researchers and practitioners to enhance uncertainty quantification across cutting-edge applications.
|
https://arxiv.org/abs/2402.12683
|
Academic Papers
|
svg
|
bf48511b30b8d89e77f9576ff209eadafcfe1a85976456ab89c4a398d6a6c384
|
2026-02-02T00:00:00-05:00
|
OMGEval: An Open Multilingual Generative Evaluation Benchmark for Large Language Models
|
arXiv:2402.13524v2 Announce Type: replace Abstract: Modern large language models (LLMs) should generally benefit individuals from various cultural backgrounds around the world. However, most recent advanced generative evaluation benchmarks tailored for LLMs mainly focus on English. To address this, we introduce OMGEval, the first Open-source Multilingual Generative test set that can assess the capability of LLMs in different languages. For each language, OMGEval provides 804 open-ended questions, covering a wide range of important capabilities of LLMs, such as general knowledge, logical reasoning, and so on. Each question is rigorously verified by human annotators. Notably, to sufficiently reflect the compatibility of LLMs in different cultural backgrounds, we perform localization for each non-English language. Specifically, the current version of OMGEval includes 5 languages (i.e., Zh, Ru, Fr, Es, Ar). Following AlpacaEval, we employ GPT-4 as the adjudicator to automatically score different model outputs, which is shown to be closely related to human evaluation. We evaluate several representative multilingual LLMs on the proposed OMGEval, which we believe will provide a valuable reference for the community to further understand and improve the multilingual capability of LLMs. OMGEval is available at https://github.com/blcuicall/OMGEval.
|
https://arxiv.org/abs/2402.13524
|
Academic Papers
|
svg
|
5d97d45238e52b301cef0cfa04c552c0d9e09cbec8ffda73ecc6336116b12ed6
|
2026-02-02T00:00:00-05:00
|
Can Distillation Mitigate Backdoor Attacks in Pre-trained Encoders?
|
arXiv:2403.03846v2 Announce Type: replace Abstract: Self-Supervised Learning (SSL) has become a prominent paradigm for pre-training encoders to learn general-purpose representations from unlabeled data and for releasing them on third-party platforms for broad downstream deep learning tasks. However, SSL is vulnerable to backdoor attacks, where an adversary may train and distribute poisoned pre-training encoders to contaminate the downstream models. In this paper, we study a defense mechanism based on distillation against poisoned encoders in SSL. Traditionally, distillation transfers knowledge from a pre-trained teacher model to a student model, enabling the student to replicate or refine the teacher's learned representations. We repurpose distillation to extract benign knowledge and remove backdoors from a poisoned pre-trained encoder to produce a clean and reliable pre-trained model. We conduct extensive experiments to evaluate the effectiveness of distillation in mitigating backdoor attacks on pre-trained encoders. Based on two state-of-the-art backdoor attacks and four widely adopted image classification datasets, our results demonstrate that distillation reduces the attack success rate from 80.87% to 27.51%, with only a 6.35% drop in model accuracy. Furthermore, by comparing four teacher architectures, three student models, and six loss functions, we find that the distillation with fine-tuned teacher networks, warm-up-based student training, and attention-based distillation losses yield the best performance.
|
https://arxiv.org/abs/2403.03846
|
Academic Papers
|
svg
|
69c3776f354046ae6e6e09eaba5a61a27558eae434b0847450585ba256e35c2a
|
2026-02-02T00:00:00-05:00
|
FlashFace: Human Image Personalization with High-fidelity Identity Preservation
|
arXiv:2403.17008v2 Announce Type: replace Abstract: This work presents FlashFace, a practical tool with which users can easily personalize their own photos on the fly by providing one or a few reference face images and a text prompt. Our approach is distinguishable from existing human photo customization methods by higher-fidelity identity preservation and better instruction following, benefiting from two subtle designs. First, we encode the face identity into a series of feature maps instead of one image token as in prior arts, allowing the model to retain more details of the reference faces (e.g., scars, tattoos, and face shape). Second, we introduce a disentangled integration strategy to balance the text and image guidance during the text-to-image generation process, alleviating the conflict between the reference faces and the text prompts (e.g., personalizing an adult into a "child" or an "elder"). Extensive experimental results demonstrate the effectiveness of our method on various applications, including human image personalization, face swapping under language prompts, making virtual characters into real people, etc. Project Page: https://jshilong.github.io/flashface-page.
|
https://arxiv.org/abs/2403.17008
|
Academic Papers
|
svg
|
3f628e137b14bf92e055f91f7e5c47aea544f82552428f184b5c99eec6d97fc1
|
2026-02-02T00:00:00-05:00
|
Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy
|
arXiv:2404.15950v3 Announce Type: replace Abstract: We study the parameterized complexity of a generalization of the coordinated motion planning problem on graphs, where the goal is to route a specified subset of a given set of $k$ robots to their destinations with the aim of minimizing the total energy (i.e., the total length traveled). We develop novel techniques to push beyond previously-established results that were restricted to solid grids. We design a fixed-parameter additive approximation algorithm for this problem parameterized by $k$ alone. This result, which is of independent interest, allows us to prove the following two results pertaining to well-studied coordinated motion planning problems: (1) A fixed-parameter algorithm, parameterized by $k$, for routing a single robot to its destination while avoiding the other robots, which is related to the famous Rush-Hour Puzzle; and (2) a fixed-parameter algorithm, parameterized by $k$ plus the treewidth of the input graph, for the standard \textsc{Coordinated Motion Planning} (CMP) problem in which we need to route all the $k$ robots to their destinations. The latter of these results implies, among others, the fixed-parameter tractability of CMP parameterized by $k$ on graphs of bounded outerplanarity, which include bounded-height subgrids. We complement the above results with a lower bound which rules out the fixed-parameter tractability for CMP when parameterized by the total energy. This contrasts the recently-obtained tractability of the problem on solid grids under the same parameterization. As our final result, we strengthen the aforementioned fixed-parameter tractability to hold not only on solid grids but all graphs of bounded local treewidth -- a class including, among others, all graphs of bounded genus.
|
https://arxiv.org/abs/2404.15950
|
Academic Papers
|
svg
|
0d964c49445e2d2b3bb7e92886e9092a2992bbe2bf6a9f9f5d5696bf39e4c28d
|
2026-02-02T00:00:00-05:00
|
Implications of computer science theory for the simulation hypothesis
|
arXiv:2404.16050v4 Announce Type: replace Abstract: The simulation hypothesis has recently excited renewed interest in the physics and philosophy communities. However, the hypothesis specifically concerns \textit{computers} that simulate physical universes. So to formally investigate the hypothesis, we need to understand it in terms of computer science (CS) theory. In addition we need a formal way to couple CS theory with physics. Here I couple those fields by using the physical Church-Turing thesis. This allows me to exploit Kleene's second recursion theorem, to prove that not only is it possible for us to be a simulation being run on a computer, but that we might be in a simulation being run on a computer \emph{by us}. In such a ``self-simulation'', there would be two identical instances of us, both equally ``real''. I then use Rice's theorem to derive impossibility results concerning simulation and self-simulation; derive implications for (self-)simulation if we are being simulated in a program using fully homomorphic encryption; and briefly investigate the graphical structure of universes simulating other universes which contain computers running their own simulations. I end by describing some of the possible avenues for future research. While motivated in terms of the simulation hypothesis, the results in this paper are direct consequences of the Church-Turing thesis. So they apply far more broadly than the simulation hypothesis.
|
https://arxiv.org/abs/2404.16050
|
Academic Papers
|
svg
|
872c84e6ad332bb46465d9f6a7c768e4bb5203b3740d3e6755e7a741c3e4a394
|
2026-02-02T00:00:00-05:00
|
Finding patterns of meaning: Reassessing Construal Clustering via Bipolar Class Analysis
|
arXiv:2404.17042v3 Announce Type: replace Abstract: Empirical research on \textit{construals}--social affinity groups that share similar patterns of meaning--has advanced significantly in recent years. This progress is largely driven by the development of \textit{Construal Clustering Methods} (CCMs), which group survey respondents into construal clusters based on similarities in their response patterns. We identify key limitations of existing CCMs, which affect their accuracy when applied to the typical structures of available data, and introduce Bipolar Class Analysis (BCA), a CCM designed to address these shortcomings. BCA measures similarity in response shifts between expressions of support and rejection across survey respondents, addressing conceptual and measurement challenges in existing methods. We formally define BCA and demonstrate its advantages through extensive simulation analyses, where it consistently outperforms existing CCMs in accurately identifying construals. Along the way, we develop a novel data-generation process that approximates more closely how individuals map latent opinions onto observable survey responses, as well as a new metric to evaluate the performance of CCMs. Additionally, we find that applying BCA to previously studied real-world datasets reveals substantively different construal patterns compared to those generated by existing CCMs in prior empirical analyses. Finally, we discuss limitations of BCA and outline directions for future research.
|
https://arxiv.org/abs/2404.17042
|
Academic Papers
|
svg
|
b0cb5a31b7f4d850c06ef446bec2acf5a53bc25117e2eeac2be042996db08f9e
|
2026-02-02T00:00:00-05:00
|
Complexity Classes for Online Problems with and without Predictions
|
arXiv:2406.18265v3 Announce Type: replace Abstract: With the developments in machine learning, there has been a surge in interest and results focused on algorithms utilizing predictions, not least in online algorithms where most new results incorporate the prediction aspect for concrete online problems. While the structural computational hardness of problems with regards to time and space is quite well developed, not much is known about online problems where time and space resources are typically not in focus. Some information-theoretical insights were gained when researchers considered online algorithms with oracle advice, but predictions of uncertain quality are a very different matter. We initiate the development of a complexity theory for online problems with predictions, focusing on binary predictions for minimization problems. Based on the most generic hard online problem type, string guessing, we define a family of hierarchies of complexity classes (indexed by pairs of error measures) and develop notions of reductions, class membership, hardness, and completeness. Our framework contains all the tools one expects to find when working with complexity, and we illustrate our tools by analyzing problems with different characteristics. In addition, we show that known lower bounds for paging with discard predictions apply directly to all hard problems for each class in the hierarchy based on the canonical pair of error measures. This paging problem is not complete for these classes. Our work also implies corresponding complexity classes for classic online problems without predictions, with the corresponding complete problems.
|
https://arxiv.org/abs/2406.18265
|
Academic Papers
|
svg
|
11d33e905b8dc685d1fa544dddb915ff3cc9de57744110dd42a4c287852ae640
|
2026-02-02T00:00:00-05:00
|
Monocular pose estimation of articulated open surgery tools -- in the wild
|
arXiv:2407.12138v3 Announce Type: replace Abstract: This work presents a framework for monocular 6D pose estimation of surgical instruments in open surgery, addressing challenges such as object articulations, specularity, occlusions, and synthetic-to-real domain adaptation. The proposed approach consists of three main components: $(1)$ synthetic data generation pipeline that incorporates 3D scanning of surgical tools with articulation rigging and physically-based rendering; $(2)$ a tailored pose estimation framework combining tool detection with pose and articulation estimation; and $(3)$ a training strategy on synthetic and real unannotated video data, employing domain adaptation with automatically generated pseudo-labels. Evaluations conducted on real data of open surgery demonstrate the good performance and real-world applicability of the proposed framework, highlighting its potential for integration into medical augmented reality and robotic systems. The approach eliminates the need for extensive manual annotation of real surgical data.
|
https://arxiv.org/abs/2407.12138
|
Academic Papers
|
svg