id
stringlengths 64
64
| published
stringlengths 19
25
| title
stringlengths 7
262
| description
stringlengths 6
54.4k
| link
stringlengths 31
227
| category
stringclasses 6
values | image
stringlengths 3
247
|
|---|---|---|---|---|---|---|
c6bc73078df6999ea91af4ab5a5804d21e0340e7116db72ef3163b6ffae540ad
|
2026-01-16T00:00:00-05:00
|
A New Construction Structure on Multi-access Coded Caching with Linear Subpacketization: Cyclic Multi-Access Non-Half-Sum Disjoint Packing
|
arXiv:2601.10510v1 Announce Type: new Abstract: We consider the $(K,L,M,N)$ multi-access coded caching system introduced by Hachem et al., which consists of a central server with $N$ files and $K$ cache nodes, each of memory size $M$, where each user can access $L$ cache nodes in a cyclic wrap-around fashion. At present, several existing schemes achieve competitive transmission performance, but their subpacketization levels grow exponentially with the number of users. In contrast, schemes with linear or polynomial subpacketization always incur higher transmission loads. We aim to design a multi-access coded caching scheme with linear subpacketization $F$ while maintaining low transmission load. Recently, Cheng et al. proposed a construction framework for coded caching schemes with linear subpacketization (i.e., $F=K$) called non-half-sum disjoint packing (NHSDP). Inspired by this structure, we introduce a novel combinatorial structure named cyclic multi-access non-half-sum disjoint packing (CMA-NHSDP) by extending NHSDP to MACC system. By constructing CMA-NHSDP, we obtain a new class of multi-access coded caching schemes. Theoretical and numerical analyses show that our scheme achieves lower transmission loads than some existing schemes with linear subpacketization. Moreover, the proposed schemes achieves lower transmission load compared to existing schemes with exponential subpacketization in some case.
|
https://arxiv.org/abs/2601.10510
|
Academic Papers
|
svg
|
621c7b75d79795557a278774d001c1d18e2a0a00d00c5dac09dba1ec88d010fb
|
2026-01-16T00:00:00-05:00
|
Scalable Algorithms for Approximate DNF Model Counting
|
arXiv:2601.10511v1 Announce Type: new Abstract: Model counting of Disjunctive Normal Form (DNF) formulas is a critical problem in applications such as probabilistic inference and network reliability. For example, it is often used for query evaluation in probabilistic databases. Due to the computational intractability of exact DNF counting, there has been a line of research into a variety of approximation algorithms. These include Monte Carlo approaches such as the classical algorithms of Karp, Luby, and Madras (1989), as well as methods based on hashing (Soos et al. 2023), and heuristic approximations based on Neural Nets (Abboud, Ceylan, and Lukasiewicz 2020). We develop a new Monte Carlo approach with an adaptive stopping rule and short-circuit formula evaluation. We prove it achieves Probably Approximately Correct (PAC) learning bounds and is asymptotically more efficient than the previous methods. We also show experimentally that it out-performs prior algorithms by orders of magnitude, and can scale to much larger problems with millions of variables.
|
https://arxiv.org/abs/2601.10511
|
Academic Papers
|
svg
|
845040a3a63defa52b3cee37f0a9553a2dd1db64ba4cebe5719c029351e387ea
|
2026-01-16T00:00:00-05:00
|
SatMap: Revisiting Satellite Maps as Prior for Online HD Map Construction
|
arXiv:2601.10512v1 Announce Type: new Abstract: Online high-definition (HD) map construction is an essential part of a safe and robust end-to-end autonomous driving (AD) pipeline. Onboard camera-based approaches suffer from limited depth perception and degraded accuracy due to occlusion. In this work, we propose SatMap, an online vectorized HD map estimation method that integrates satellite maps with multi-view camera observations and directly predicts a vectorized HD map for downstream prediction and planning modules. Our method leverages lane-level semantics and texture from satellite imagery captured from a Bird's Eye View (BEV) perspective as a global prior, effectively mitigating depth ambiguity and occlusion. In our experiments on the nuScenes dataset, SatMap achieves 34.8% mAP performance improvement over the camera-only baseline and 8.5% mAP improvement over the camera-LiDAR fusion baseline. Moreover, we evaluate our model in long-range and adverse weather conditions to demonstrate the advantages of using a satellite prior map. Source code will be available at https://iv.ee.hm.edu/satmap/.
|
https://arxiv.org/abs/2601.10512
|
Academic Papers
|
svg
|
a507f1b1504560c056b209b4455040485b1d477a5d35ccae2fad84718f6835f2
|
2026-01-16T00:00:00-05:00
|
AEQ-Bench: Measuring Empathy of Omni-Modal Large Models
|
arXiv:2601.10513v1 Announce Type: new Abstract: While the automatic evaluation of omni-modal large models (OLMs) is essential, assessing empathy remains a significant challenge due to its inherent affectivity. To investigate this challenge, we introduce AEQ-Bench (Audio Empathy Quotient Benchmark), a novel benchmark to systematically assess two core empathetic capabilities of OLMs: (i) generating empathetic responses by comprehending affective cues from multi-modal inputs (audio + text), and (ii) judging the empathy of audio responses without relying on text transcription. Compared to existing benchmarks, AEQ-Bench incorporates two novel settings that vary in context specificity and speech tone. Comprehensive assessment across linguistic and paralinguistic metrics reveals that (1) OLMs trained with audio output capabilities generally outperformed models with text-only outputs, and (2) while OLMs align with human judgments for coarse-grained quality assessment, they remain unreliable for evaluating fine-grained paralinguistic expressiveness.
|
https://arxiv.org/abs/2601.10513
|
Academic Papers
|
svg
|
9eb6003d7355a19707d0a79d49e2187de0767391c21475742d63c1dda1d38cf8
|
2026-01-16T00:00:00-05:00
|
Transformer-Based Cognitive Radio: Adaptive Modulation Strategies Using Transformer Models
|
arXiv:2601.10519v1 Announce Type: new Abstract: Cognitive Radio (CR) systems, which dynamically adapt to changing spectrum environments, could benefit significantly from advancements in machine learning technologies. These systems can be enhanced in terms of spectral efficiency, robustness, and security through innovative approaches such as the use of Transformer models. This work investigates the application of Transformer models, specifically the GPT-2 architecture, to generate novel modulation schemes for wireless communications. By training a GPT-2 model on a dataset of existing modulation formulas, new modulation schemes has been created. These generated schemes are then compared to traditional methods using key performance metrics such as Signal-to-Noise Ratio (SNR) and Power Spectrum Density (PSD). The results show that Transformer-generated modulation schemes can achieve performance comparable to, and in some cases outperforming, traditional methods. This demonstrates that advanced CR systems could greatly benefit from the implementation of Transformer models, leading to more efficient, robust, and secure communication systems.
|
https://arxiv.org/abs/2601.10519
|
Academic Papers
|
svg
|
ba80341674372e2acc8ca94e8d01c070e9f132e79f601f30430ea340ea44768f
|
2026-01-16T00:00:00-05:00
|
Breaking Up with Normatively Monolithic Agency with GRACE: A Reason-Based Neuro-Symbolic Architecture for Safe and Ethical AI Alignment
|
arXiv:2601.10520v1 Announce Type: new Abstract: As AI agents become increasingly autonomous, widely deployed in consequential contexts, and efficacious in bringing about real-world impacts, ensuring that their decisions are not only instrumentally effective but also normatively aligned has become critical. We introduce a neuro-symbolic reason-based containment architecture, Governor for Reason-Aligned ContainmEnt (GRACE), that decouples normative reasoning from instrumental decision-making and can contain AI agents of virtually any design. GRACE restructures decision-making into three modules: a Moral Module (MM) that determines permissible macro actions via deontic logic-based reasoning; a Decision-Making Module (DMM) that encapsulates the target agent while selecting instrumentally optimal primitive actions in accordance with derived macro actions; and a Guard that monitors and enforces moral compliance. The MM uses a reason-based formalism providing a semantic foundation for deontic logic, enabling interpretability, contestability, and justifiability. Its symbolic representation enriches the DMM's informational context and supports formal verification and statistical guarantees of alignment enforced by the Guard. We demonstrate GRACE on an example of a LLM therapy assistant, showing how it enables stakeholders to understand, contest, and refine agent behavior.
|
https://arxiv.org/abs/2601.10520
|
Academic Papers
|
svg
|
b99c0459746ba808944270f94afc7badac841806a94e7da9f610af44a2e54486
|
2026-01-16T00:00:00-05:00
|
BikeActions: An Open Platform and Benchmark for Cyclist-Centric VRU Action Recognition
|
arXiv:2601.10521v1 Announce Type: new Abstract: Anticipating the intentions of Vulnerable Road Users (VRUs) is a critical challenge for safe autonomous driving (AD) and mobile robotics. While current research predominantly focuses on pedestrian crossing behaviors from a vehicle's perspective, interactions within dense shared spaces remain underexplored. To bridge this gap, we introduce FUSE-Bike, the first fully open perception platform of its kind. Equipped with two LiDARs, a camera, and GNSS, it facilitates high-fidelity, close-range data capture directly from a cyclist's viewpoint. Leveraging this platform, we present BikeActions, a novel multi-modal dataset comprising 852 annotated samples across 5 distinct action classes, specifically tailored to improve VRU behavior modeling. We establish a rigorous benchmark by evaluating state-of-the-art graph convolution and transformer-based models on our publicly released data splits, establishing the first performance baselines for this challenging task. We release the full dataset together with data curation tools, the open hardware design, and the benchmark code to foster future research in VRU action understanding under https://iv.ee.hm.edu/bikeactions/.
|
https://arxiv.org/abs/2601.10521
|
Academic Papers
|
svg
|
6352d76cd7b7c26ff4783c0e3f73d923d8bcf2df65628afbcbba87dc440c7e43
|
2026-01-16T00:00:00-05:00
|
Diagnosing Generalization Failures in Fine-Tuned LLMs: A Cross-Architectural Study on Phishing Detection
|
arXiv:2601.10524v1 Announce Type: new Abstract: The practice of fine-tuning Large Language Models (LLMs) has achieved state-of-the-art performance on specialized tasks, yet diagnosing why these models become brittle and fail to generalize remains a critical open problem. To address this, we introduce and apply a multi-layered diagnostic framework to a cross-architectural study. We fine-tune Llama 3.1 8B, Gemma 2 9B, and Mistral models on a high-stakes phishing detection task and use SHAP analysis and mechanistic interpretability to uncover the root causes of their generalization failures. Our investigation reveals three critical findings: (1) Generalization is driven by a powerful synergy between architecture and data diversity. The Gemma 2 9B model achieves state-of-the-art performance (>91\% F1), but only when trained on a stylistically diverse ``generalist'' dataset. (2) Generalization is highly architecture-dependent. We diagnose a specific failure mode in Llama 3.1 8B, which performs well on a narrow domain but cannot integrate diverse data, leading to a significant performance drop. (3) Some architectures are inherently more generalizable. The Mistral model proves to be a consistent and resilient performer across multiple training paradigms. By pinpointing the flawed heuristics responsible for these failures, our work provides a concrete methodology for diagnosing and understanding generalization failures, underscoring that reliable AI requires deep validation of the interplay between architecture, data, and training strategy.
|
https://arxiv.org/abs/2601.10524
|
Academic Papers
|
svg
|
6348811c9aa2f8e303a1286ba4ce2d9332a63305a026d34e648c7e48099bf97b
|
2026-01-16T00:00:00-05:00
|
Learning from Brain Topography: A Hierarchical Local-Global Graph-Transformer Network for EEG Emotion Recognition
|
arXiv:2601.10525v1 Announce Type: new Abstract: Understanding how local neurophysiological patterns interact with global brain dynamics is essential for decoding human emotions from EEG signals. However, existing deep learning approaches often overlook the brain's intrinsic spatial organization, failing to simultaneously capture local topological relations and global dependencies. To address these challenges, we propose Neuro-HGLN, a Neurologically-informed Hierarchical Graph-Transformer Learning Network that integrates biologically grounded priors with hierarchical representation learning. Neuro-HGLN first constructs a spatial Euclidean prior graph based on physical electrode distances to serve as an anatomically grounded inductive bias. A learnable global dynamic graph is then introduced to model functional connectivity across the entire brain. In parallel, to capture fine-grained regional dependencies, Neuro-HGLN builds region-level local graphs using a multi-head self-attention mechanism. These graphs are processed synchronously through local-constrained parallel GCN layers to produce region-specific representations. Subsequently, an iTransformer encoder aggregates these features to capture cross-region dependencies under a dimension-as-token formulation. Extensive experiments demonstrate that Neuro-HGLN achieves state-of-the-art performance on multiple benchmarks, providing enhanced interpretability grounded in neurophysiological structure. These results highlight the efficacy of unifying local topological learning with cross-region dependency modeling for robust EEG emotion recognition.
|
https://arxiv.org/abs/2601.10525
|
Academic Papers
|
svg
|
0c93b6d15b3d79c5605880da9811380b07b888168018329b90af1d5f4f36848e
|
2026-01-16T00:00:00-05:00
|
On the suboptimality of linear codes for binary distributed hypothesis testing
|
arXiv:2601.10526v1 Announce Type: new Abstract: We study a binary distributed hypothesis testing problem where two agents observe correlated binary vectors and communicate compressed information at the same rate to a central decision maker. In particular, we study linear compression schemes and show that simple truncation is the best linear scheme in two cases: (1) testing opposite signs of the same magnitude of correlation, and (2) testing for or against independence. We conjecture, supported by numerical evidence, that truncation is the best linear code for testing any correlations of opposite signs. Further, for testing against independence, we also compute classical random coding exponents and show that truncation, and consequently any linear code, is strictly suboptimal.
|
https://arxiv.org/abs/2601.10526
|
Academic Papers
|
svg
|
7695d19f07a9796ec3dfcefc9e10d3a0e698a8a97ed0fd22df53126f61962634
|
2026-01-16T00:00:00-05:00
|
A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5
|
arXiv:2601.10527v1 Announce Type: new Abstract: The rapid evolution of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has produced substantial gains in reasoning, perception, and generative capability across language and vision. However, whether these advances yield commensurate improvements in safety remains unclear, in part due to fragmented evaluation practices limited to single modalities or threat models. In this report, we present an integrated safety evaluation of 7 frontier models: GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5. We evaluate each model across language, vision-language, and image generation settings using a unified protocol that integrates benchmark evaluation, adversarial evaluation, multilingual evaluation, and compliance evaluation. Aggregating our evaluations into safety leaderboards and model safety profiles across multiple evaluation modes reveals a sharply heterogeneous safety landscape. While GPT-5.2 demonstrates consistently strong and balanced safety performance across evaluations, other models exhibit pronounced trade-offs among benchmark safety, adversarial alignment, multilingual generalization, and regulatory compliance. Both language and vision-language modalities show significant vulnerability under adversarial evaluation, with all models degrading substantially despite strong results on standard benchmarks. Text-to-image models achieve relatively stronger alignment in regulated visual risk categories, yet remain brittle under adversarial or semantically ambiguous prompts. Overall, these results show that safety in frontier models is inherently multidimensional--shaped by modality, language, and evaluation scheme, underscoring the need for standardized safety evaluations to accurately assess real-world risk and guide responsible model development and deployment.
|
https://arxiv.org/abs/2601.10527
|
Academic Papers
|
svg
|
1f21bc5a8bc728f9ee7bf4b45af6c6c647763849e712c5e8e0f6b2d87813d50f
|
2026-01-16T00:00:00-05:00
|
PERM: Psychology-grounded Empathetic Reward Modeling for Large Language Models
|
arXiv:2601.10532v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed in human-centric applications, yet they often fail to provide substantive emotional support. While Reinforcement Learning (RL) has been utilized to enhance empathy of LLMs, existing reward models typically evaluate empathy from a single perspective, overlooking the inherently bidirectional interaction nature of empathy between the supporter and seeker as defined by Empathy Cycle theory. To address this limitation, we propose Psychology-grounded Empathetic Reward Modeling (PERM). PERM operationalizes empathy evaluation through a bidirectional decomposition: 1) Supporter perspective, assessing internal resonation and communicative expression; 2) Seeker perspective, evaluating emotional reception. Additionally, it incorporates a bystander perspective to monitor overall interaction quality. Extensive experiments on a widely-used emotional intelligence benchmark and an industrial daily conversation dataset demonstrate that PERM outperforms state-of-the-art baselines by over 10\%. Furthermore, a blinded user study reveals a 70\% preference for our approach, highlighting its efficacy in generating more empathetic responses. Our code, dataset, and models are available at https://github.com/ZhengWwwq/PERM.
|
https://arxiv.org/abs/2601.10532
|
Academic Papers
|
svg
|
f9f534558fd7b1edda06c22d690c37083f0ce4406f5d3cfd4509e30c5afa0029
|
2026-01-16T00:00:00-05:00
|
SVII-3D: Advancing Roadside Infrastructure Inventory with Decimeter-level 3D Localization and Comprehension from Sparse Street Imagery
|
arXiv:2601.10535v1 Announce Type: new Abstract: The automated creation of digital twins and precise asset inventories is a critical task in smart city construction and facility lifecycle management. However, utilizing cost-effective sparse imagery remains challenging due to limited robustness, inaccurate localization, and a lack of fine-grained state understanding. To address these limitations, SVII-3D, a unified framework for holistic asset digitization, is proposed. First, LoRA fine-tuned open-set detection is fused with a spatial-attention matching network to robustly associate observations across sparse views. Second, a geometry-guided refinement mechanism is introduced to resolve structural errors, achieving precise decimeter-level 3D localization. Third, transcending static geometric mapping, a Vision-Language Model agent leveraging multi-modal prompting is incorporated to automatically diagnose fine-grained operational states. Experiments demonstrate that SVII-3D significantly improves identification accuracy and minimizes localization errors. Consequently, this framework offers a scalable, cost-effective solution for high-fidelity infrastructure digitization, effectively bridging the gap between sparse perception and automated intelligent maintenance.
|
https://arxiv.org/abs/2601.10535
|
Academic Papers
|
svg
|
4496511796af9de5c7d9fe5baa85695ec849948439d935cdb42ad48b860bf4a9
|
2026-01-16T00:00:00-05:00
|
CoGen: Creation of Reusable UI Components in Figma via Textual Commands
|
arXiv:2601.10536v1 Announce Type: new Abstract: The evolution of User Interface design has emphasized the need for efficient, reusable, and editable components to ensure an efficient design process. This research introduces CoGen, a system that uses machine learning techniques to generate reusable UI components directly in Figma, one of the most popular UI design tools. Addressing gaps in current systems, CoGen focuses on creating atomic components such as buttons, labels, and input fields using structured JSON and natural language prompts. The project integrates Figma API data extraction, Seq2Seq models, and fine-tuned T5 transformers for component generation. The key results demonstrate the efficiency of the T5 model in prompt generation, with an accuracy of 98% and a BLEU score of 0.2668, which ensures the mapping of JSON to descriptive prompts. For JSON creation, CoGen achieves a success rate of up to 100% in generating simple JSON outputs for specified component types.
|
https://arxiv.org/abs/2601.10536
|
Academic Papers
|
svg
|
879f17121271a56a1aa53e6f558c9ff09dbc6426a4cc275adbd21a95912956e6
|
2026-01-16T00:00:00-05:00
|
Enhancing the quality of gauge images captured in smoke and haze scenes through deep learning
|
arXiv:2601.10537v1 Announce Type: new Abstract: Images captured in hazy and smoky environments suffer from reduced visibility, posing a challenge when monitoring infrastructures and hindering emergency services during critical situations. The proposed work investigates the use of the deep learning models to enhance the automatic, machine-based readability of gauge in smoky environments, with accurate gauge data interpretation serving as a valuable tool for first responders. The study utilizes two deep learning architectures, FFA-Net and AECR-Net, to improve the visibility of gauge images, corrupted with light up to dense haze and smoke. Since benchmark datasets of analog gauge images are unavailable, a new synthetic dataset, containing over 14,000 images, was generated using the Unreal Engine. The models were trained with an 80\% train, 10\% validation, and 10\% test split for the haze and smoke dataset, respectively. For the synthetic haze dataset, the SSIM and PSNR metrics are about 0.98 and 43\,dB, respectively, comparing well to state-of-the art results. Additionally, more robust results are retrieved from the AECR-Net, when compared to the FFA-Net. Although the results from the synthetic smoke dataset are poorer, the trained models achieve interesting results. In general, imaging in the presence of smoke are more difficult to enhance given the inhomogeneity and high density. Secondly, FFA-Net and AECR-Net are implemented to dehaze and not to desmoke images. This work shows that use of deep learning architectures can improve the quality of analog gauge images captured in smoke and haze scenes immensely. Finally, the enhanced output images can be successfully post-processed for automatic autonomous reading of gauges
|
https://arxiv.org/abs/2601.10537
|
Academic Papers
|
svg
|
54573f9871fec63e5cec3ae1b3075520377473c46b8d8b12f0be7ebfb4685d74
|
2026-01-16T00:00:00-05:00
|
Network Integrated Sensing and Communication
|
arXiv:2601.10538v1 Announce Type: new Abstract: Integrated sensing and communication (ISAC) is a cornerstone technology for 6G networks, offering unified support for high-rate communication and high-accuracy sensing. While existing literature extensively covers link-level designs, the transition toward large-scale deployment necessitates a fundamental understanding of network-level performance. This paper investigates a network ISAC model where a source node communicates with a destination via a relay network, while intermediate nodes concurrently perform cooperative sensing over specific spatial regions. We formulate a novel optimization framework that captures the interplay between multi-node routing and sensing coverage. For a one-dimensional path network, we provide an analytical characterization of the complete sensing-throughput region. Extending this to general network topologies, we establish that the sensing-throughput Pareto boundary is piecewise linear and provide physical interpretations for each segment. Our results reveal the fundamental trade-offs between sensing coverage and communication routing, offering key insights for the design of future 6G heterogeneous networks.
|
https://arxiv.org/abs/2601.10538
|
Academic Papers
|
svg
|
19de3735c3dc43a8b0c10f7042e7d3eb44bfb18bad65767ade354a1048f4df9c
|
2026-01-16T00:00:00-05:00
|
Error-Correcting Codes for Two Bursts of t1-Deletion-t2-Insertion with Low Computational Complexity
|
arXiv:2601.10540v1 Announce Type: new Abstract: Burst errors involving simultaneous insertions, deletions, and substitutions occur in practical scenarios, including DNA data storage and document synchronization, motivating developments of channel codes that can correct such errors. In this paper, we address the problem of constructing error-correcting codes (ECCs) capable of handling multiple bursts of $t_1$-deletion-$t_2$-insertion ($(t_1,t_2)$-DI) errors, where each burst consists of $t_1$ deletions followed by $t_2$ insertions in a binary sequence. We make three key contributions: Firstly, we establish the fundamental equivalence of (1) two bursts of $(t_1,t_2)$-DI ECCs, (2) two bursts of $(t_2,t_1)$-DI ECCs, and (3) one burst each of $(t_1,t_2)$-DI and $(t_2,t_1)$-DI ECCs. Then, we derive lower and upper bounds on the code size of two bursts of $(t_1,t_2)$-DI ECCs, which can naturally be extended to the case of multiple bursts. Finally, we present constructions of two bursts of $(t_1,t_2)$-DI ECCs. Compared to the codes obtained by the syndrome compression technique, the resulting codes achieve significantly lower computational complexity.
|
https://arxiv.org/abs/2601.10540
|
Academic Papers
|
svg
|
a502c8b42d031ceed85ca44c9b0b132775c86efb8a4f527380d3f4f1fb818140
|
2026-01-16T00:00:00-05:00
|
Mixtures of Transparent Local Models
|
arXiv:2601.10541v1 Announce Type: new Abstract: The predominance of machine learning models in many spheres of human activity has led to a growing demand for their transparency. The transparency of models makes it possible to discern some factors, such as security or non-discrimination. In this paper, we propose a mixture of transparent local models as an alternative solution for designing interpretable (or transparent) models. Our approach is designed for the situations where a simple and transparent function is suitable for modeling the label of instances in some localities/regions of the input space, but may change abruptly as we move from one locality to another. Consequently, the proposed algorithm is to learn both the transparent labeling function and the locality of the input space where the labeling function achieves a small risk in its assigned locality. By using a new multi-predictor (and multi-locality) loss function, we established rigorous PAC-Bayesian risk bounds for the case of binary linear classification problem and that of linear regression. In both cases, synthetic data sets were used to illustrate how the learning algorithms work. The results obtained from real data sets highlight the competitiveness of our approach compared to other existing methods as well as certain opaque models. Keywords: PAC-Bayes, risk bounds, local models, transparent models, mixtures of local transparent models.
|
https://arxiv.org/abs/2601.10541
|
Academic Papers
|
svg
|
3b6c7c2e159b100df67c2ea31e9b1776836a9bb6761773471236cca6d7205187
|
2026-01-16T00:00:00-05:00
|
Hybrid Encryption with Certified Deletion in Preprocessing Model
|
arXiv:2601.10542v1 Announce Type: new Abstract: Certified deletion allows Alice to outsource data to Bob and, at a later time, obtain a verifiable guarantee that the file has been irreversibly deleted at her request. The functionality, while impossible using classical information alone, can be achieved using quantum information. Existing approaches, rely on one-time pad (OTP) encryption, or use computational hardness assumptions that may be vulnerable to future advances in classical or quantum computing. In this work, we introduce and formalize hybrid encryption with certified deletion in the preprocessing model (pHE-CD) and propose two constructions. The constructions combine an information-theoretic key encapsulation mechanism (iKEM) with a data encapsulation mechanism that provides certified deletion (DEM-CD) and, respectively, provide {\em information-theoretic certified deletion}, where both confidentiality and deletion properties are provided against a computationally unbounded adversary; and {\em everlasting certified deletion}, where confidentiality is computational before deletion, and upon successful verification of the deletion certificate, the message becomes information-theoretically hidden from an adversary that is computationally unbounded. Our pHE-CD schemes provide IND-$q_e$-CPA notion of security and support encryption of arbitrarily long messages. In the second construction, using a computationally secure DEM-CD that is quantum-safe (i.e. constructed using quantum coding and AES), we obtain quantum-safe security with keys that are significantly shorter than the message. Instantiating the proposed framework using quantum enabled kem (qKEM) as the iKEM, is a future work.
|
https://arxiv.org/abs/2601.10542
|
Academic Papers
|
svg
|
6c68aed1226d5d11181c726dbfe4444c21626ba3e964173d3c438ebb4f935f6b
|
2026-01-16T00:00:00-05:00
|
Defending Large Language Models Against Jailbreak Attacks via In-Decoding Safety-Awareness Probing
|
arXiv:2601.10543v1 Announce Type: new Abstract: Large language models (LLMs) have achieved impressive performance across natural language tasks and are increasingly deployed in real-world applications. Despite extensive safety alignment efforts, recent studies show that such alignment is often shallow and remains vulnerable to jailbreak attacks. Existing defense mechanisms, including decoding-based constraints and post-hoc content detectors, struggle against sophisticated jailbreaks, often intervening robust detection or excessively degrading model utility. In this work, we examine the decoding process of LLMs and make a key observation: even when successfully jailbroken, models internally exhibit latent safety-related signals during generation. However, these signals are overridden by the model's drive for fluent continuation, preventing timely self-correction or refusal. Building on this observation, we propose a simple yet effective approach that explicitly surfaces and leverages these latent safety signals for early detection of unsafe content during decoding. Experiments across diverse jailbreak attacks demonstrate that our approach significantly enhances safety, while maintaining low over-refusal rates on benign inputs and preserving response quality. Our results suggest that activating intrinsic safety-awareness during decoding offers a promising and complementary direction for defending against jailbreak attacks. Code is available at: https://github.com/zyz13590/SafeProbing.
|
https://arxiv.org/abs/2601.10543
|
Academic Papers
|
svg
|
4b75e7ef2368851b7e951645fb3ace1706b332c9110eec841034b939de692d4f
|
2026-01-16T00:00:00-05:00
|
SDN-Driven Innovations in MANETs and IoT: A Path to Smarter Networks
|
arXiv:2601.10544v1 Announce Type: new Abstract: Mobile Ad Hoc Networks (MANETs) and Internet of Things (IoT) networks operate in decentralized and dynamic environments, making them ideal for scenarios lacking traditional infrastructure. However, these networks face challenges such as inefficient routing, limited scalability, and security vulnerabilities due to their decentralized nature and resource constraints. This paper explores the integration of Software-Defined Networking (SDN) as a unified solution that leverages its centralized control and network programmability to improve routing, resource management, and security. A mathematical model evaluates the impact of SDN integration on Capital Expenditure (CAPEX), Operational Expenditure (OPEX), and performance metrics. Results demonstrate that SDN-enhanced MANETs and IoT networks offer superior scalability, reduced latency, increased throughput, and lower packet loss, especially in dynamic and large-scale environments. While SDN introduces computational overhead, it significantly enhances routing efficiency, resource optimization, and adaptability. The proposed framework provides a robust and scalable solution, enabling the development of network architectures that efficiently manage growing node densities, dynamic topologies, and high data traffic. This approach ensures resilience, making it well-suited to meet the performance and reliability demands of modern, large-scale applications.
|
https://arxiv.org/abs/2601.10544
|
Academic Papers
|
svg
|
9a9b94934aa7a60cb74a7b3ce985fc9af22a5d1d423b3aa8e4dceb5f9d00fa4d
|
2026-01-16T00:00:00-05:00
|
HeartMuLa: A Family of Open Sourced Music Foundation Models
|
arXiv:2601.10547v1 Announce Type: new Abstract: We present a family of open-source Music Foundation Models designed to advance large-scale music understanding and generation across diverse tasks and modalities. Our framework consists of four major components: (1) HeartCLAP, an audio-text alignment model; (2) HeartTranscriptor, a robust lyric recognition model optimized for real-world music scenarios; and (3) HeartCodec, a low-frame-rate (12.5 Hz) yet high-fidelity music codec tokenizer that captures long-range musical structure while preserving fine-grained acoustic details and enabling efficient autoregressive modeling; (4) HeartMuLa, an LLM-based song generation model capable of synthesizing high-fidelity music under rich, user-controllable conditions (e.g., textual style descriptions, lyrics, and reference audio). In addition, it provides two specialized modes: (i) fine-grained musical attribute control, which allows users to specify the style of different song sections (e.g., intro, verse, chorus) using natural language prompts; and (ii) short, engaging music generation, which is suitable as background music for short videos. Lastly, HeartMuLa improves significantly when scaled to 7B parameters. For the first time, we show that a Suno-level, commercial-grade system can be reproduced using academic-scale data and GPU resources. We expect these foundation models to serve as strong baselines for future research and to facilitate practical applications in multimodal content production.
|
https://arxiv.org/abs/2601.10547
|
Academic Papers
|
svg
|
da8b535afc8d5392118f35d879a735756acf7cac81bcb6def40609e113abee44
|
2026-01-16T00:00:00-05:00
|
Unleashing the Capabilities of Large Vision-Language Models for Intelligent Perception of Roadside Infrastructure
|
arXiv:2601.10551v1 Announce Type: new Abstract: Automated perception of urban roadside infrastructure is crucial for smart city management, yet general-purpose models often struggle to capture the necessary fine-grained attributes and domain rules. While Large Vision Language Models (VLMs) excel at open-world recognition, they often struggle to accurately interpret complex facility states in compliance with engineering standards, leading to unreliable performance in real-world applications. To address this, we propose a domain-adapted framework that transforms VLMs into specialized agents for intelligent infrastructure analysis. Our approach integrates a data-efficient fine-tuning strategy with a knowledge-grounded reasoning mechanism. Specifically, we leverage open-vocabulary fine-tuning on Grounding DINO to robustly localize diverse assets with minimal supervision, followed by LoRA-based adaptation on Qwen-VL for deep semantic attribute reasoning. To mitigate hallucinations and enforce professional compliance, we introduce a dual-modality Retrieval-Augmented Generation (RAG) module that dynamically retrieves authoritative industry standards and visual exemplars during inference. Evaluated on a comprehensive new dataset of urban roadside scenes, our framework achieves a detection performance of 58.9 mAP and an attribute recognition accuracy of 95.5%, demonstrating a robust solution for intelligent infrastructure monitoring.
|
https://arxiv.org/abs/2601.10551
|
Academic Papers
|
svg
|
05d18db9f9a43caefa2f5dcc141c8128e502f95e65df3451d33143a03abc7a66
|
2026-01-16T00:00:00-05:00
|
Inference-time Physics Alignment of Video Generative Models with Latent World Models
|
arXiv:2601.10553v1 Announce Type: new Abstract: State-of-the-art video generative models produce promising visual content yet often violate basic physics principles, limiting their utility. While some attribute this deficiency to insufficient physics understanding from pre-training, we find that the shortfall in physics plausibility also stems from suboptimal inference strategies. We therefore introduce WMReward and treat improving physics plausibility of video generation as an inference-time alignment problem. In particular, we leverage the strong physics prior of a latent world model (here, VJEPA-2) as a reward to search and steer multiple candidate denoising trajectories, enabling scaling test-time compute for better generation performance. Empirically, our approach substantially improves physics plausibility across image-conditioned, multiframe-conditioned, and text-conditioned generation settings, with validation from human preference study. Notably, in the ICCV 2025 Perception Test PhysicsIQ Challenge, we achieve a final score of 62.64%, winning first place and outperforming the previous state of the art by 7.42%. Our work demonstrates the viability of using latent world models to improve physics plausibility of video generation, beyond this specific instantiation or parameterization.
|
https://arxiv.org/abs/2601.10553
|
Academic Papers
|
svg
|
f0286b37fb94bc29712f58da1206671f5555ea65be391da903e3c87dafd90b89
|
2026-01-16T00:00:00-05:00
|
DeepUrban: Interaction-Aware Trajectory Prediction and Planning for Automated Driving by Aerial Imagery
|
arXiv:2601.10554v1 Announce Type: new Abstract: The efficacy of autonomous driving systems hinges critically on robust prediction and planning capabilities. However, current benchmarks are impeded by a notable scarcity of scenarios featuring dense traffic, which is essential for understanding and modeling complex interactions among road users. To address this gap, we collaborated with our industrial partner, DeepScenario, to develop DeepUrban-a new drone dataset designed to enhance trajectory prediction and planning benchmarks focusing on dense urban settings. DeepUrban provides a rich collection of 3D traffic objects, extracted from high-resolution images captured over urban intersections at approximately 100 meters altitude. The dataset is further enriched with comprehensive map and scene information to support advanced modeling and simulation tasks. We evaluate state-of-the-art (SOTA) prediction and planning methods, and conducted experiments on generalization capabilities. Our findings demonstrate that adding DeepUrban to nuScenes can boost the accuracy of vehicle predictions and planning, achieving improvements up to 44.1 % / 44.3% on the ADE / FDE metrics. Website: https://iv.ee.hm.edu/deepurban
|
https://arxiv.org/abs/2601.10554
|
Academic Papers
|
svg
|
07a498b52e6ce56a4e92156fa84fd9496a2dd61297f1dc73ca6132f00133dd27
|
2026-01-16T00:00:00-05:00
|
Enhancing Mobile Ad Hoc Networks (MANETs) with Software-Defined Networking (SDN): A Balanced Approach
|
arXiv:2601.10556v1 Announce Type: new Abstract: Mobile Ad Hoc Networks (MANETs) are decentralized wireless networks, characterized by their dynamic topologies and node mobility. In the era of cutting-edge technologies, integrating Software-Defined Networking (SDN) with MANETs offers a promising solution to manage these challenges more efficiently. This paper presents a balanced discussion of MANETs and SDN, demonstrating how SDN principles, such as centralized control and network virtualization, can optimize MANET performance in terms of scalability, cost-efficiency, and security. A mathematical model is developed to analyze Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and network efficiency.
|
https://arxiv.org/abs/2601.10556
|
Academic Papers
|
svg
|
8b85bef76d9a56b4c69d51999f456677aa04ba05de621b62b68ed3181ca85a6f
|
2026-01-16T00:00:00-05:00
|
Chebyshev Accelerated Subspsace Eigensolver for Pseudo-hermitian Hamiltonians
|
arXiv:2601.10557v1 Announce Type: new Abstract: Studying the optoelectronic structure of materials can require the computation of up to several thousands of the smallest eigenpairs of a pseudo-hermitian Hamiltonian. Iterative eigensolvers may be preferred over direct methods for this task since their complexity is a function of the desired fraction of the spectrum. In addition, they generally rely on highly optimized and scalable kernels such as matrix-vector multiplications that leverage the massive parallelism and the computational power of modern exascale systems. \textit{Chebyshev Accelerated Subspace iteration Eigensolver} (ChASE) is able to compute several thousands of the most extreme eigenpairs of dense hermitian matrices with proven scalability over massive parallel accelerated clusters. This work presents an extension of ChASE to solve for a portion of the spectrum of pseudo-hermitian Hamiltonians as they appear in the treatment of excitonic materials. The new pseudo-hermitian solver achieves similar convergence and performance as the hermitian one. By exploiting the numerical structure and spectral properties of the Hamiltonian matrix, we propose an oblique variant of Rayleigh-Ritz projection featuring quadratic convergence of the Ritz-values with no explicit construction of the dual basis set. Additionally, we introduce a parallel implementation of the recursive matrix-product operation appearing in the Chebyshev filter with limited amount of global communications. Our development is supported by a full numerical analysis and experimental tests.
|
https://arxiv.org/abs/2601.10557
|
Academic Papers
|
svg
|
7c9068a9899a0eb9dfb22a7a9191ed9b584ad40738e9fdac3b6e65b5e675067c
|
2026-01-16T00:00:00-05:00
|
Learning Latency-Aware Orchestration for Parallel Multi-Agent Systems
|
arXiv:2601.10560v1 Announce Type: new Abstract: Multi-agent systems (MAS) enable complex reasoning by coordinating multiple agents, but often incur high inference latency due to multi-step execution and repeated model invocations, severely limiting their scalability and usability in time-sensitive scenarios. Most existing approaches primarily optimize task performance and inference cost, and explicitly or implicitly assume sequential execution, making them less optimal for controlling latency under parallel execution. In this work, we investigate learning-based orchestration of multi-agent systems with explicit latency supervision under parallel execution. We propose Latency-Aware Multi-agent System (LAMaS), a latency-aware multi-agent orchestration framework that enables parallel execution and explicitly optimizes the critical execution path, allowing the controller to construct execution topology graphs with lower latency under parallel execution. Our experiments show that our approach reduces critical path length by 38-46% compared to the state-of-the-art baseline for multi-agent architecture search across multiple benchmarks, while maintaining or even improving task performance. These results highlight the importance of explicitly optimizing latency under parallel execution when designing efficient multi-agent systems. The code is available at https://github.com/xishi404/LAMaS
|
https://arxiv.org/abs/2601.10560
|
Academic Papers
|
svg
|
0e9c27bc0abbcf88d93ae7cb039a97f753bff93c950140750f1041db42506c9e
|
2026-01-16T00:00:00-05:00
|
Process-Guided Concept Bottleneck Model
|
arXiv:2601.10562v1 Announce Type: new Abstract: Concept Bottleneck Models (CBMs) improve the explainability of black-box Deep Learning (DL) by introducing intermediate semantic concepts. However, standard CBMs often overlook domain-specific relationships and causal mechanisms, and their dependence on complete concept labels limits applicability in scientific domains where supervision is sparse but processes are well defined. To address this, we propose the Process-Guided Concept Bottleneck Model (PG-CBM), an extension of CBMs which constrains learning to follow domain-defined causal mechanisms through biophysically meaningful intermediate concepts. Using above ground biomass density estimation from Earth Observation data as a case study, we show that PG-CBM reduces error and bias compared to multiple benchmarks, whilst leveraging multi-source heterogeneous training data and producing interpretable intermediate outputs. Beyond improved accuracy, PG-CBM enhances transparency, enables detection of spurious learning, and provides scientific insights, representing a step toward more trustworthy AI systems in scientific applications.
|
https://arxiv.org/abs/2601.10562
|
Academic Papers
|
svg
|
64b38d11efb515432e12809b23f0df3c2ffe64b6bad72c5e2af04eab1d7f931d
|
2026-01-16T00:00:00-05:00
|
Kolmogorov Arnold Networks and Multi-Layer Perceptrons: A Paradigm Shift in Neural Modelling
|
arXiv:2601.10563v1 Announce Type: new Abstract: The research undertakes a comprehensive comparative analysis of Kolmogorov-Arnold Networks (KAN) and Multi-Layer Perceptrons (MLP), highlighting their effectiveness in solving essential computational challenges like nonlinear function approximation, time-series prediction, and multivariate classification. Rooted in Kolmogorov's representation theorem, KANs utilize adaptive spline-based activation functions and grid-based structures, providing a transformative approach compared to traditional neural network frameworks. Utilizing a variety of datasets spanning mathematical function estimation (quadratic and cubic) to practical uses like predicting daily temperatures and categorizing wines, the proposed research thoroughly assesses model performance via accuracy measures like Mean Squared Error (MSE) and computational expense assessed through Floating Point Operations (FLOPs). The results indicate that KANs reliably exceed MLPs in every benchmark, attaining higher predictive accuracy with significantly reduced computational costs. Such an outcome highlights their ability to maintain a balance between computational efficiency and accuracy, rendering them especially beneficial in resource-limited and real-time operational environments. By elucidating the architectural and functional distinctions between KANs and MLPs, the paper provides a systematic framework for selecting the most suitable neural architectures for specific tasks. Furthermore, the proposed study highlights the transformative capabilities of KANs in progressing intelligent systems, influencing their use in situations that require both interpretability and computational efficiency.
|
https://arxiv.org/abs/2601.10563
|
Academic Papers
|
svg
|
5eca1319955363d2f03915f18c177d9e0b63401c35e2b104260600c64e813aac
|
2026-01-16T00:00:00-05:00
|
Rewriting Systems on Arbitrary Monoids
|
arXiv:2601.10564v1 Announce Type: new Abstract: In this paper, we introduce monoidal rewriting systems (MRS), an abstraction of string rewriting in which reductions are defined over an arbitrary ambient monoid rather than a free monoid of words. This shift is partly motivated by logic: the class of free monoids is not first-order axiomatizable, so "working in the free setting" cannot be treated internally when applying first-order methods to rewriting presentations. To analyze these systems categorically, we define $\mathbf{NCRS_2}$ as the 2-category of Noetherian Confluent MRS. We then prove the existence of a canonical biadjunction between $\mathbf{NCRS_2}$ and $\mathbf{Mon}$. Finally, we classify all Noetherian Confluent MRS that present a given fixed monoid. For this, we introduce Generalized Elementary Tietze Transformations (GETTs) and prove that any two presentations of a monoid are connected by a (possibly infinite) sequence of these transformations, yielding a complete characterization of generating systems up to GETT-equivalence.
|
https://arxiv.org/abs/2601.10564
|
Academic Papers
|
svg
|
252bb2cef0b892868c71aa3fb99faa5234149213e8ec1c8e5997438ee0089eca
|
2026-01-16T00:00:00-05:00
|
Inferring signed social networks from contact patterns
|
arXiv:2601.10565v1 Announce Type: new Abstract: Social networks are typically inferred from indirect observations, such as proximity data; yet, most methods cannot distinguish between absent relationships and actual negative ties, as both can result in few or no interactions. We address the challenge of inferring signed networks from contact patterns while accounting for whether lack of interactions reflect a lack of opportunity as opposed to active avoidance. We develop a Bayesian framework with MCMC inference that models interaction groups to separate chance from choice when no interactions are observed. Validation on synthetic data demonstrates superior performance compared to natural baselines, particularly in detecting negative edges. We apply our method to French high school contact data to reveal a structure consistent with friendship surveys and demonstrate the model's adequacy through posterior predictive checks.
|
https://arxiv.org/abs/2601.10565
|
Academic Papers
|
svg
|
29b27ceb2af4a6e9b60aa3d29619d84df64ed009c4419ae1abb748616baa25ff
|
2026-01-16T00:00:00-05:00
|
Representation-Aware Unlearning via Activation Signatures: From Suppression to Knowledge-Signature Erasure
|
arXiv:2601.10566v1 Announce Type: new Abstract: Selective knowledge erasure from LLMs is critical for GDPR compliance and model safety, yet current unlearning methods conflate behavioral suppression with true knowledge removal, allowing latent capabilities to persist beneath surface-level refusals. In this work, we address this challenge by introducing Knowledge Immunization Framework (KIF), a representation-aware architecture that distinguishes genuine erasure from obfuscation by targeting internal activation signatures rather than surface outputs. Our approach combines dynamic suppression of subject-specific representations with parameter-efficient adaptation, enabling durable unlearning without full model retraining. KIF achieves near-oracle erasure (FQ approx 0.99 vs. 1.00) while preserving utility at oracle levels (MU = 0.62), effectively breaking the stability-erasure tradeoff that has constrained all prior work. We evaluate both standard foundation models (Llama and Mistral) and reasoning-prior models (Qwen and DeepSeek) across 3B to 14B parameters. Our observation shows that standard models exhibit scale-independent true erasure (<3% utility drift), while reasoning-prior models reveal fundamental architectural divergence. Our comprehensive dual-metric evaluation protocol, combining surface-level leakage with latent trace persistence, operationalizes the obfuscation - erasure distinction and enables the first systematic diagnosis of mechanism-level forgetting behavior across model families and scales.
|
https://arxiv.org/abs/2601.10566
|
Academic Papers
|
svg
|
dd659ad7cb04658d6e598082b5f94f35a62d4886c8db66bb480f694c8392360c
|
2026-01-16T00:00:00-05:00
|
Generative AI collective behavior needs an interactionist paradigm
|
arXiv:2601.10567v1 Announce Type: new Abstract: In this article, we argue that understanding the collective behavior of agents based on large language models (LLMs) is an essential area of inquiry, with important implications in terms of risks and benefits, impacting us as a society at many levels. We claim that the distinctive nature of LLMs--namely, their initialization with extensive pre-trained knowledge and implicit social priors, together with their capability of adaptation through in-context learning--motivates the need for an interactionist paradigm consisting of alternative theoretical foundations, methodologies, and analytical tools, in order to systematically examine how prior knowledge and embedded values interact with social context to shape emergent phenomena in multi-agent generative AI systems. We propose and discuss four directions that we consider crucial for the development and deployment of LLM-based collectives, focusing on theory, methods, and trans-disciplinary dialogue.
|
https://arxiv.org/abs/2601.10567
|
Academic Papers
|
svg
|
d213b6d1210532aaad4aacaf7a4543012399d9d26990b09cfac876411e99edd7
|
2026-01-16T00:00:00-05:00
|
Sparse Signal Recovery from Random Measurements
|
arXiv:2601.10569v1 Announce Type: new Abstract: Given the compressed sensing measurements of an unknown vector $z \in \mathbb{R}^n$ using random matrices, we present a simple method to determine $z$ without solving any optimization problem or linear system. Our method uses $\Theta(\log n)$ random sensing matrices in $\mathbb{R}^{k \times n}$ and runs in $O(kn\log n)$ time, where $k = \Theta(s\log n)$ and $s$ is the number of nonzero coordinates in $z$. We adapt our method to determine the support set of $z$ and experimentally compare with some optimization-based methods on binary signals.
|
https://arxiv.org/abs/2601.10569
|
Academic Papers
|
svg
|
23c68367866da1846fe1e5bed3205068c81a8ebdb53eea40ac4c869362ebfdf1
|
2026-01-16T00:00:00-05:00
|
Long-term Monitoring of Kernel and Hardware Events to Understand Latency Variance
|
arXiv:2601.10572v1 Announce Type: new Abstract: This paper presents our experience to understand latency variance caused by kernel and hardware events, which are often invisible at the application level. For this purpose, we have built VarMRI, a tool chain to monitor and analyze those events in the long term. To mitigate the "big data" problem caused by long-term monitoring, VarMRI selectively records a subset of events following two principles: it only records events that are affecting the requests recorded by the application; it records coarse-grained information first and records additional information only when necessary. Furthermore, VarMRI introduces an analysis method that is efficient on large amount of data, robust on different data set and against missing data, and informative to the user. VarMRI has helped us to carry out a 3,000-hour study of six applications and benchmarks on CloudLab. It reveals a wide variety of events causing latency variance, including interrupt preemption, Java GC, pipeline stall, NUMA balancing etc.; simple optimization or tuning can reduce tail latencies by up to 31%. Furthermore, the impacts of some of these events vary significantly across different experiments, which confirms the necessity of long-term monitoring.
|
https://arxiv.org/abs/2601.10572
|
Academic Papers
|
svg
|
5371cfd26918c7f24659a078b686174749d0ceaaa1828940665819ec8e5fb3de
|
2026-01-16T00:00:00-05:00
|
Jordan-Segmentable Masks: A Topology-Aware definition for characterizing Binary Image Segmentation
|
arXiv:2601.10577v1 Announce Type: new Abstract: Image segmentation plays a central role in computer vision. However, widely used evaluation metrics, whether pixel-wise, region-based, or boundary-focused, often struggle to capture the structural and topological coherence of a segmentation. In many practical scenarios, such as medical imaging or object delineation, small inaccuracies in boundary, holes, or fragmented predictions can result in high metric scores, despite the fact that the resulting masks fail to preserve the object global shape or connectivity. This highlights a limitation of conventional metrics: they are unable to assess whether a predicted segmentation partitions the image into meaningful interior and exterior regions. In this work, we introduce a topology-aware notion of segmentation based on the Jordan Curve Theorem, and adapted for use in digital planes. We define the concept of a \emph{Jordan-segmentatable mask}, which is a binary segmentation whose structure ensures a topological separation of the image domain into two connected components. We analyze segmentation masks through the lens of digital topology and homology theory, extracting a $4$-curve candidate from the mask, verifying its topological validity using Betti numbers. A mask is considered Jordan-segmentatable when this candidate forms a digital 4-curve with $\beta_0 = \beta_1 = 1$, or equivalently when its complement splits into exactly two $8$-connected components. This framework provides a mathematically rigorous, unsupervised criterion with which to assess the structural coherence of segmentation masks. By combining digital Jordan theory and homological invariants, our approach provides a valuable alternative to standard evaluation metrics, especially in applications where topological correctness must be preserved.
|
https://arxiv.org/abs/2601.10577
|
Academic Papers
|
svg
|
21fddbf25483cf1e73fcb3235b2a7bb0d52c0e5ab67c4849ca7ba297ac3f23d3
|
2026-01-16T00:00:00-05:00
|
Form and Meaning in Intrinsic Multilingual Evaluations
|
arXiv:2601.10580v1 Announce Type: new Abstract: Intrinsic evaluation metrics for conditional language models, such as perplexity or bits-per-character, are widely used in both mono- and multilingual settings. These metrics are rather straightforward to use and compare in monolingual setups, but rest on a number of assumptions in multilingual setups. One such assumption is that comparing the perplexity of CLMs on parallel sentences is indicative of their quality since the information content (here understood as the semantic meaning) is the same. However, the metrics are inherently measuring information content in the information-theoretic sense. We make this and other such assumptions explicit and discuss their implications. We perform experiments with six metrics on two multi-parallel corpora both with mono- and multilingual models. Ultimately, we find that current metrics are not universally comparable. We look at the form-meaning debate to provide some explanation for this.
|
https://arxiv.org/abs/2601.10580
|
Academic Papers
|
svg
|
3202d5939ca811e47d80185cd09b98e0145e013904757a6cb5a4cac30fd07b57
|
2026-01-16T00:00:00-05:00
|
From Single to Multi-Agent Reasoning: Advancing GeneGPT for Genomics QA
|
arXiv:2601.10581v1 Announce Type: new Abstract: Comprehending genomic information is essential for biomedical research, yet extracting data from complex distributed databases remains challenging. Large language models (LLMs) offer potential for genomic Question Answering (QA) but face limitations due to restricted access to domain-specific databases. GeneGPT is the current state-of-the-art system that enhances LLMs by utilizing specialized API calls, though it is constrained by rigid API dependencies and limited adaptability. We replicate GeneGPT and propose GenomAgent, a multi-agent framework that efficiently coordinates specialized agents for complex genomics queries. Evaluated on nine tasks from the GeneTuring benchmark, GenomAgent outperforms GeneGPT by 12% on average, and its flexible architecture extends beyond genomics to various scientific domains needing expert knowledge extraction.
|
https://arxiv.org/abs/2601.10581
|
Academic Papers
|
svg
|
fb6d1fd4d02ef403198c419201ed8b97be6073921e7e3ab9540f1c61a65e5817
|
2026-01-16T00:00:00-05:00
|
Mitigating GIL Bottlenecks in Edge AI Systems
|
arXiv:2601.10582v1 Announce Type: new Abstract: Deploying Python based AI agents on resource-constrained edge devices presents a runtime optimization challenge: high thread counts are needed to mask I/O latency, yet Python's Global Interpreter Lock (GIL) serializes execution. We demonstrate that naive thread-pool scaling causes a "saturation cliff": >= 20% throughput degradation at overprovisioned thread counts (N >= 512) on edge-representative configurations. We present a lightweight profiling tool and adaptive runtime system using a Blocking Ratio metric (beta) that distinguishes genuine I/O wait from GIL contention. Our library-based solution achieves 96.5% of optimal performance without manual tuning, outperforming multiprocessing (limited by ~8x memory overhead on devices with 512 MB-2 GB RAM) and asyncio (blocked by CPU-bound phases). Evaluation across seven edge AI workload profiles, including real ML inference with ONNX Runtime MobileNetV2, demonstrates 93.9% average efficiency. Comparative experiments with Python 3.13t (free threading) show that while GIL elimination enables ~4x throughput on multi-core edge devices, the saturation cliff persists on single-core devices, validating our beta metric for both GIL and no-GIL environments. This provides practical optimization for edge AI systems.
|
https://arxiv.org/abs/2601.10582
|
Academic Papers
|
svg
|
f0b688a7d088aba15a1d564ebce14281a6ba8f578993887f90431f6fe3619a1a
|
2026-01-16T00:00:00-05:00
|
Combinatorial Optimization Augmented Machine Learning
|
arXiv:2601.10583v1 Announce Type: new Abstract: Combinatorial optimization augmented machine learning (COAML) has recently emerged as a powerful paradigm for integrating predictive models with combinatorial decision-making. By embedding combinatorial optimization oracles into learning pipelines, COAML enables the construction of policies that are both data-driven and feasibility-preserving, bridging the traditions of machine learning, operations research, and stochastic optimization. This paper provides a comprehensive overview of the state of the art in COAML. We introduce a unifying framework for COAML pipelines, describe their methodological building blocks, and formalize their connection to empirical cost minimization. We then develop a taxonomy of problem settings based on the form of uncertainty and decision structure. Using this taxonomy, we review algorithmic approaches for static and dynamic problems, survey applications across domains such as scheduling, vehicle routing, stochastic programming, and reinforcement learning, and synthesize methodological contributions in terms of empirical cost minimization, imitation learning, and reinforcement learning. Finally, we identify key research frontiers. This survey aims to serve both as a tutorial introduction to the field and as a roadmap for future research at the interface of combinatorial optimization and machine learning.
|
https://arxiv.org/abs/2601.10583
|
Academic Papers
|
svg
|
1f025e458f7f62dfbb9b6211f1bd74405f6263f6196b496e39b6ba16fb73b4cd
|
2026-01-16T00:00:00-05:00
|
Adversarial Evasion Attacks on Computer Vision using SHAP Values
|
arXiv:2601.10587v1 Announce Type: new Abstract: The paper introduces a white-box attack on computer vision models using SHAP values. It demonstrates how adversarial evasion attacks can compromise the performance of deep learning models by reducing output confidence or inducing misclassifications. Such attacks are particularly insidious as they can deceive the perception of an algorithm while eluding human perception due to their imperceptibility to the human eye. The proposed attack leverages SHAP values to quantify the significance of individual inputs to the output at the inference stage. A comparison is drawn between the SHAP attack and the well-known Fast Gradient Sign Method. We find evidence that SHAP attacks are more robust in generating misclassifications particularly in gradient hiding scenarios.
|
https://arxiv.org/abs/2601.10587
|
Academic Papers
|
svg
|
0e50ee7bb714f1aae7f6b366c8beaa572ef9db6a39cf29d16d593b6c657b86e5
|
2026-01-16T00:00:00-05:00
|
Be Your Own Red Teamer: Safety Alignment via Self-Play and Reflective Experience Replay
|
arXiv:2601.10589v1 Announce Type: new Abstract: Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial ``jailbreak'' attacks designed to bypass safety guardrails. Current safety alignment methods depend heavily on static external red teaming, utilizing fixed defense prompts or pre-collected adversarial datasets. This leads to a rigid defense that overfits known patterns and fails to generalize to novel, sophisticated threats. To address this critical limitation, we propose empowering the model to be its own red teamer, capable of achieving autonomous and evolving adversarial attacks. Specifically, we introduce Safety Self- Play (SSP), a system that utilizes a single LLM to act concurrently as both the Attacker (generating jailbreaks) and the Defender (refusing harmful requests) within a unified Reinforcement Learning (RL) loop, dynamically evolving attack strategies to uncover vulnerabilities while simultaneously strengthening defense mechanisms. To ensure the Defender effectively addresses critical safety issues during the self-play, we introduce an advanced Reflective Experience Replay Mechanism, which uses an experience pool accumulated throughout the process. The mechanism employs a Upper Confidence Bound (UCB) sampling strategy to focus on failure cases with low rewards, helping the model learn from past hard mistakes while balancing exploration and exploitation. Extensive experiments demonstrate that our SSP approach autonomously evolves robust defense capabilities, significantly outperforming baselines trained on static adversarial datasets and establishing a new benchmark for proactive safety alignment.
|
https://arxiv.org/abs/2601.10589
|
Academic Papers
|
svg
|
d18697f792c81a4c3d8a66b885232f8200e92fafa3e359ab2a4a7e918c05111a
|
2026-01-16T00:00:00-05:00
|
ProbFM: Probabilistic Time Series Foundation Model with Uncertainty Decomposition
|
arXiv:2601.10591v1 Announce Type: new Abstract: Time Series Foundation Models (TSFMs) have emerged as a promising approach for zero-shot financial forecasting, demonstrating strong transferability and data efficiency gains. However, their adoption in financial applications is hindered by fundamental limitations in uncertainty quantification: current approaches either rely on restrictive distributional assumptions, conflate different sources of uncertainty, or lack principled calibration mechanisms. While recent TSFMs employ sophisticated techniques such as mixture models, Student's t-distributions, or conformal prediction, they fail to address the core challenge of providing theoretically-grounded uncertainty decomposition. For the very first time, we present a novel transformer-based probabilistic framework, ProbFM (probabilistic foundation model), that leverages Deep Evidential Regression (DER) to provide principled uncertainty quantification with explicit epistemic-aleatoric decomposition. Unlike existing approaches that pre-specify distributional forms or require sampling-based inference, ProbFM learns optimal uncertainty representations through higher-order evidence learning while maintaining single-pass computational efficiency. To rigorously evaluate the core DER uncertainty quantification approach independent of architectural complexity, we conduct an extensive controlled comparison study using a consistent LSTM architecture across five probabilistic methods: DER, Gaussian NLL, Student's-t NLL, Quantile Loss, and Conformal Prediction. Evaluation on cryptocurrency return forecasting demonstrates that DER maintains competitive forecasting accuracy while providing explicit epistemic-aleatoric uncertainty decomposition. This work establishes both an extensible framework for principled uncertainty quantification in foundation models and empirical evidence for DER's effectiveness in financial applications.
|
https://arxiv.org/abs/2601.10591
|
Academic Papers
|
svg
|
fa0b4b67f9b0fb9590c2125b85905cfda6beaed1987eaca04f7499755b0710dd
|
2026-01-16T00:00:00-05:00
|
Action100M: A Large-scale Video Action Dataset
|
arXiv:2601.10592v1 Announce Type: new Abstract: Inferring physical actions from visual observations is a fundamental capability for advancing machine intelligence in the physical world. Achieving this requires large-scale, open-vocabulary video action datasets that span broad domains. We introduce Action100M, a large-scale dataset constructed from 1.2M Internet instructional videos (14.6 years of duration), yielding O(100 million) temporally localized segments with open-vocabulary action supervision and rich captions. Action100M is generated by a fully automated pipeline that (i) performs hierarchical temporal segmentation using V-JEPA 2 embeddings, (ii) produces multi-level frame and segment captions organized as a Tree-of-Captions, and (iii) aggregates evidence with a reasoning model (GPT-OSS-120B) under a multi-round Self-Refine procedure to output structured annotations (brief/detailed action, actor, brief/detailed caption). Training VL-JEPA on Action100M demonstrates consistent data-scaling improvements and strong zero-shot performance across diverse action recognition benchmarks, establishing Action100M as a new foundation for scalable research in video understanding and world modeling.
|
https://arxiv.org/abs/2601.10592
|
Academic Papers
|
svg
|
9197e08dfa019f720d783e0c786bb9246937cbfd43387ef5fdb75c81a7fcd635
|
2026-01-16T00:00:00-05:00
|
Improving Database Performance by Application-side Transaction Merging
|
arXiv:2601.10596v1 Announce Type: new Abstract: This paper explores a new opportunity to improve the performance of transaction processing at the application side by merging structurely similar statements or transactions. Concretely, we re-write transactions to 1) merge similar statements using specific SQL semantics; 2) eliminate redundant reads; and 3) merge contending statements across transactions by pre-computing their aggregated effect. Following this idea, we present the design of TransactionMerger, a middleware to collect and merge transactions across different clients. We further present a static analysis tool to identify the merging opportunity without violating isolation as well as our experience of re-writing transactions in TPC-C and Spree, a popular real-world application. Our evaluation shows that such transaction merging can improve TPC-C throughput by up to 2.65X and Spree throughput by 3.52X.
|
https://arxiv.org/abs/2601.10596
|
Academic Papers
|
svg
|
89279757b347cb21f304d931d27c343f0a97df46819292e20e18f6edc879bfc1
|
2026-01-16T00:00:00-05:00
|
Institutional AI: A Governance Framework for Distributional AGI Safety
|
arXiv:2601.10599v1 Announce Type: new Abstract: As LLM-based systems increasingly operate as agents embedded within human social and technical systems, alignment can no longer be treated as a property of an isolated model, but must be understood in relation to the environments in which these agents act. Even the most sophisticated methods of alignment, such as Reinforcement Learning through Human Feedback (RHLF) or through AI Feedback (RLAIF) cannot ensure control once internal goal structures diverge from developer intent. We identify three structural problems that emerge from core properties of AI models: (1) behavioral goal-independence, where models develop internal objectives and misgeneralize goals; (2) instrumental override of natural-language constraints, where models regard safety principles as non-binding while pursuing latent objectives, leveraging deception and manipulation; and (3) agentic alignment drift, where individually aligned agents converge to collusive equilibria through interaction dynamics invisible to single-agent audits. The solution this paper advances is Institutional AI: a system-level approach that treats alignment as a question of effective governance of AI agent collectives. We argue for a governance-graph that details how to constrain agents via runtime monitoring, incentive shaping through prizes and sanctions, explicit norms and enforcement roles. This institutional turn reframes safety from software engineering to a mechanism design problem, where the primary goal of alignment is shifting the payoff landscape of AI agent collectives.
|
https://arxiv.org/abs/2601.10599
|
Academic Papers
|
svg
|
1dd02c15a14b154b16534b038192005a866e61785f238a5446304a6b9b417063
|
2026-01-16T00:00:00-05:00
|
Procedural Fairness in Multi-Agent Bandits
|
arXiv:2601.10600v1 Announce Type: new Abstract: In the context of multi-agent multi-armed bandits (MA-MAB), fairness is often reduced to outcomes: maximizing welfare, reducing inequality, or balancing utilities. However, evidence in psychology, economics, and Rawlsian theory suggests that fairness is also about process and who gets a say in the decisions being made. We introduce a new fairness objective, procedural fairness, which provides equal decision-making power for all agents, lies in the core, and provides for proportionality in outcomes. Empirical results confirm that fairness notions based on optimizing for outcomes sacrifice equal voice and representation, while the sacrifice in outcome-based fairness objectives (like equality and utilitarianism) is minimal under procedurally fair policies. We further prove that different fairness notions prioritize fundamentally different and incompatible values, highlighting that fairness requires explicit normative choices. This paper argues that procedural legitimacy deserves greater focus as a fairness objective, and provides a framework for putting procedural fairness into practice.
|
https://arxiv.org/abs/2601.10600
|
Academic Papers
|
svg
|
e86fbb8a8f78185f5ecf99f041d1a097634bdaf41213fe88a3926f0e1763ef7c
|
2026-01-16T00:00:00-05:00
|
Fundamental Limits of Multi-User Distributed Computing of Linearly Separable Functions
|
arXiv:2601.10603v1 Announce Type: new Abstract: This work establishes the fundamental limits of the classical problem of multi-user distributed computing of linearly separable functions. In particular, we consider a distributed computing setting involving $L$ users, each requesting a linearly separable function over $K$ basis subfunctions from a master node, who is assisted by $N$ distributed servers. At the core of this problem lies a fundamental tradeoff between communication and computation: each server can compute up to $M$ subfunctions, and each server can communicate linear combinations of their locally computed subfunctions outputs to at most $\Delta$ users. The objective is to design a distributed computing scheme that reduces the communication cost (total amount of data from servers to users), and towards this, for any given $K$, $L$, $M$, and $\Delta$, we propose a distributed computing scheme that jointly designs the task assignment and transmissions, and shows that the scheme achieves optimal performance in the real field under various conditions using a novel converse. We also characterize the performance of the scheme in the finite field using another converse based on counting arguments.
|
https://arxiv.org/abs/2601.10603
|
Academic Papers
|
svg
|
73d2e7b3f00cd88bd83a2378df991251f1ec2f241ec41d2f30b95f7542b44661
|
2026-01-16T00:00:00-05:00
|
Translating database mathematical schemes into relational database software applications with MatBase
|
arXiv:2601.10604v1 Announce Type: new Abstract: We present a pseudocode algorithm for translating our (Elementary) Mathematical Data Model schemes into relational ones and associated sets of non-relational constraints, used by MatBase, our intelligent database management system prototype. We prove that this algorithm is very fast, solid, complete, and optimal. We apply it to a mathematical scheme modeling the genealogical trees subuniverse. We also provide examples of SQL and VBA code for enforcing some of its non-relational constraints, as well as guidelines to develop code for enforcing such constraints.
|
https://arxiv.org/abs/2601.10604
|
Academic Papers
|
svg
|
65fa3e9a3db62ab2b4a5c9edf91d2f1c541809933f49d55781febf0cd2827a26
|
2026-01-16T00:00:00-05:00
|
A user subscription model in mobile radio access networks with network slicing
|
arXiv:2601.10605v1 Announce Type: new Abstract: Network slicing is an architectural enabling technology that logically decouples the current cellular networks into infrastructure providers (InPs) and Network Slice Tenants (NSTs). The network resources (e.g., radio access resources at each cell) are owned by the InP, and are shared by the NSTs to provide a service to their mobile users. In this context, we proposed a business model that includes resource allocation and user subscription to NSTs in a competitive setting, and provides, among other things, closed-form expressions for the subscription indicators in equilibrium of each NST at each cell. This model relies on the widely adopted logit model to characterize user subscriptions. However, as a consequence of user mobility and radio propagation, some of the underlying assumptions in the logit model do not hold. Therefore, further research is needed to assess the accuracy of the results provided by the logit model in a mobile radio scenario. We carry out a thorough evaluation of the validity of the model by comparing its results against those obtained through computer simulation. Our simulation model includes complete and realistic characterizations of user mobility and radio propagation. From the results, we conclude in most cases the logit model provides valid results in a mobile radio scenario.
|
https://arxiv.org/abs/2601.10605
|
Academic Papers
|
svg
|
72bcc3ddbf5ddf9ce09528be1c297496405abd943bb150b07fa67edf77b5f3f7
|
2026-01-16T00:00:00-05:00
|
RSATalker: Realistic Socially-Aware Talking Head Generation for Multi-Turn Conversation
|
arXiv:2601.10606v1 Announce Type: new Abstract: Talking head generation is increasingly important in virtual reality (VR), especially for social scenarios involving multi-turn conversation. Existing approaches face notable limitations: mesh-based 3D methods can model dual-person dialogue but lack realistic textures, while large-model-based 2D methods produce natural appearances but incur prohibitive computational costs. Recently, 3D Gaussian Splatting (3DGS) based methods achieve efficient and realistic rendering but remain speaker-only and ignore social relationships. We introduce RSATalker, the first framework that leverages 3DGS for realistic and socially-aware talking head generation with support for multi-turn conversation. Our method first drives mesh-based 3D facial motion from speech, then binds 3D Gaussians to mesh facets to render high-fidelity 2D avatar videos. To capture interpersonal dynamics, we propose a socially-aware module that encodes social relationships, including blood and non-blood as well as equal and unequal, into high-level embeddings through a learnable query mechanism. We design a three-stage training paradigm and construct the RSATalker dataset with speech-mesh-image triplets annotated with social relationships. Extensive experiments demonstrate that RSATalker achieves state-of-the-art performance in both realism and social awareness. The code and dataset will be released.
|
https://arxiv.org/abs/2601.10606
|
Academic Papers
|
svg
|
dd461532021a969d32d60a44693361249c2bf70818c664cdf285ea0f9490f850
|
2026-01-16T00:00:00-05:00
|
iTIMO: An LLM-empowered Synthesis Dataset for Travel Itinerary Modification
|
arXiv:2601.10609v1 Announce Type: new Abstract: Addressing itinerary modification is crucial for enhancing the travel experience as it is a frequent requirement during traveling. However, existing research mainly focuses on fixed itinerary planning, leaving modification underexplored. To bridge this gap, we formally define the itinerary modification task and introduce iTIMO, a dataset specifically tailored for this purpose. We identify the lack of {\itshape need-to-modify} itinerary data as the critical bottleneck hindering research on this task and propose a general pipeline to overcome it. This pipeline frames the generation of such data as an intent-driven perturbation task. It instructs large language models to perturb real world itineraries using three atomic editing operations: REPLACE, ADD, and DELETE. Each perturbation is grounded in three intents, including disruptions of popularity, spatial distance, and category diversity. Furthermore, a hybrid evaluation metric is designed to ensure perturbation effectiveness. We conduct comprehensive experiments on iTIMO, revealing the limitations of current LLMs and lead to several valuable directions for future research. Dataset and corresponding code are available at https://github.com/zelo2/iTIMO.
|
https://arxiv.org/abs/2601.10609
|
Academic Papers
|
svg
|
9264a517d9a111d7d64abe36ec6dbb23b2d876c045fb85e427856bb458d84a0b
|
2026-01-16T00:00:00-05:00
|
Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding
|
arXiv:2601.10611v1 Announce Type: new Abstract: Today's strongest video-language models (VLMs) remain proprietary. The strongest open-weight models either rely on synthetic data from proprietary VLMs, effectively distilling from them, or do not disclose their training data or recipe. As a result, the open-source community lacks the foundations needed to improve on the state-of-the-art video (and image) language models. Crucially, many downstream applications require more than just high-level video understanding; they require grounding -- either by pointing or by tracking in pixels. Even proprietary models lack this capability. We present Molmo2, a new family of VLMs that are state-of-the-art among open-source models and demonstrate exceptional new capabilities in point-driven grounding in single image, multi-image, and video tasks. Our key contribution is a collection of 7 new video datasets and 2 multi-image datasets, including a dataset of highly detailed video captions for pre-training, a free-form video Q&A dataset for fine-tuning, a new object tracking dataset with complex queries, and an innovative new video pointing dataset, all collected without the use of closed VLMs. We also present a training recipe for this data utilizing an efficient packing and message-tree encoding scheme, and show bi-directional attention on vision tokens and a novel token-weight strategy improves performance. Our best-in-class 8B model outperforms others in the class of open weight and data models on short videos, counting, and captioning, and is competitive on long-videos. On video-grounding Molmo2 significantly outperforms existing open-weight models like Qwen3-VL (35.5 vs 29.6 accuracy on video counting) and surpasses proprietary models like Gemini 3 Pro on some tasks (38.4 vs 20.0 F1 on video pointing and 56.2 vs 41.1 J&F on video tracking).
|
https://arxiv.org/abs/2601.10611
|
Academic Papers
|
svg
|
251815a3210103f3027d5de988f284b3db72edeac44eb21aadd8c84ac8510388
|
2026-01-16T00:00:00-05:00
|
Basis-Spline Assisted Coded Computing: Strategies and Error Bounds
|
arXiv:2601.10616v1 Announce Type: new Abstract: Coded computing has become a key framework for reliable distributed computation over decentralized networks, effectively mitigating the impact of stragglers. Although there exists a wide range of coded computing methods to handle both polynomial and non-polynomial functions, computing methods for the latter class have received traction due its inherent challenges in reconstructing non-polynomial functions using a finite number of evaluations. Among them, the state-of-the-art method is Berrut Approximated coded computing, wherein Berrut interpolants, are used for approximating the non-polynomial function. However, since Berrut interpolants have global support characteristics, such methods are known to offer degraded accuracy when the number of stragglers is large. To address this challenge, we propose a coded computing framework based on cubic B-spline interpolation. In our approach, server-side function evaluations are reconstructed at the master node using B-splines, exploiting their local support and smoothness properties to enhance stability and accuracy. We provide a systematic methodology for integrating B-spline interpolation into coded computing and derive theoretical bounds on approximation error in terms of the number of servers and stragglers. Comparative analysis demonstrates that our framework significantly outperforms Berrut-based methods for various non-polynomial functions.
|
https://arxiv.org/abs/2601.10616
|
Academic Papers
|
svg
|
6a414d3c75021b6c09c70e9614665a7715e16e17c847d44dfd7fadc5768233a9
|
2026-01-16T00:00:00-05:00
|
Extrinsic Vector Field Processing
|
arXiv:2601.10621v1 Announce Type: new Abstract: We propose a novel discretization of tangent vector fields for triangle meshes. Starting with a Phong map continuously assigning normals to all points on the mesh, we define an extrinsic bases for continuous tangent vector fields by using the Rodrigues rotation to transport tangent vectors assigned to vertices to tangent vectors in the interiors of the triangles. As our vector fields are continuous and weakly differentiable, we can use them to define a covariant derivative field that is evaluatable almost-everywhere on the mesh. Decomposing the covariant derivative in terms of diagonal multiple of the identity, anti-symmetric, and trace-less symmetric components, we can define the standard operators used for vector field processing including the Hodge Laplacian energy, Connection Laplacian energy, and Killing energy. Additionally, the ability to perform point-wise evaluation of the covariant derivative also makes it possible for us to define the Lie bracket.
|
https://arxiv.org/abs/2601.10621
|
Academic Papers
|
svg
|
6fce7c28e07f21b3c35f781bdc2cd0847b136d5f4bda72ba3a8eca2f2607e024
|
2026-01-16T00:00:00-05:00
|
CoMoVi: Co-Generation of 3D Human Motions and Realistic Videos
|
arXiv:2601.10632v1 Announce Type: new Abstract: In this paper, we find that the generation of 3D human motions and 2D human videos is intrinsically coupled. 3D motions provide the structural prior for plausibility and consistency in videos, while pre-trained video models offer strong generalization capabilities for motions, which necessitate coupling their generation processes. Based on this, we present CoMoVi, a co-generative framework that couples two video diffusion models (VDMs) to generate 3D human motions and videos synchronously within a single diffusion denoising loop. To achieve this, we first propose an effective 2D human motion representation that can inherit the powerful prior of pre-trained VDMs. Then, we design a dual-branch diffusion model to couple human motion and video generation process with mutual feature interaction and 3D-2D cross attentions. Moreover, we curate CoMoVi Dataset, a large-scale real-world human video dataset with text and motion annotations, covering diverse and challenging human motions. Extensive experiments demonstrate the effectiveness of our method in both 3D human motion and video generation tasks.
|
https://arxiv.org/abs/2601.10632
|
Academic Papers
|
svg
|
d1941667fd5182abd2853b0b1376c00b1f1d003637b3d0218030d9813a93716b
|
2026-01-16T00:00:00-05:00
|
STEM: Scaling Transformers with Embedding Modules
|
arXiv:2601.10639v1 Announce Type: new Abstract: Fine-grained sparsity promises higher parametric capacity without proportional per-token compute, but often suffers from training instability, load balancing, and communication overhead. We introduce STEM (Scaling Transformers with Embedding Modules), a static, token-indexed approach that replaces the FFN up-projection with a layer-local embedding lookup while keeping the gate and down-projection dense. This removes runtime routing, enables CPU offload with asynchronous prefetch, and decouples capacity from both per-token FLOPs and cross-device communication. Empirically, STEM trains stably despite extreme sparsity. It improves downstream performance over dense baselines while reducing per-token FLOPs and parameter accesses (eliminating roughly one-third of FFN parameters). STEM learns embedding spaces with large angular spread which enhances its knowledge storage capacity. More interestingly, this enhanced knowledge capacity comes with better interpretability. The token-indexed nature of STEM embeddings allows simple ways to perform knowledge editing and knowledge injection in an interpretable manner without any intervention in the input text or additional computation. In addition, STEM strengthens long-context performance: as sequence length grows, more distinct parameters are activated, yielding practical test-time capacity scaling. Across 350M and 1B model scales, STEM delivers up to ~3--4% accuracy improvements overall, with notable gains on knowledge and reasoning-heavy benchmarks (ARC-Challenge, OpenBookQA, GSM8K, MMLU). Overall, STEM is an effective way of scaling parametric memory while providing better interpretability, better training stability and improved efficiency.
|
https://arxiv.org/abs/2601.10639
|
Academic Papers
|
svg
|
41dee18ecb22e09d379fc04928394ded949b57c30f1b3124dd12ae93760abccc
|
2026-01-16T00:00:00-05:00
|
Converse Bounds for Sun-Jafar-type Weak Private Information Retrieval
|
arXiv:2601.10643v1 Announce Type: new Abstract: Building on the well-established capacity-achieving schemes of Sun-Jafar (for replicated storage) and the closely related scheme of Banawan-Ulukus (for MDS-coded setting), a recent work by Chandan et al. proposed new classes of weak private information retrieval (WPIR) schemes for the collusion-free (replication and MDS-coded) setting, as well as for the $T$-colluding scenario. In their work, Chandan et al. characterized the expressions for the rate-privacy trade-offs for these classes of WPIR schemes, under the mutual information leakage and maximal leakage metrics. Explicit achievable trade-offs for the same were also presented, which were shown to be competitive or better than prior WPIR schemes. However, the class-wise optimality of the reported trade-offs were unknown. In this work, we show that the explicit rate-privacy trade-offs reported for the Sun-Jafar-type schemes by Chandan et al. are optimal for the non-colluding and replicated setting. Furthermore, we prove the class-wise optimality for Banawan-Ulukus-type MDS-WPIR and Sun-Jafar-type $T$-colluding WPIR schemes, under threshold-constraints on the system parameters. When these threshold-constraints do not hold, we present counter-examples which show that even higher rates than those reported before can be achieved.
|
https://arxiv.org/abs/2601.10643
|
Academic Papers
|
svg
|
e0b5dec71999696d0780c8697b38dd5ef14ef5a07cd0ab8d142fc0e0a08da39a
|
2026-01-16T00:00:00-05:00
|
RoutIR: Fast Serving of Retrieval Pipelines for Retrieval-Augmented Generation
|
arXiv:2601.10644v1 Announce Type: new Abstract: Retrieval models are key components of Retrieval-Augmented Generation (RAG) systems, which generate search queries, process the documents returned, and generate a response. RAG systems are often dynamic and may involve multiple rounds of retrieval. While many state-of-the-art retrieval methods are available through academic IR platforms, these platforms are typically designed for the Cranfield paradigm in which all queries are known up front and can be batch processed offline. This simplification accelerates research but leaves state-of-the-art retrieval models unable to support downstream applications that require online services, such as arbitrary dynamic RAG pipelines that involve looping, feedback, or even self-organizing agents. In this work, we introduce RoutIR, a Python package that provides a simple and efficient HTTP API that wraps arbitrary retrieval methods, including first stage retrieval, reranking, query expansion, and result fusion. By providing a minimal JSON configuration file specifying the retrieval models to serve, RoutIR can be used to construct and query retrieval pipelines on-the-fly using any permutation of available models (e.g., fusing the results of several first-stage retrieval methods followed by reranking). The API automatically performs asynchronous query batching and caches results by default. While many state-of-the-art retrieval methods are already supported by the package, RoutIR is also easily expandable by implementing the Engine abstract class. The package is open-sourced and publicly available on GitHub: http://github.com/hltcoe/routir.
|
https://arxiv.org/abs/2601.10644
|
Academic Papers
|
svg
|
cb7e6f06dda279cf5b7c7a05c69c58a7640549a8ba7eeaa9e95319d119ebc731
|
2026-01-16T00:00:00-05:00
|
Influential Training Data Retrieval for Explaining Verbalized Confidence of LLMs
|
arXiv:2601.10645v1 Announce Type: new Abstract: Large language models (LLMs) can increase users' perceived trust by verbalizing confidence in their outputs. However, prior work has shown that LLMs are often overconfident, making their stated confidence unreliable since it does not consistently align with factual accuracy. To better understand the sources of this verbalized confidence, we introduce TracVC (\textbf{Trac}ing \textbf{V}erbalized \textbf{C}onfidence), a method that builds on information retrieval and influence estimation to trace generated confidence expressions back to the training data. We evaluate TracVC on OLMo and Llama models in a question answering setting, proposing a new metric, content groundness, which measures the extent to which an LLM grounds its confidence in content-related training examples (relevant to the question and answer) versus in generic examples of confidence verbalization. Our analysis reveals that OLMo2-13B is frequently influenced by confidence-related data that is lexically unrelated to the query, suggesting that it may mimic superficial linguistic expressions of certainty rather than rely on genuine content grounding. These findings point to a fundamental limitation in current training regimes: LLMs may learn how to sound confident without learning when confidence is justified. Our analysis provides a foundation for improving LLMs' trustworthiness in expressing more reliable confidence.
|
https://arxiv.org/abs/2601.10645
|
Academic Papers
|
svg
|
31dca2de534d4de34acfa5602deb537ac67b8de185bca2ebce8fac8c6f3dc879
|
2026-01-16T00:00:00-05:00
|
One-Shot Broadcast Joint Source-Channel Coding with Codebook Diversity
|
arXiv:2601.10648v1 Announce Type: new Abstract: We study a one-shot joint source-channel coding setting where the source is encoded once and broadcast to $K$ decoders through independent channels. Success is predicated on at least one decoder recovering the source within a maximum distortion constraint. We find that in the one-shot regime, utilizing disjoint codebooks at each decoder yields a codebook diversity gain, distinct from the channel diversity gain that may be expected when several decoders observe independent realizations of the channel's output but share the same codebook. Coding schemes are introduced that leverage this phenomenon, where first- and second-order achievability bounds are derived via an adaptation of the Poisson matching lemma (Li and Anantharam, 2021) which allows for multiple decoders using disjoint codebooks. We further propose a hybrid coding scheme that partitions decoders into groups to optimally balance codebook and channel diversity. Numerical results on the binary symmetric channel demonstrate that the hybrid approach outperforms strategies where the decoders' codebooks are either fully shared or disjoint.
|
https://arxiv.org/abs/2601.10648
|
Academic Papers
|
svg
|
f5106a12e2502f0cbb6d5642be0d8f81bbceb0d9ed3f1fe900aa7ef915d8f439
|
2026-01-16T00:00:00-05:00
|
CURVE: A Benchmark for Cultural and Multilingual Long Video Reasoning
|
arXiv:2601.10649v1 Announce Type: new Abstract: Recent advancements in video models have shown tremendous progress, particularly in long video understanding. However, current benchmarks predominantly feature western-centric data and English as the dominant language, introducing significant biases in evaluation. To address this, we introduce CURVE (Cultural Understanding and Reasoning in Video Evaluation), a challenging benchmark for multicultural and multilingual video reasoning. CURVE comprises high-quality, entirely human-generated annotations from diverse, region-specific cultural videos across 18 global locales. Unlike prior work that relies on automatic translations, CURVE provides complex questions, answers, and multi-step reasoning steps, all crafted in native languages. Making progress on CURVE requires a deeply situated understanding of visual cultural context. Furthermore, we leverage CURVE's reasoning traces to construct evidence-based graphs and propose a novel iterative strategy using these graphs to identify fine-grained errors in reasoning. Our evaluations reveal that SoTA Video-LLMs struggle significantly, performing substantially below human-level accuracy, with errors primarily stemming from the visual perception of cultural elements. CURVE will be publicly available under https://github.com/google-deepmind/neptune?tab=readme-ov-file\#minerva-cultural
|
https://arxiv.org/abs/2601.10649
|
Academic Papers
|
svg
|
81844402bfacac18034ebfca6353ba76a4838e99024ac85b991514a98569897b
|
2026-01-16T00:00:00-05:00
|
Multi-Property Synthesis
|
arXiv:2601.10651v1 Announce Type: new Abstract: We study LTLf synthesis with multiple properties, where satisfying all properties may be impossible. Instead of enumerating subsets of properties, we compute in one fixed-point computation the relation between product-game states and the goal sets that are realizable from them, and we synthesize strategies achieving maximal realizable sets. We develop a fully symbolic algorithm that introduces Boolean goal variables and exploits monotonicity to represent exponentially many goal combinations compactly. Our approach substantially outperforms enumeration-based baselines, with speedups of up to two orders of magnitude.
|
https://arxiv.org/abs/2601.10651
|
Academic Papers
|
svg
|
773abfe48d5fbbdadbb2dbdc040879cdc857fcded6bf06b6eeddf32cbb7d45b6
|
2026-01-16T00:00:00-05:00
|
PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution
|
arXiv:2601.10657v1 Announce Type: new Abstract: Large Language Models (LLMs) have emerged as powerful operators for evolutionary search, yet the design of efficient search scaffolds remains ad hoc. While promising, current LLM-in-the-loop systems lack a systematic approach to managing the evolutionary process. We identify three distinct failure modes: Context Pollution, where experiment history biases future candidate generation; Mode Collapse, where agents stagnate in local minima due to poor exploration-exploitation balance; and Weak Collaboration, where rigid crossover strategies fail to leverage parallel search trajectories effectively. We introduce Progress-Aware Consistent Evolution (PACEvolve), a framework designed to robustly govern the agent's context and search dynamics, to address these challenges. PACEvolve combines hierarchical context management (HCM) with pruning to address context pollution; momentum-based backtracking (MBB) to escape local minima; and a self-adaptive sampling policy that unifies backtracking and crossover for dynamic search coordination (CE), allowing agents to balance internal refinement with cross-trajectory collaboration. We demonstrate that PACEvolve provides a systematic path to consistent, long-horizon self-improvement, achieving state-of-the-art results on LLM-SR and KernelBench, while discovering solutions surpassing the record on Modded NanoGPT.
|
https://arxiv.org/abs/2601.10657
|
Academic Papers
|
svg
|
31030d2f95a522939b89ad75653e2933f9c08d51eb6de24c8ccbe09cf18497fb
|
2026-01-16T00:00:00-05:00
|
Detecting Winning Arguments with Large Language Models and Persuasion Strategies
|
arXiv:2601.10660v1 Announce Type: new Abstract: Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates the role of persuasion strategies - such as Attack on reputation, Distraction, and Manipulative wording - in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our approach leverages large language models (LLMs) with a Multi-Strategy Persuasion Scoring approach that guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves the prediction of persuasiveness. To better understand the influence of content, we organize the Winning Argument dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
|
https://arxiv.org/abs/2601.10660
|
Academic Papers
|
svg
|
0f92c89363b57bb972a944d3ff891a17b058a5b0f49ca6e241a3a84a68d71f7e
|
2026-01-16T00:00:00-05:00
|
Stable evaluation of derivatives for barycentric and continued fraction representations of rational functions
|
arXiv:2601.10667v1 Announce Type: new Abstract: Fast algorithms for approximation by rational functions exist for both barycentric and Thiele continued fraction (TCF) representations. We present the first numerically stable methods for derivative evaluation in the barycentric representation, including an $O(n)$ algorithm for all derivatives. We also extend an earlier $O(n)$ algorithm for evaluation of the TCF first derivative to higher orders. Numerical experiments confirm the robustness and efficiency of the proposed methods.
|
https://arxiv.org/abs/2601.10667
|
Academic Papers
|
svg
|
b3095b7dab57c49dad3919f44845062dc3dd9c7c8881943df93efb69b537a14c
|
2026-01-16T00:00:00-05:00
|
Safe Trajectory Gradient Flow Control of a Grid-Interfacing Inverter
|
arXiv:2601.10671v1 Announce Type: new Abstract: Grid-interfacing inverters serve as the interface between renewable energy resources and the electric power grid, offering fast, programmable control capabilities. However, their operation is constrained by hardware limitations, such as bounds on the current magnitude. Existing control methods for these systems often neglect these constraints during controller design and instead rely on ad hoc limiters, which can introduce instability or degrade performance. In this work, we present a control framework that directly incorporates constraints into the control of a voltage-source inverter. We propose a safe trajectory gradient flow controller, which applies the safe gradient flow method to a rolling horizon trajectory optimization problem to ensure that the states remain within a safe set defined by the constraints while directing the trajectory towards an optimal equilibrium point of a nonlinear program. Simulation results demonstrate that our approach can drive the outputs of a simulated inverter system to optimal values and maintain state constraints, even when using a limited number of optimization steps per control cycle.
|
https://arxiv.org/abs/2601.10671
|
Academic Papers
|
svg
|
0599596872a1f970e0b86639b9d5558e1be5c40c4ac1124c08a8186711625612
|
2026-01-16T00:00:00-05:00
|
Single-Stage Huffman Encoder for ML Compression
|
arXiv:2601.10673v1 Announce Type: new Abstract: Training and serving Large Language Models (LLMs) require partitioning data across multiple accelerators, where collective operations are frequently bottlenecked by network bandwidth. Lossless compression using Huffman codes is an effective way to alleviate the issue, however, its three-stage design requiring on-the-fly frequency analysis, codebook generation and transmission of codebook along with data introduces computational, latency and data overheads which are prohibitive for latency-sensitive scenarios such as die-to-die communication. This paper proposes a single-stage Huffman encoder that eliminates these overheads by using fixed codebooks derived from the average probability distribution of previous data batches. Through our analysis of the Gemma 2B model, we demonstrate that tensors exhibit high statistical similarity across layers and shards. Using this approach we achieve compression within 0.5% of per-shard Huffman coding and within 1% of the ideal Shannon compressibility, enabling efficient on-the-fly compression.
|
https://arxiv.org/abs/2601.10673
|
Academic Papers
|
svg
|
ad995ada375ab5429c8dff70fa8d5ea6ebde0f1777d031f948a4a651f370cb49
|
2026-01-16T00:00:00-05:00
|
Breaking the Storage-Bandwidth Tradeoff in Distributed Storage with Quantum Entanglement
|
arXiv:2601.10676v1 Announce Type: new Abstract: This work investigates the use of quantum resources in distributed storage systems. Consider an $(n,k,d)$ distributed storage system in which a file is stored across $n$ nodes such that any $k$ nodes suffice to reconstruct the file. When a node fails, any $d$ helper nodes transmit information to a newcomer to rebuild the system. In contrast to the classical repair, where helper nodes transmit classical bits, we allow them to send classical information over quantum channels to the newcomer. The newcomer then generates its storage by performing appropriate measurements on the received quantum states. In this setting, we fully characterize the fundamental tradeoff between storage and repair bandwidth (total communication cost). Compared to classical systems, the optimal storage--bandwidth tradeoff can be significantly improved with the enhancement of quantum entanglement shared only among the surviving nodes, particularly at the minimum-storage regenerating point. Remarkably, we show that when $d \geq 2k-2$, there exists an operating point at which \textit{both storage and repair bandwidth are simultaneously minimized}. This phenomenon breaks the tradeoff in the classical setting and reveals a fundamentally new regime enabled by quantum communication.
|
https://arxiv.org/abs/2601.10676
|
Academic Papers
|
svg
|
8b752f5d1de4d0fdc282e392d02a358e480389108e16c3b65427b6791ddbb9e0
|
2026-01-16T00:00:00-05:00
|
Synchronizing Probabilities in Model-Driven Lossless Compression
|
arXiv:2601.10678v1 Announce Type: new Abstract: It is well-known in the field of lossless data compression that probabilistic next-symbol prediction can be used to compress sequences of symbols. Deep neural networks are able to capture rich dependencies in data, offering a powerful means of estimating these probabilities and hence an avenue towards more effective compression algorithms. However, both compressor and decompressor must have exactly matching predictions; even small non-deterministic differences (which often happen with learned models due to hardware, software, or computation order) can lead to cascading decoding failures. In this paper, we formalize the problem of prediction mismatch in model-driven compression, and introduce Probability Matching Interval Coding (PMATIC), a model-agnostic algorithm that tolerates bounded prediction mismatch with low overhead. PMATIC works with the predicted probabilities, making it compatible as a drop-in replacement for the arithmetic encoder in model-driven compression tools. We show theoretical correctness and performance bounds for PMATIC, and validate these results on text data. These results confirm that, when paired an advanced prediction model, PMATIC is robust to prediction mismatch while achieving compression rates that out-perform standard modern compression tools.
|
https://arxiv.org/abs/2601.10678
|
Academic Papers
|
svg
|
446478b8fbb8e0993a1117aa0fd3ed014afe1f211cbdb8c059dbf2218e69aa24
|
2026-01-16T00:00:00-05:00
|
Are Your Reasoning Models Reasoning or Guessing? A Mechanistic Analysis of Hierarchical Reasoning Models
|
arXiv:2601.10679v1 Announce Type: new Abstract: Hierarchical reasoning model (HRM) achieves extraordinary performance on various reasoning tasks, significantly outperforming large language model-based reasoners. To understand the strengths and potential failure modes of HRM, we conduct a mechanistic study on its reasoning patterns and find three surprising facts: (a) Failure of extremely simple puzzles, e.g., HRM can fail on a puzzle with only one unknown cell. We attribute this failure to the violation of the fixed point property, a fundamental assumption of HRM. (b) "Grokking" dynamics in reasoning steps, i.e., the answer is not improved uniformly, but instead there is a critical reasoning step that suddenly makes the answer correct; (c) Existence of multiple fixed points. HRM "guesses" the first fixed point, which could be incorrect, and gets trapped there for a while or forever. All facts imply that HRM appears to be "guessing" instead of "reasoning". Leveraging this "guessing" picture, we propose three strategies to scale HRM's guesses: data augmentation (scaling the quality of guesses), input perturbation (scaling the number of guesses by leveraging inference randomness), and model bootstrapping (scaling the number of guesses by leveraging training randomness). On the practical side, by combining all methods, we develop Augmented HRM, boosting accuracy on Sudoku-Extreme from 54.5% to 96.9%. On the scientific side, our analysis provides new insights into how reasoning models "reason".
|
https://arxiv.org/abs/2601.10679
|
Academic Papers
|
svg
|
156b27ce465d3b4240ae52b6ce2a7a2801e12da9230554fdec68d0eeef117f61
|
2026-01-16T00:00:00-05:00
|
Structure and Diversity Aware Context Bubble Construction for Enterprise Retrieval Augmented Systems
|
arXiv:2601.10681v1 Announce Type: new Abstract: Large language model (LLM) contexts are typically constructed using retrieval-augmented generation (RAG), which involves ranking and selecting the top-k passages. The approach causes fragmentation in information graphs in document structures, over-retrieval, and duplication of content alongside insufficient query context, including 2nd and 3rd order facets. In this paper, a structure-informed and diversity-constrained context bubble construction framework is proposed that assembles coherent, citable bundles of spans under a strict token budget. The method preserves and exploits inherent document structure by organising multi-granular spans (e.g., sections and rows) and using task-conditioned structural priors to guide retrieval. Starting from high-relevance anchor spans, a context bubble is constructed through constrained selection that balances query relevance, marginal coverage, and redundancy penalties. It will explicitly constrain diversity and budget, producing compact and informative context sets, unlike top-k retrieval. Moreover, a full retrieval is emitted that traces the scoring and selection choices of the records, thus providing auditability and deterministic tuning. Experiments on enterprise documents demonstrate the efficiency of context bubble as it significantly reduces redundant context, is better able to cover secondary facets and has a better answer quality and citation faithfulness within a limited context window. Ablation studies demonstrate that both structural priors as well as diversity constraint selection are necessary; removing either component results in a decline in coverage and an increase in redundant or incomplete context.
|
https://arxiv.org/abs/2601.10681
|
Academic Papers
|
svg
|
18a846ae00f75b8285a0efc55cb49677f12d98b33d9fdb4ff2089e4e5419d788
|
2026-01-16T00:00:00-05:00
|
Implementation of Oblivious Transfer over Binary-Input AWGN Channels by Polar Codes
|
arXiv:2601.10682v1 Announce Type: new Abstract: We develop a one-out-of-two-oblivious transfer protocol over the binary-input additive white Gaussian noise channel using polar codes. The scheme uses two decoder views linked by automorphisms of the polar transform and publicly draws the encoder at random from the corresponding automorphism group. This yields perfect receiver privacy at any finite blocklength, since the public encoder distribution is independent of the receiver's choice bit. Sender privacy is obtained asymptotically via channel polarization combined with privacy amplification. Because the construction deliberately injects randomness on selected bad bit-channels, we derive a relaxed reliability criterion and evaluate finite-blocklength performance. Finally, we characterize the polar-transform automorphisms as bit-level permutations of bit-channel indices, and exploit this structure to derive and optimize an achievable finite-blocklength OT rate.
|
https://arxiv.org/abs/2601.10682
|
Academic Papers
|
svg
|
e34f3e284f2c6eefac5ede965101ffd60a24f41ce8523ba40126f1afada7670f
|
2026-01-16T00:00:00-05:00
|
On the origin of neural scaling laws: from random graphs to natural language
|
arXiv:2601.10684v1 Announce Type: new Abstract: Scaling laws have played a major role in the modern AI revolution, providing practitioners predictive power over how the model performance will improve with increasing data, compute, and number of model parameters. This has spurred an intense interest in the origin of neural scaling laws, with a common suggestion being that they arise from power law structure already present in the data. In this paper we study scaling laws for transformers trained to predict random walks (bigrams) on graphs with tunable complexity. We demonstrate that this simplified setting already gives rise to neural scaling laws even in the absence of power law structure in the data correlations. We further consider dialing down the complexity of natural language systematically, by training on sequences sampled from increasingly simplified generative language models, from 4,2,1-layer transformer language models down to language bigrams, revealing a monotonic evolution of the scaling exponents. Our results also include scaling laws obtained from training on random walks on random graphs drawn from Erd\"os-Renyi and scale-free Barab\'asi-Albert ensembles. Finally, we revisit conventional scaling laws for language modeling, demonstrating that several essential results can be reproduced using 2 layer transformers with context length of 50, provide a critical analysis of various fits used in prior literature, demonstrate an alternative method for obtaining compute optimal curves as compared with current practice in published literature, and provide preliminary evidence that maximal update parameterization may be more parameter efficient than standard parameterization.
|
https://arxiv.org/abs/2601.10684
|
Academic Papers
|
svg
|
420ee97fbc9ea65a386b456e14c96623ae1d97bced759c9ad2962e9b9855df2a
|
2026-01-16T00:00:00-05:00
|
Improved Constructions of Reed-Solomon Codes with Optimal Repair Bandwidth
|
arXiv:2601.10685v1 Announce Type: new Abstract: Maximum-distance-separable (MDS) codes are widely used in distributed storage, yet naive repair of a single erasure in an $[n,k]$ MDS code downloads the entire contents of $k$ nodes. Minimum Storage Regenerating (MSR) codes (Dimakis et al., 2010) minimize repair bandwidth by contacting $d>k$ helpers and downloading only a fraction of data from each. Guruswami and Wootters first proposed a linear repair scheme for Reed-Solomon (RS) codes, showing that they can be repaired with lower bandwidth than the naive approach. The existence of RS codes achieving the MSR point (RS-MSR codes) nevertheless remained open until the breakthrough construction of Tamo, Barg, and Ye, which yields RS-MSR codes with subpacketization $\ell = s \prod_{i=1}^n p_i$, where $p_i$ are distinct primes satisfying $p_i \equiv 1 \pmod{s}$ and $s=d+1-k$. In this paper, we present an improved construction of RS-MSR codes by eliminating the congruence condition $p_i \equiv 1 \pmod{s}$. Consequently, our construction reduces the subpacketization by a multiplicative factor of $\phi(s)^n$ ( $\phi(\cdot)$ is Euler's totient function) and broadens the range of feasible parameters for RS-MSR codes.
|
https://arxiv.org/abs/2601.10685
|
Academic Papers
|
svg
|
90b06a13a146a6297bfcdc9dc186967f0771a8503b5f5e840d3902495c17b127
|
2026-01-16T00:00:00-05:00
|
A continental-scale dataset of ground beetles with high-resolution images and validated morphological trait measurements
|
arXiv:2601.10687v1 Announce Type: new Abstract: Despite the ecological significance of invertebrates, global trait databases remain heavily biased toward vertebrates and plants, limiting comprehensive ecological analyses of high-diversity groups like ground beetles. Ground beetles (Coleoptera: Carabidae) serve as critical bioindicators of ecosystem health, providing valuable insights into biodiversity shifts driven by environmental changes. While the National Ecological Observatory Network (NEON) maintains an extensive collection of carabid specimens from across the United States, these primarily exist as physical collections, restricting widespread research access and large-scale analysis. To address these gaps, we present a multimodal dataset digitizing over 13,200 NEON carabids from 30 sites spanning the continental US and Hawaii through high-resolution imaging, enabling broader access and computational analysis. The dataset includes digitally measured elytra length and width of each specimen, establishing a foundation for automated trait extraction using AI. Validated against manual measurements, our digital trait extraction achieves sub-millimeter precision, ensuring reliability for ecological and computational studies. By addressing invertebrate under-representation in trait databases, this work supports AI-driven tools for automated species identification and trait-based research, fostering advancements in biodiversity monitoring and conservation.
|
https://arxiv.org/abs/2601.10687
|
Academic Papers
|
svg
|
3727d26b2633938aaf617249510dc950cfab25e0c142537745eb8b8cb8f760c1
|
2026-01-16T00:00:00-05:00
|
An Extension-Based Accessibility Framework for Making Blockly Accessible to Blind and Low-Vision Users
|
arXiv:2601.10688v1 Announce Type: new Abstract: Block-based programming environments (BBPEs) such as Scratch and Code.org are now widely used in K-12 computer science classes, but they remain mostly inaccessible to blind or visually impaired (BVI) learners. A major problem is that prior accessibility solutions have relied on modifications to the Blockly library, making them difficult to apply in existing BBPEs and thereby limiting adoption. We present an Extension-based Accessibility Framework (EAF) to make BBPEs accessible for BVI students. The framework uses a modular architecture that enables seamless integration with existing Blockly-based BBPEs. We present an innovative three-dimensional (3D) hierarchical navigation model featuring stack labeling and block numbering, mode-based editing to prevent accidental modifications, and WAI-ARIA implementation to ensure compatibility with external screen readers. We evaluated our approach by integrating the EAF framework into two BBPEs (covering 177 test cases) and conducting semi-structured interviews with four participants using VoiceOver, JAWS, and NVDA. Participants reported clearer spatial orientation and easier mental model formation compared to default Blockly keyboard navigation. EAF shows that modular architecture can provide comprehensive accessibility while ensuring compatibility with existing BBPEs.
|
https://arxiv.org/abs/2601.10688
|
Academic Papers
|
svg
|
175d4f060abb0c9b57c69ee6048364fbd5ef290e084140dc7995b32f7ef9ccf7
|
2026-01-16T00:00:00-05:00
|
Data-driven stochastic reduced-order modeling of parametrized dynamical systems
|
arXiv:2601.10690v1 Announce Type: new Abstract: Modeling complex dynamical systems under varying conditions is computationally intensive, often rendering high-fidelity simulations intractable. Although reduced-order models (ROMs) offer a promising solution, current methods often struggle with stochastic dynamics and fail to quantify prediction uncertainty, limiting their utility in robust decision-making contexts. To address these challenges, we introduce a data-driven framework for learning continuous-time stochastic ROMs that generalize across parameter spaces and forcing conditions. Our approach, based on amortized stochastic variational inference, leverages a reparametrization trick for Markov Gaussian processes to eliminate the need for computationally expensive forward solvers during training. This enables us to jointly learn a probabilistic autoencoder and stochastic differential equations governing the latent dynamics, at a computational cost that is independent of the dataset size and system stiffness. Additionally, our approach offers the flexibility of incorporating physics-informed priors if available. Numerical studies are presented for three challenging test problems, where we demonstrate excellent generalization to unseen parameter combinations and forcings, and significant efficiency gains compared to existing approaches.
|
https://arxiv.org/abs/2601.10690
|
Academic Papers
|
svg
|
370fc6892cf3797c0b8b6c331134fa9521e28adf1d4ec750280e92984450f670
|
2026-01-16T00:00:00-05:00
|
The Conversational Exam: A Scalable Assessment Design for the AI Era
|
arXiv:2601.10691v1 Announce Type: new Abstract: Traditional assessment methods collapse when students use generative AI to complete work without genuine engagement, creating an illusion of competence where they believe they're learning but aren't. This paper presents the conversational exam -- a scalable oral examination format that restores assessment validity by having students code live while explaining their reasoning. Drawing on human-computer interaction principles, we examined 58 students in small groups across just two days, demonstrating that oral exams can scale to typical class sizes. The format combines authentic practice (students work with documentation and supervised AI access) with inherent validity (real-time performance cannot be faked). We provide detailed implementation guidance to help instructors adapt this approach, offering a practical path forward when many educators feel paralyzed between banning AI entirely or accepting that valid assessment is impossible.
|
https://arxiv.org/abs/2601.10691
|
Academic Papers
|
svg
|
68e5d179425279f6ec69220966572627fc990f347322cb6cfc3e6f5f4d28d2c8
|
2026-01-16T00:00:00-05:00
|
The Impact of Generative AI on Architectural Conceptual Design: Performance, Creative Self-Efficacy and Cognitive Load
|
arXiv:2601.10696v1 Announce Type: new Abstract: Our study examines how generative AI (GenAI) influences performance, creative self-efficacy, and cognitive load in architectural conceptual design tasks. Thirty-six student participants from Architectural Engineering and other disciplines completed a two-phase architectural design task, first independently and then with external tools (GenAI-assisted condition and control condition using an online repository of existing architectural projects). Design outcomes were evaluated by expert raters, while self-efficacy and cognitive load were self-reported after each phase. Difference-in-differences analyses revealed no overall performance advantage of GenAI across participants; however, subgroup analyses showed that GenAI significantly improved design performance for novice designers. In contrast, general creative self-efficacy declined for students using GenAI. Cognitive load did not differ significantly between conditions, though prompt usage patterns showed that iterative idea generation and visual feedback prompts were linked to greater reductions in cognitive load. These findings suggest that GenAI effectiveness depends on users' prior expertise and interaction strategies through prompting.
|
https://arxiv.org/abs/2601.10696
|
Academic Papers
|
svg
|
6037468ec5322425e90643242230dac280a0a8e71e3402d21c06482fc8f35322
|
2026-01-16T00:00:00-05:00
|
Perfect Secret Key Generation for a class of Hypergraphical Sources
|
arXiv:2601.10697v1 Announce Type: new Abstract: Nitinawarat and Narayan proposed a perfect secret key generation scheme for the so-called \emph{pairwise independent network (PIN) model} by exploiting the combinatorial properties of the underlying graph, namely the spanning tree packing rate. This work considers a generalization of the PIN model where the underlying graph is replaced with a hypergraph, and makes progress towards designing similar perfect secret key generation schemes by exploiting the combinatorial properties of the hypergraph. Our contributions are two-fold. We first provide a capacity achieving scheme for a complete $t$-uniform hypergraph on $m$ vertices by leveraging a packing of the complete $t$-uniform hypergraphs by what we refer to as star hypergraphs, and designing a scheme that gives $\binom{m-2}{t-2}$ bits of perfect secret key per star graph. Our second contribution is a 2-bit perfect secret key generation scheme for 3-uniform star hypergraphs whose projections are cycles. This scheme is then extended to a perfect secret key generation scheme for generic 3-uniform hypergraphs by exploiting star graph packing of 3-uniform hypergraphs and Hamiltonian packings of graphs. The scheme is then shown to be capacity achieving for certain classes of hypergraphs.
|
https://arxiv.org/abs/2601.10697
|
Academic Papers
|
svg
|
4a289e83b4d81b3c1a72940cf62cc9f63f1ba6d665d2a80237c82e68cf547b9c
|
2026-01-16T00:00:00-05:00
|
LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals
|
arXiv:2601.10700v1 Announce Type: new Abstract: Concept-based explanations quantify how high-level concepts (e.g., gender or experience) influence model behavior, which is crucial for decision-makers in high-stakes domains. Recent work evaluates the faithfulness of such explanations by comparing them to reference causal effects estimated from counterfactuals. In practice, existing benchmarks rely on costly human-written counterfactuals that serve as an imperfect proxy. To address this, we introduce a framework for constructing datasets containing structural counterfactual pairs: LIBERTy (LLM-based Interventional Benchmark for Explainability with Reference Targets). LIBERTy is grounded in explicitly defined Structured Causal Models (SCMs) of the text generation, interventions on a concept propagate through the SCM until an LLM generates the counterfactual. We introduce three datasets (disease detection, CV screening, and workplace violence prediction) together with a new evaluation metric, order-faithfulness. Using them, we evaluate a wide range of methods across five models and identify substantial headroom for improving concept-based explanations. LIBERTy also enables systematic analysis of model sensitivity to interventions: we find that proprietary LLMs show markedly reduced sensitivity to demographic concepts, likely due to post-training mitigation. Overall, LIBERTy provides a much-needed benchmark for developing faithful explainability methods.
|
https://arxiv.org/abs/2601.10700
|
Academic Papers
|
svg
|
32009387efdcf28925a0f11013ae9f5bcd160074b04d8724a2b990dcb0617674
|
2026-01-16T00:00:00-05:00
|
Communication-Efficient and Privacy-Adaptable Mechanism -- a Federated Learning Scheme with Convergence Analysis
|
arXiv:2601.10701v1 Announce Type: new Abstract: Federated learning enables multiple parties to jointly train learning models without sharing their own underlying data, offering a practical pathway to privacy-preserving collaboration under data-governance constraints. Continued study of federated learning is essential to address key challenges in it, including communication efficiency and privacy protection between parties. A recent line of work introduced a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves both objectives simultaneously. CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a randomized vector quantizer whose quantization error is equivalent to a prescribed noise, which can be tuned to customize privacy protection between parties. In this work, we theoretically analyze the privacy guarantees and convergence properties of CEPAM. Moreover, we assess CEPAM's utility performance through experimental evaluations, including convergence profiles compared with other baselines, and accuracy-privacy trade-offs between different parties.
|
https://arxiv.org/abs/2601.10701
|
Academic Papers
|
svg
|
56442b2ce1fcd353bd601d033c99d7569175c272ef90b5e996f327447390922e
|
2026-01-16T00:00:00-05:00
|
Grounding Agent Memory in Contextual Intent
|
arXiv:2601.10702v1 Announce Type: new Abstract: Deploying large language models in long-horizon, goal-oriented interactions remains challenging because similar entities and facts recur under different latent goals and constraints, causing memory systems to retrieve context-mismatched evidence. We propose STITCH (Structured Intent Tracking in Contextual History), an agentic memory system that indexes each trajectory step with a structured retrieval cue, contextual intent, and retrieves history by matching the current step's intent. Contextual intent provides compact signals that disambiguate repeated mentions and reduce interference: (1) the current latent goal defining a thematic segment, (2) the action type, and (3) the salient entity types anchoring which attributes matter. During inference, STITCH filters and prioritizes memory snippets by intent compatibility, suppressing semantically similar but context-incompatible history. For evaluation, we introduce CAME-Bench, a benchmark for context-aware retrieval in realistic, dynamic, goal-oriented trajectories. Across CAME-Bench and LongMemEval, STITCH achieves state-of-the-art performance, outperforming the strongest baseline by 35.6%, with the largest gains as trajectory length increases. Our analysis shows that intent indexing substantially reduces retrieval noise, supporting intent-aware memory for robust long-horizon reasoning.
|
https://arxiv.org/abs/2601.10702
|
Academic Papers
|
svg
|
51940bd8492dcd49f83b5c901e20ae878bc952930e80e3b18ff25745acab6313
|
2026-01-16T00:00:00-05:00
|
Distributed Perceptron under Bounded Staleness, Partial Participation, and Noisy Communication
|
arXiv:2601.10705v1 Announce Type: new Abstract: We study a semi-asynchronous client-server perceptron trained via iterative parameter mixing (IPM-style averaging): clients run local perceptron updates and a server forms a global model by aggregating the updates that arrive in each communication round. The setting captures three system effects in federated and distributed deployments: (i) stale updates due to delayed model delivery and delayed application of client computations (two-sided version lag), (ii) partial participation (intermittent client availability), and (iii) imperfect communication on both downlink and uplink, modeled as effective zero-mean additive noise with bounded second moment. We introduce a server-side aggregation rule called staleness-bucket aggregation with padding that deterministically enforces a prescribed staleness profile over update ages without assuming any stochastic model for delays or participation. Under margin separability and bounded data radius, we prove a finite-horizon expected bound on the cumulative weighted number of perceptron mistakes over a given number of server rounds: the impact of delay appears only through the mean enforced staleness, whereas communication noise contributes an additional term that grows on the order of the square root of the horizon with the total noise energy. In the noiseless case, we show how a finite expected mistake budget yields an explicit finite-round stabilization bound under a mild fresh-participation condition.
|
https://arxiv.org/abs/2601.10705
|
Academic Papers
|
svg
|
fca59275cbae34a4de0e5aa714439e01ccafe39a8a8cf01258247f4e4affa1b6
|
2026-01-16T00:00:00-05:00
|
UFO Trees: Practical and Provably-Efficient Parallel Batch-Dynamic Trees
|
arXiv:2601.10706v1 Announce Type: new Abstract: The dynamic trees problem is to maintain a tree under edge updates while supporting queries like connectivity queries or path queries. Despite the first data structure for this fundamental problem -- the link-cut tree -- being invented 40 years ago, our experiments reveal that they are still the fastest sequential data structure for the problem. However, link-cut trees cannot support parallel batch-dynamic updates and have limitations on the kinds of queries they support. In this paper, we design a new parallel batch-dynamic trees data structure called UFO trees that simultaneously supports a wide range of query functionality, supports work-efficient parallel batch-dynamic updates, and is competitive with link-cut trees when run sequentially. We prove that a key reason for the strong practical performance of both link-cut trees and UFO trees is that they can perform updates and queries in sub-logarithmic time for low-diameter trees. We perform an experimental study of our optimized C++ implementations of UFO trees with ten other dynamic tree implementations, several of which are new, in a broad benchmark of both synthetic and real-world trees of varying diameter and size. Our results show that, in both sequential and parallel settings, UFO trees are the fastest dynamic tree data structure that supports a wide range of queries. Our new implementation of UFO trees has low space usage and easily scales to billion-size inputs, making it a promising building block for implementing more complex dynamic graph algorithms in practice.
|
https://arxiv.org/abs/2601.10706
|
Academic Papers
|
svg
|
f6b4cfafeecb1045f5abba28bdfefedc9522e90d67cad77503c8b599cf0519fa
|
2026-01-16T00:00:00-05:00
|
See Less, Drive Better: Generalizable End-to-End Autonomous Driving via Foundation Models Stochastic Patch Selection
|
arXiv:2601.10707v1 Announce Type: new Abstract: Recent advances in end-to-end autonomous driving show that policies trained on patch-aligned features extracted from foundation models generalize better to Out-of-Distribution (OOD). We hypothesize that due to the self-attention mechanism, each patch feature implicitly embeds/contains information from all other patches, represented in a different way and intensity, making these descriptors highly redundant. We quantify redundancy in such (BLIP2) features via PCA and cross-patch similarity: $90$% of variance is captured by $17/64$ principal components, and strong inter-token correlations are pervasive. Training on such overlapping information leads the policy to overfit spurious correlations, hurting OOD robustness. We present Stochastic-Patch-Selection (SPS), a simple yet effective approach for learning policies that are more robust, generalizable, and efficient. For every frame, SPS randomly masks a fraction of patch descriptors, not feeding them to the policy model, while preserving the spatial layout of the remaining patches. Thus, the policy is provided with different stochastic but complete views of the (same) scene: every random subset of patches acts like a different, yet still sensible, coherent projection of the world. The policy thus bases its decisions on features that are invariant to which specific tokens survive. Extensive experiments confirm that across all OOD scenarios, our method outperforms the state of the art (SOTA), achieving a $6.2$% average improvement and up to $20.4$% in closed-loop simulations, while being $2.4\times$ faster. We conduct ablations over masking rates and patch-feature reorganization, training and evaluating 9 systems, with 8 of them surpassing prior SOTA. Finally, we show that the same learned policy transfers to a physical, real-world car without any tuning.
|
https://arxiv.org/abs/2601.10707
|
Academic Papers
|
svg
|
1d664fda0a36403f122302e477d88cd056295e3061261a09cf310610eeed3ccb
|
2026-01-16T00:00:00-05:00
|
High-accuracy and dimension-free sampling with diffusions
|
arXiv:2601.10708v1 Announce Type: new Abstract: Diffusion models have shown remarkable empirical success in sampling from rich multi-modal distributions. Their inference relies on numerically solving a certain differential equation. This differential equation cannot be solved in closed form, and its resolution via discretization typically requires many small iterations to produce \emph{high-quality} samples. More precisely, prior works have shown that the iteration complexity of discretization methods for diffusion models scales polynomially in the ambient dimension and the inverse accuracy $1/\varepsilon$. In this work, we propose a new solver for diffusion models relying on a subtle interplay between low-degree approximation and the collocation method (Lee, Song, Vempala 2018), and we prove that its iteration complexity scales \emph{polylogarithmically} in $1/\varepsilon$, yielding the first ``high-accuracy'' guarantee for a diffusion-based sampler that only uses (approximate) access to the scores of the data distribution. In addition, our bound does not depend explicitly on the ambient dimension; more precisely, the dimension affects the complexity of our solver through the \emph{effective radius} of the support of the target distribution only.
|
https://arxiv.org/abs/2601.10708
|
Academic Papers
|
svg
|
64647cd24110ed0e01c9a7a52c6e124042f97b600354d79868ffe237a7a2a89a
|
2026-01-16T00:00:00-05:00
|
From One-to-One to Many-to-Many: Dynamic Cross-Layer Injection for Deep Vision-Language Fusion
|
arXiv:2601.10710v1 Announce Type: new Abstract: Vision-Language Models (VLMs) create a severe visual feature bottleneck by using a crude, asymmetric connection that links only the output of the vision encoder to the input of the large language model (LLM). This static architecture fundamentally limits the ability of LLMs to achieve comprehensive alignment with hierarchical visual knowledge, compromising their capacity to accurately integrate local details with global semantics into coherent reasoning. To resolve this, we introduce Cross-Layer Injection (CLI), a novel and lightweight framework that forges a dynamic many-to-many bridge between the two modalities. CLI consists of two synergistic, parameter-efficient components: an Adaptive Multi-Projection (AMP) module that harmonizes features from diverse vision layers, and an Adaptive Gating Fusion (AGF) mechanism that empowers the LLM to selectively inject the most relevant visual information based on its real-time decoding context. We validate the effectiveness and versatility of CLI by integrating it into LLaVA-OneVision and LLaVA-1.5. Extensive experiments on 18 diverse benchmarks demonstrate significant performance improvements, establishing CLI as a scalable paradigm that unlocks deeper multimodal understanding by granting LLMs on-demand access to the full visual hierarchy.
|
https://arxiv.org/abs/2601.10710
|
Academic Papers
|
svg
|
8f94e88fa6d636b69f9f973fb6742cd7250251ca79384cc29b9f670e698cf427
|
2026-01-16T00:00:00-05:00
|
MatchTIR: Fine-Grained Supervision for Tool-Integrated Reasoning via Bipartite Matching
|
arXiv:2601.10712v1 Announce Type: new Abstract: Tool-Integrated Reasoning (TIR) empowers large language models (LLMs) to tackle complex tasks by interleaving reasoning steps with external tool interactions. However, existing reinforcement learning methods typically rely on outcome- or trajectory-level rewards, assigning uniform advantages to all steps within a trajectory. This coarse-grained credit assignment fails to distinguish effective tool calls from redundant or erroneous ones, particularly in long-horizon multi-turn scenarios. To address this, we propose MatchTIR, a framework that introduces fine-grained supervision via bipartite matching-based turn-level reward assignment and dual-level advantage estimation. Specifically, we formulate credit assignment as a bipartite matching problem between predicted and ground-truth traces, utilizing two assignment strategies to derive dense turn-level rewards. Furthermore, to balance local step precision with global task success, we introduce a dual-level advantage estimation scheme that integrates turn-level and trajectory-level signals, assigning distinct advantage values to individual interaction turns. Extensive experiments on three benchmarks demonstrate the superiority of MatchTIR. Notably, our 4B model surpasses the majority of 8B competitors, particularly in long-horizon and multi-turn tasks. Our codes are available at https://github.com/quchangle1/MatchTIR.
|
https://arxiv.org/abs/2601.10712
|
Academic Papers
|
svg
|
b8d9a53db63637c53c5e69188c47ace5e0f00e9790ab4698cc5010721af78b1f
|
2026-01-16T00:00:00-05:00
|
Alterbute: Editing Intrinsic Attributes of Objects in Images
|
arXiv:2601.10714v1 Announce Type: new Abstract: We introduce Alterbute, a diffusion-based method for editing an object's intrinsic attributes in an image. We allow changing color, texture, material, and even the shape of an object, while preserving its perceived identity and scene context. Existing approaches either rely on unsupervised priors that often fail to preserve identity or use overly restrictive supervision that prevents meaningful intrinsic variations. Our method relies on: (i) a relaxed training objective that allows the model to change both intrinsic and extrinsic attributes conditioned on an identity reference image, a textual prompt describing the target intrinsic attributes, and a background image and object mask defining the extrinsic context. At inference, we restrict extrinsic changes by reusing the original background and object mask, thereby ensuring that only the desired intrinsic attributes are altered; (ii) Visual Named Entities (VNEs) - fine-grained visual identity categories (e.g., ''Porsche 911 Carrera'') that group objects sharing identity-defining features while allowing variation in intrinsic attributes. We use a vision-language model to automatically extract VNE labels and intrinsic attribute descriptions from a large public image dataset, enabling scalable, identity-preserving supervision. Alterbute outperforms existing methods on identity-preserving object intrinsic attribute editing.
|
https://arxiv.org/abs/2601.10714
|
Academic Papers
|
svg
|
e0c9c1ddcf5349e47026692cc6e508081bc73e7b41b5c33807ca76822c538e0d
|
2026-01-16T00:00:00-05:00
|
DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids
|
arXiv:2601.10715v1 Announce Type: new Abstract: We present a novel differentiable grid-based representation for efficiently solving differential equations (DEs). Widely used architectures for neural solvers, such as sinusoidal neural networks, are coordinate-based MLPs that are both computationally intensive and slow to train. Although grid-based alternatives for implicit representations (e.g., Instant-NGP and K-Planes) train faster by exploiting signal structure, their reliance on linear interpolation restricts their ability to compute higher-order derivatives, rendering them unsuitable for solving DEs. Our approach overcomes these limitations by combining the efficiency of feature grids with radial basis function interpolation, which is infinitely differentiable. To effectively capture high-frequency solutions and enable stable and faster computation of global gradients, we introduce a multi-resolution decomposition with co-located grids. Our proposed representation, DInf-Grid, is trained implicitly using the differential equations as loss functions, enabling accurate modelling of physical fields. We validate DInf-Grid on a variety of tasks, including the Poisson equation for image reconstruction, the Helmholtz equation for wave fields, and the Kirchhoff-Love boundary value problem for cloth simulation. Our results demonstrate a 5-20x speed-up over coordinate-based MLP-based methods, solving differential equations in seconds or minutes while maintaining comparable accuracy and compactness.
|
https://arxiv.org/abs/2601.10715
|
Academic Papers
|
svg
|
4b55752a7ab8d6db730c3c6fa6f7f64ce993adb7d8626c727640a774e8a0bdaa
|
2026-01-16T00:00:00-05:00
|
WildRayZer: Self-supervised Large View Synthesis in Dynamic Environments
|
arXiv:2601.10716v1 Announce Type: new Abstract: We present WildRayZer, a self-supervised framework for novel view synthesis (NVS) in dynamic environments where both the camera and objects move. Dynamic content breaks the multi-view consistency that static NVS models rely on, leading to ghosting, hallucinated geometry, and unstable pose estimation. WildRayZer addresses this by performing an analysis-by-synthesis test: a camera-only static renderer explains rigid structure, and its residuals reveal transient regions. From these residuals, we construct pseudo motion masks, distill a motion estimator, and use it to mask input tokens and gate loss gradients so supervision focuses on cross-view background completion. To enable large-scale training and evaluation, we curate Dynamic RealEstate10K (D-RE10K), a real-world dataset of 15K casually captured dynamic sequences, and D-RE10K-iPhone, a paired transient and clean benchmark for sparse-view transient-aware NVS. Experiments show that WildRayZer consistently outperforms optimization-based and feed-forward baselines in both transient-region removal and full-frame NVS quality with a single feed-forward pass.
|
https://arxiv.org/abs/2601.10716
|
Academic Papers
|
svg
|
b367a0cb589f81dc01128d14c0611973df3db8757360c2fed95a9429ec016297
|
2026-01-16T00:00:00-05:00
|
Multi-Level Embedding Conformer Framework for Bengali Automatic Speech Recognition
|
arXiv:2601.09710v1 Announce Type: cross Abstract: Bengali, spoken by over 300 million people, is a morphologically rich and lowresource language, posing challenges for automatic speech recognition (ASR). This research presents an end-to-end framework for Bengali ASR, building on a Conformer-CTC backbone with a multi-level embedding fusion mechanism that incorporates phoneme, syllable, and wordpiece representations. By enriching acoustic features with these linguistic embeddings, the model captures fine-grained phonetic cues and higher-level contextual patterns. The architecture employs early and late Conformer stages, with preprocessing steps including silence trimming, resampling, Log-Mel spectrogram extraction, and SpecAugment augmentation. The experimental results demonstrate the strong potential of the model, achieving a word error rate (WER) of 10.01% and a character error rate (CER) of 5.03%. These results demonstrate the effectiveness of combining multi-granular linguistic information with acoustic modeling, providing a scalable approach for low-resource ASR development.
|
https://arxiv.org/abs/2601.09710
|
Academic Papers
|
svg
|
3a7125d5763f96157428ed395d7a1ea182f1c92f14742cc3f44c5c3bf4b6bd6f
|
2026-01-16T00:00:00-05:00
|
From Ecological Connectivity to Outbreak Risk: A Heterogeneous Graph Network for Epidemiological Reasoning under Sparse Spatiotemporal Data
|
arXiv:2601.09738v1 Announce Type: cross Abstract: Estimating population-level prevalence and transmission dynamics of wildlife pathogens can be challenging, partly because surveillance data is sparse, detection-driven, and unevenly sequenced. Using highly pathogenic avian influenza A/H5 clade 2.3.4.4b as a case study, we develop zooNet, a graph-based epidemiological framework that integrates mechanistic transmission simulation, metadata-driven genetic distance imputation, and spatiotemporal graph learning to reconstruct outbreak dynamics from incomplete observations. Applied to wild bird surveillance data from the United States during 2022, zooNet recovered coherent spatiotemporal structure despite intermittent detections, revealing sustained regional circulation across multiple migratory flyways. The framework consistently identified counties with ongoing transmission weeks to months before confirmed detections, including persistent activity in northeastern regions prior to documented re-emergence. These signals were detectable even in areas with sparse sequencing and irregular reporting. These results show that explicitly representing ecological processes and inferred genomic connectivity within a unified graph structure allows persistence and spatial risk structure to be inferred from detection-driven wildlife surveillance data.
|
https://arxiv.org/abs/2601.09738
|
Academic Papers
|
svg
|
d00a10f60c6d2a0aea832dd039daf7d5527c5c0641bd493ca7dd832ac8eb3b80
|
2026-01-16T00:00:00-05:00
|
Limits of Rank Recovery in Bilinear Observation Problems
|
arXiv:2601.09754v1 Announce Type: cross Abstract: Bilinear observation problems arise in many physical and information-theoretic settings, where observables and states enter multiplicatively. Rank-based diagnostics are commonly used in such problems to assess the effective dimensionality accessible to observation, often under the implicit assumption that rank deficiency can be resolved through numerical refinement. Here we examine this assumption by analyzing the rank and nullity of a bilinear observation operator under systematic tolerance variation. Rather than focusing on a specific reconstruction algorithm, we study the operator directly and identify extended rank plateaus that persist across broad tolerance ranges. These plateaus indicate stable dimensional deficits that are not removed by refinement procedures applied within a fixed problem definition. To investigate the origin of this behavior, we resolve the nullspace into algebraic sectors defined by the block structure of the variables. The nullspace exhibits a pronounced but nonexclusive concentration in specific sectors, revealing an organized internal structure rather than uniform dimensional loss. Comparing refinement with explicit modification of the problem formulation further shows that rank recovery in the reported setting requires a change in the structure of the observation problem itself. Here, "problem modification" refers to changes that alter the bilinear observation structure (e.g., admissible operator/state families or coupling constraints), in contrast to refinements that preserve the original formulation such as tolerance adjustment and numerical reparameterizations. Together, these results delineate limits of rank recovery in bilinear observation problems and clarify the distinction between numerical refinement and problem modification in accessing effective dimensional structure.
|
https://arxiv.org/abs/2601.09754
|
Academic Papers
|
svg
|
ad9ae3871a26445413f94a48a424b80fbb1363f7fafd25f76c3ff3ce0bbf0e41
|
2026-01-16T00:00:00-05:00
|
Detecting Batch Heterogeneity via Likelihood Clustering
|
arXiv:2601.09758v1 Announce Type: cross Abstract: Batch effects represent a major confounder in genomic diagnostics. In copy number variant (CNV) detection from NGS, many algorithms compare read depth between test samples and a reference sample, assuming they are process-matched. When this assumption is violated, with causes ranging from reagent lot changes to multi-site processing, the reference becomes inappropriate, introducing false CNV calls or masking true pathogenic variants. Detecting such heterogeneity before downstream analysis is critical for reliable clinical interpretation. Existing batch effect detection methods either cluster samples based on raw features, risking conflation of biological signal with technical variation, or require known batch labels that are frequently unavailable. We introduce a method that addresses both limitations by clustering samples according to their Bayesian model evidence. The central insight is that evidence quantifies compatibility between data and model assumptions, technical artifacts violate assumptions and reduce evidence, whereas biological variation, including CNV status, is anticipated by the model and yields high evidence. This asymmetry provides a discriminative signal that separates batch effects from biology. We formalize heterogeneity detection as a likelihood ratio test for mixture structure in evidence space, using parametric bootstrap calibration to ensure conservative false positive rates. We validate our approach on synthetic data demonstrating proper Type I error control, three clinical targeted sequencing panels (liquid biopsy, BRCA, and thalassemia) exhibiting distinct batch effect mechanisms, and mouse electrophysiology recordings demonstrating cross-modality generalization. Our method achieves superior clustering accuracy compared to standard correlation-based and dimensionality-reduction approaches while maintaining the conservativeness required for clinical usage.
|
https://arxiv.org/abs/2601.09758
|
Academic Papers
|
svg
|
8187c2cd370393419a9a4cf5f067cd155736ddff4d21c69e31d98c48ac66bbef
|
2026-01-16T00:00:00-05:00
|
CLiMB: A Domain-Informed Novelty Detection Clustering Framework for Scientific Discovery
|
arXiv:2601.09768v1 Announce Type: cross Abstract: In data-driven scientific discovery, a challenge lies in classifying well-characterized phenomena while identifying novel anomalies. Current semi-supervised clustering algorithms do not always fully address this duality, often assuming that supervisory signals are globally representative. Consequently, methods often enforce rigid constraints that suppress unanticipated patterns or require a pre-specified number of clusters, rendering them ineffective for genuine novelty detection. To bridge this gap, we introduce CLiMB (CLustering in Multiphase Boundaries), a domain-informed framework decoupling the exploitation of prior knowledge from the exploration of unknown structures. Using a sequential two-phase approach, CLiMB first anchors known clusters using constrained partitioning, and subsequently applies density-based clustering to residual data to reveal arbitrary topologies. We demonstrate this framework on RR Lyrae stars data from the Gaia Data Release 3. CLiMB attains an Adjusted Rand Index of 0.829 with 90% seed coverage in recovering known Milky Way substructures, drastically outperforming heuristic and constraint-based baselines, which stagnate below 0.20. Furthermore, sensitivity analysis confirms CLiMB's superior data efficiency, showing monotonic improvement as knowledge increases. Finally, the framework successfully isolates three dynamical features (Shiva, Shakti, and the Galactic Disk) in the unlabelled field, validating its potential for scientific discovery.
|
https://arxiv.org/abs/2601.09768
|
Academic Papers
|
svg
|
1bd0769cbc13bebf7403bb71e9547ae0149a7ee7f63ba6b7d36de9de77698a01
|
2026-01-16T00:00:00-05:00
|
Zero-Error List Decoding for Classical-Quantum Channels
|
arXiv:2601.09786v1 Announce Type: cross Abstract: The aim of this work is to study the zero-error capacity of pure-state classical-quantum channels in the setting of list decoding. We provide an achievability bound for list-size two and a converse bound holding for every fixed list size. The two bounds coincide for channels whose pairwise absolute state overlaps form a positive semi-definite matrix. Finally, we discuss a remarkable peculiarity of the classical-quantum case: differently from the fully classical setting, the rate at which the sphere-packing bound diverges might not be achievable by zero-error list codes, even when we take the limit of fixed but arbitrarily large list size.
|
https://arxiv.org/abs/2601.09786
|
Academic Papers
|
svg
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.