| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 954c3dd7411e47a5aa232c2976bf5857eefa514d2c8aefecb1c4fde1a0956dda | 2026-01-16T00:00:00-05:00 | On some Exotic Cylindrical Algebraic Decompositions and Cells | arXiv:2601.09795v1 Announce Type: cross Abstract: Cylindrical Algebraic Decompositions (CADs) endowed with additional topological properties have found applications beyond their original logical setting, including algorithmic optimizations in CAD construction, robot motion planning, and the algorithmic study of the topology of semi-algebraic sets. In this paper, we construct explicit examples of CADs and CAD cells that refute several conjectures and open questions of J. H. Davenport, A. Locatelli, and G. K. Sankaran concerning these topological assumptions. | https://arxiv.org/abs/2601.09795 | Academic Papers | svg |
| 613243d0dc7346cd80196dc4d1cf5e388a2183e244cf50a7d49be7119bd4f48e | 2026-01-16T00:00:00-05:00 | Shallow-KAN Based Solution of Moving Boundary PDEs | arXiv:2601.09818v1 Announce Type: cross Abstract: Kolmogorov-Arnold Networks (KANs) require significantly smaller architectures compared to multilayer perceptron (MLP)-based approaches, while retaining expressive power through spline-based activations. We propose a shallow KAN framework that directly approximates the temperature distribution $T(x,t)$ and the moving interface $\Gamma(t)$, enforcing the governing PDEs, phase equilibrium, and Stefan condition through physics-informed residuals. To enhance accuracy, we employ interface-focused collocation resampling. Numerical experiments in one and two dimensions show that the framework achieves accurate reconstructions of both temperature fields and interface dynamics, highlighting the potential of KANs as a compact and efficient alternative for moving boundary PDEs. First, we validate the model with semi-infinite analytical solutions. Subsequently, the model is extended to 2D using a level-set based formulation for interface propagation, which is solved within the KAN framework. This work demonstrates that KANs are capable of solving complex moving boundary problems without the need for measurement data. | https://arxiv.org/abs/2601.09818 | Academic Papers | svg |
| 7adab6df2c3593ab630840fb66744544d9939ae7b70d953771b4ce3795ad7a55 | 2026-01-16T00:00:00-05:00 | Accelerated Regularized Wasserstein Proximal Sampling Algorithms | arXiv:2601.09848v1 Announce Type: cross Abstract: We consider sampling from a Gibbs distribution by evolving a finite number of particles using a particular score estimator rather than Brownian motion. To accelerate the particles, we consider a second-order score-based ODE, similar to Nesterov acceleration. In contrast to traditional kernel density score estimation, we use the recently proposed regularized Wasserstein proximal method, yielding the Accelerated Regularized Wasserstein Proximal method (ARWP). We provide a detailed analysis of continuous- and discrete-time non-asymptotic and asymptotic mixing rates for Gaussian initial and target distributions, using techniques from Euclidean acceleration and accelerated information gradients. Compared with the kinetic Langevin sampling algorithm, the proposed algorithm exhibits a higher contraction rate in the asymptotic time regime. Numerical experiments are conducted across various low-dimensional experiments, including multi-modal Gaussian mixtures and ill-conditioned Rosenbrock distributions. ARWP exhibits structured and convergent particles, accelerated discrete-time mixing, and faster tail exploration than the non-accelerated regularized Wasserstein proximal method and kinetic Langevin methods. Additionally, ARWP particles exhibit better generalization properties for some non-log-concave Bayesian neural network tasks. | https://arxiv.org/abs/2601.09848 | Academic Papers | svg |
| 0995ca14942b9d9c7f6bf805a1234b3b82c10a4962b7f06440888415245df9c8 | 2026-01-16T00:00:00-05:00 | Distortion maps for elliptic curves over finite fields | arXiv:2601.09904v1 Announce Type: cross Abstract: The Weil pairing on elliptic curves has deep links with discrete logarithm problems. In practice, to better suit the functionalities of cryptosystems, one often needs to modify the original Weil pairing via what is called a distortion map. We propose a study on the question of the existence of distortion maps for elliptic curves over finite fields. We revisit results from the literature and provide detailed proofs. We also propose new perspectives at times. | https://arxiv.org/abs/2601.09904 | Academic Papers | svg |
| 49d78f6ba1dca46528659ea268c6f51919be1fbad46852354a39ce3ef0c8f162 | 2026-01-16T00:00:00-05:00 | Learning to Decode in Parallel: Self-Coordinating Neural Network for Real-Time Quantum Error Correction | arXiv:2601.09921v1 Announce Type: cross Abstract: Fast, reliable decoders are pivotal components for enabling fault-tolerant quantum computation (FTQC). Neural network decoders like AlphaQubit have demonstrated potential, achieving higher accuracy than traditional human-designed decoding algorithms. However, existing implementations of neural network decoders lack the parallelism required to decode the syndrome stream generated by a superconducting logical qubit in real time. Moreover, integrating AlphaQubit with sliding window-based parallel decoding schemes presents non-trivial challenges: AlphaQubit is trained solely to output a single bit corresponding to the global logical correction for an entire memory experiment, rather than local physical corrections that can be easily integrated. We address this issue by training a recurrent, transformer-based neural network specifically tailored for parallel window decoding. While it still outputs a single bit, we derive training labels from a consistent set of local corrections and train on various types of decoding windows simultaneously. This approach enables the network to self-coordinate across neighboring windows, facilitating high-accuracy parallel decoding of arbitrarily long memory experiments. As a result, we overcome the throughput bottleneck that previously precluded the use of AlphaQubit-type decoders in FTQC. Our work presents the first scalable, neural-network-based parallel decoding framework that simultaneously achieves SOTA accuracy and the stringent throughput required for real-time quantum error correction. Using an end-to-end experimental workflow, we benchmark our decoder on the Zuchongzhi 3.2 superconducting quantum processor on surface codes with distances up to 7, demonstrating its superior accuracy. Moreover, we demonstrate that, using our approach, a single TPU v6e is capable of decoding surface codes with distances up to 25 within 1 $\mu$s per decoding round. | https://arxiv.org/abs/2601.09921 | Academic Papers | svg |
| e159e97f47e77cf6da42dae0738728278ef5159f0a550814ab369a8895c4e396 | 2026-01-16T00:00:00-05:00 | A Level Set Method on Particle Flow Maps | arXiv:2601.09939v1 Announce Type: cross Abstract: This paper introduces a Particle Flow Map Level Set (PFM-LS) method for high-fidelity interface tracking. We store level-set values, gradients, and Hessians on particles concentrated in a narrow band around the interface, advecting them via bidirectional flow maps while using a conventional grid-based representation elsewhere. By interpreting the level set value as a 3-form and its gradient as a 1-form, PFM-LS achieves exceptional geometric fidelity during complex deformations and preserves sub-grid features that traditional methods cannot capture. Our dual-timescale approach utilizes long-range maps for values and gradients, with frequent reinitialization of short-range maps for the distortion-sensitive Hessian, alongside adaptive particle control that maintains sufficient density within the narrow band. We also develop a hybrid particle-grid quasi-Newton redistancing scheme that preserves fine-scale features while enforcing the signed-distance property. Benchmark comparisons in 2D and 3D demonstrate that PFM-LS achieves state-of-the-art volume preservation and shape fidelity against a broad range of existing level-set methods. | https://arxiv.org/abs/2601.09939 | Academic Papers | svg |
| 6f673ef5b55ae2250d3fce564f70ddf99deec0b69cf0a41aa9961c1b2466bb7c | 2026-01-16T00:00:00-05:00 | Interfacing Superconductor and Semiconductor Digital Electronics | arXiv:2601.09969v1 Announce Type: cross Abstract: Interface circuits are the key components that enable the hybrid integration of superconductor and semiconductor digital electronics. The design requirements of superconductor-semiconductor interface circuits vary depending on the application, such as high-performance classical computing, superconducting quantum computing, and digital signal processing. In this survey, various interface circuits are categorized based on the working principle and structure. The superconducting output drivers are explored, which are capable of converting and amplifying, e.g., single flux quantum (SFQ) voltage pulses, to voltage levels that semiconductor circuits can process. Several trade-offs between circuit- and system-level design parameters are examined. Accordingly, parameters such as the data rate, output voltage, power dissipation, layout area, thermal/heat load of cryogenic cables, and bit-error rate are considered. | https://arxiv.org/abs/2601.09969 | Academic Papers | svg |
| abe5a04bff58978128ec653b7eb63f59814fcd6aaea1734d4844b0f14e634202 | 2026-01-16T00:00:00-05:00 | Performance of AI agents based on reasoning language models on ALD process optimization tasks | arXiv:2601.09980v1 Announce Type: cross Abstract: In this work we explore the performance and behavior of reasoning large language models to autonomously optimize atomic layer deposition (ALD) processes. In the ALD process optimization task, an agent built on top of a reasoning LLM has to find optimal dose times for an ALD precursor and a coreactant without any prior knowledge of the process, including whether it is actually self-limited. The agent is meant to interact iteratively with an ALD reactor in a fully unsupervised way. We evaluate this agent using a simple model of an ALD tool that incorporates ALD processes with different self-limited surface reaction pathways as well as a non self-limited component. Our results show that agents based on reasoning models like OpenAI's o3 and GPT5 consistently succeeded at completing this optimization task. However, we observed significant run-to-run variability due to the non-deterministic nature of the model's response. In order to understand the logic followed by the reasoning model, the agent uses a two-step process in which the model first generates an open response detailing the reasoning process. This response is then transformed into a structured output. An analysis of these reasoning traces showed that the logic of the model was sound and that its reasoning was based on the notions of self-limited process and saturation expected in the case of ALD. However, the agent can sometimes be misled by its own prior choices when exploring the optimization space. | https://arxiv.org/abs/2601.09980 | Academic Papers | svg |
| 8868ad60a15cc6aebfb5159d66aef8f7c6bf24b2476786040558ac8cdf343a64 | 2026-01-16T00:00:00-05:00 | Clustering-Based User Selection in Federated Learning: Metadata Exploitation for 3GPP Networks | arXiv:2601.10013v1 Announce Type: cross Abstract: Federated learning (FL) enables collaborative model training without sharing raw user data, but conventional simulations often rely on unrealistic data partitioning and current user selection methods ignore data correlation among users. To address these challenges, this paper proposes a metadata-driven FL framework. We first introduce a novel data partition model based on a homogeneous Poisson point process (HPPP), capturing both heterogeneity in data quantity and natural overlap among user datasets. Building on this model, we develop a clustering-based user selection strategy that leverages metadata, such as user location, to reduce data correlation and enhance label diversity across training rounds. Extensive experiments on FMNIST and CIFAR-10 demonstrate that the proposed framework improves model performance, stability, and convergence in non-IID scenarios, while maintaining comparable performance under IID settings. Furthermore, the method shows pronounced advantages when the number of selected users per round is small. These findings highlight the framework's potential for enhancing FL performance in realistic deployments and guiding future standardization. | https://arxiv.org/abs/2601.10013 | Academic Papers | svg |
| 008e0ef6c15a7245308420d1e0e2b35dfe79f3f415c2be5acc672012de23fe6c | 2026-01-16T00:00:00-05:00 | What Understanding Means in AI-Laden Astronomy | arXiv:2601.10038v1 Announce Type: cross Abstract: Artificial intelligence is rapidly transforming astronomical research, yet the scientific community has largely treated this transformation as an engineering challenge rather than an epistemological one. This perspective article argues that philosophy of science offers essential tools for navigating AI's integration into astronomy--conceptual clarity about what "understanding" means, critical examination of assumptions about data and discovery, and frameworks for evaluating AI's roles across different research contexts. Drawing on an interdisciplinary workshop convening astronomers, philosophers, and computer scientists, we identify several tensions. First, the narrative that AI will "derive fundamental physics" from data misconstrues contemporary astronomy as equation-derivation rather than the observation-driven enterprise it is. Second, scientific understanding involves more than prediction--it requires narrative construction, contextual judgment, and communicative achievement that current AI architectures struggle to provide. Third, because narrative and judgment matter, human peer review remains essential--yet AI-generated content flooding the literature threatens our capacity to identify genuine insight. Fourth, while AI excels at well-defined problem-solving, the ill-defined problem-finding that drives breakthroughs appears to require capacities beyond pattern recognition. Fifth, as AI accelerates what is feasible, pursuitworthiness criteria risk shifting toward what AI makes easy rather than what is genuinely important. We propose "pragmatic understanding" as a framework for integration--recognizing AI as a tool that extends human cognition while requiring new norms for validation and epistemic evaluation. Engaging with these questions now may help the community shape the transformation rather than merely react to it. | https://arxiv.org/abs/2601.10038 | Academic Papers | svg |
| 122b2e98625f2b209a0d85d6ec47993694e991fb732a5af524c2402c972c009a | 2026-01-16T00:00:00-05:00 | Instruction Finetuning LLaMA-3-8B Model Using LoRA for Financial Named Entity Recognition | arXiv:2601.10043v1 Announce Type: cross Abstract: Financial named-entity recognition (NER) is an important approach for translating unformatted reports and news into structured knowledge graphs. However, freely available, easy-to-use large language models (LLMs) often fail to distinguish organisations from people, or disregard an actual monetary amount entirely. This paper takes Meta's Llama 3 8B and applies it to financial NER by combining instruction fine-tuning and Low-Rank Adaptation (LoRA). Each annotated sentence is converted into an instruction-input-output triple, enabling the model to learn task descriptions while fine-tuning with small low-rank matrices instead of updating all weights. Using a corpus of 1,693 sentences, our method obtains a micro-F1 score of 0.894 compared with Qwen3-8B, Baichuan2-7B, T5, and BERT-Base. We present dataset statistics, describe training hyperparameters, and provide visualizations of entity density, learning curves, and evaluation metrics. Our results show that instruction tuning combined with parameter-efficient fine-tuning enables state-of-the-art performance on domain-sensitive NER. | https://arxiv.org/abs/2601.10043 | Academic Papers | svg |
| 463b197b96f1cfd85c33a605fa2e9b7fe77354dfec9e2d94665c809f05fe6410 | 2026-01-16T00:00:00-05:00 | Nearest Kronecker Product Decomposition Based Subband Adaptive Filter: Algorithms and Applications | arXiv:2601.10078v1 Announce Type: cross Abstract: Recently, the nearest Kronecker product (NKP) decomposition-based normalized least mean square (NLMS-NKP) algorithm has demonstrated superior convergence performance compared to the conventional NLMS algorithm. However, its convergence rate exhibits significant degradation when processing highly correlated input signals. To address this problem, we propose a type-I NKP-based normalized subband adaptive filter (NSAF) algorithm, namely NSAF-NKP-I. Nevertheless, this algorithm incurs substantially higher computational overhead than the NLMS-NKP algorithm. Remarkably, our enhanced type-II NKP-based NSAF (NSAF-NKP-II) algorithm achieves equivalent convergence performance while substantially reducing computational complexity. Furthermore, to enhance robustness against impulsive noise interference, we develop two robust variants: the maximum correntropy criterion-based robust NSAF-NKP (RNSAF-NKP-MCC) and logarithmic criterion-based robust NSAF-NKP (RNSAF-NKP-LC) algorithms. Additionally, detailed analyses of computational complexity, step-size range, and theoretical steady-state performance are provided for the proposed algorithms. To enhance the practicability of the NSAF-NKP-II algorithm in complex nonlinear environments, we further devise two nonlinear implementations: the trigonometric functional link network-based NKP-NSAF (TFLN-NSAF-NKP) and Volterra series expansion-based NKP-NSAF (Volterra-NKP-NSAF) algorithms. In active noise control (ANC) systems, we further propose the filtered-x NSAF-NKP-II (NKP-FxNSAF) algorithm. Simulation experiments in echo cancellation, sparse system identification, nonlinear processing, and ANC scenarios are conducted to validate the superiority of the proposed algorithms over existing state-of-the-art counterparts. | https://arxiv.org/abs/2601.10078 | Academic Papers | svg |
| 0d86264abf026775cf112feb3ee043abec43d2cc6c3868ea52e7669a5dc5b64c | 2026-01-16T00:00:00-05:00 | Bayesian Model Selection for Complex Flows of Yield Stress Fluids | arXiv:2601.10115v1 Announce Type: cross Abstract: Modeling yield stress fluids in complex flow scenarios presents significant challenges, particularly because conventional rheological characterization methods often yield material parameters that are not fully representative of the intricate constitutive behavior observed in complex conditions. We propose a Bayesian uncertainty quantification framework for the calibration and selection of constitutive models for yield stress fluids, explicitly accounting for uncertainties in both modeling accuracy and experimental observations. The framework addresses the challenge of complex flow modeling by making discrepancies that emanate from rheological measurements explicit and quantifiable. We apply the Bayesian framework to rheological measurements and squeeze flow experiments on Carbopol 980. Our analysis demonstrates that Bayesian model selection yields robust probabilistic predictions and provides an objective assessment of model suitability through evaluated plausibilities. The framework naturally penalizes unnecessary complexity and shows that the optimal model choice depends on the incorporated physics, the prior information, and the availability of data. In rheological settings, the Herschel-Bulkley and biviscous power law models perform well. However, when these rheological outcomes are used as prior information for a rheo-informed squeeze flow analysis, a significant mismatch with the experimental data is observed. This is due to the yield stress inferred from rheological measurements not being representative of the complex squeeze flow case. In contrast, an expert-informed squeeze flow analysis, based on broader priors, yields accurate predictions. These findings highlight the limitations of translating rheological measurements to complex flows and underscore the value of Bayesian approaches in quantifying model bias and guiding model selection under uncertainty. | https://arxiv.org/abs/2601.10115 | Academic Papers | svg |
| 918da3a3ed195386d101fa347783be5ac6da2f5985db48b4bbbe9f337e22e9d1 | 2026-01-16T00:00:00-05:00 | A volume penalization method for solving conjugate scalar transport with interfacial jump conditions | arXiv:2601.10134v1 Announce Type: cross Abstract: Conjugate scalar transport with interfacial jump conditions on complex interfacial geometries is common in thermal and chemical processes, while its accurate and efficient simulations are still quite challenging. In the present study, a novel treatment of a two-phase interface in the volume penalization method, a kind of immersed boundary method, for solving conjugate scalar transport with general interfacial boundary conditions is developed. We first propose an interfacial treatment for solving an advection-diffusion equation with a Neumann boundary condition, and then extend it to general conjugate scalar transport with both interfacial flux and scalar jumps. A one-dimensional diffusion problem is solved to verify the present scheme and demonstrate the advantage of the present scheme in improving accuracy and unifying the governing equations in the two phases with an additional source term representing the local jump condition of the interfacial scalar flux. Then, the present scheme is further applied to fluid-solid coupled scalar diffusion and advection-diffusion problems with the scalar and its flux jumps across the interface. The simulation results of the present scheme generally show good agreement with reference results obtained by body-fitted mesh simulations with average relative deviations less than 3.0%. | https://arxiv.org/abs/2601.10134 | Academic Papers | svg |
| 1748eb7ec8411a25b5e1c429d4c2e4b68b079441affb0fbecc2390f68f30d5d0 | 2026-01-16T00:00:00-05:00 | On 3-Connected Planar Graphs with Unique Orientable Circuit Double Covers | arXiv:2601.10171v1 Announce Type: cross Abstract: A circuit double cover of a bridgeless graph is a collection of even subgraphs such that every edge is contained in exactly two subgraphs of the given collection. Such a circuit double cover describes an embedding of the corresponding graph onto a surface. In this paper, we investigate the well-known Orientable Strong Embedding Conjecture. This conjecture proposes that every bridgeless graph has a circuit double cover describing an embedding on an orientable surface. In a recent paper, we have proved that a 3-connected cubic planar graph G has exactly one orientable circuit double cover if and only if G is the dual graph of an Apollonian network. In this paper, we extend this result by demonstrating that this characterisation applies to any 3-connected planar graph, regardless of whether it is cubic. | https://arxiv.org/abs/2601.10171 | Academic Papers | svg |
| 8a4457d38db0468877ddff1da7f250ae5ada670bb55d154511aab4c52238a8e6 | 2026-01-16T00:00:00-05:00 | Discrete versus continuous -- lattice models and their exact continuous counterparts | arXiv:2601.10184v1 Announce Type: cross Abstract: We review and study the correspondence between discrete lattice/chain models of interacting particles and their continuous counterparts represented by partial differential equations. We study the correspondence problem for nearest neighbour interaction lattice models as well as for multiple-neighbour interaction lattice models, and we gradually proceed from infinite lattices to periodic lattices and finally to finite lattices with fixed ends/zero Dirichlet boundary conditions. The whole study is framed as systematic specialisation of Fourier analysis tools from the continuous to the discrete setting and vice versa, and the correspondence between the discrete and continuous models is examined primarily with regard to the dispersion relation. | https://arxiv.org/abs/2601.10184 | Academic Papers | svg |
| fa7f9149db30e3b77609a67f8f1dddebc6ac35ee4672da5b312f7eadc41314d9 | 2026-01-16T00:00:00-05:00 | Adversarial Hypothesis Testing for Quantum Channels | arXiv:2601.10243v1 Announce Type: cross Abstract: This paper presents a systematic study of adversarial hypothesis testing for both quantum-quantum (QQ) and classical-quantum (CQ) channels. Unlike conventional channel discrimination, we consider a framework where the sender, Alice, selects the channel input adversarially to minimize Bob's distinguishability. We analyze this problem across four settings based on whether Alice employs i.i.d. or general inputs and whether the receiver, Bob, is informed of the specific input choice (allowing his measurement to depend on the input). We characterize the Stein exponents for each setting and reveal a striking distinction in behavior: for QQ channels with i.i.d. inputs, Bob's knowledge of the input significantly enhances distinguishability, yet this advantage vanishes when general inputs are permitted. In contrast, for CQ channels, Bob being informed provides a consistent advantage over the corresponding entanglement-breaking channels for both i.i.d. and general inputs. These results demonstrate a unique phenomenon in adversarial hypothesis testing where the CQ channel does not merely behave as a special case of the QQ channel. | https://arxiv.org/abs/2601.10243 | Academic Papers | svg |
| 91e7252707f23b49fa247007e9f2fea3b20bcdea9ba7d2a243dae76142e1416b | 2026-01-16T00:00:00-05:00 | Cell Behavior Video Classification Challenge, a benchmark for computer vision methods in time-lapse microscopy | arXiv:2601.10250v1 Announce Type: cross Abstract: The classification of microscopy videos capturing complex cellular behaviors is crucial for understanding and quantifying the dynamics of biological processes over time. However, it remains a frontier in computer vision, requiring approaches that effectively model the shape and motion of objects without rigid boundaries, extract hierarchical spatiotemporal features from entire image sequences rather than static frames, and account for multiple objects within the field of view. To this end, we organized the Cell Behavior Video Classification Challenge (CBVCC), benchmarking 35 methods based on three approaches: classification of tracking-derived features, end-to-end deep learning architectures to directly learn spatiotemporal features from the entire video sequence without explicit cell tracking, or ensembling tracking-derived with image-derived features. We discuss the results achieved by the participants and compare the potential and limitations of each approach, serving as a basis to foster the development of computer vision methods for studying cellular dynamics. | https://arxiv.org/abs/2601.10250 | Academic Papers | svg |
| 92293915edd70a75e4fe74f9d93aeecafdd282354d80e0fc015fd767414ff4be | 2026-01-16T00:00:00-05:00 | Sim2Real Deep Transfer for Per-Device CFO Calibration | arXiv:2601.10264v1 Announce Type: cross Abstract: Carrier Frequency Offset (CFO) estimation in Orthogonal Frequency Division Multiplexing (OFDM) systems faces significant performance degradation across heterogeneous software-defined radio (SDR) platforms due to uncalibrated hardware impairments. Existing deep neural network (DNN)-based approaches lack device-level adaptation, limiting their practical deployment. This paper proposes a Sim2Real transfer learning framework for per-device CFO calibration, combining simulation-driven pretraining with lightweight receiver adaptation. A backbone DNN is pre-trained on synthetic OFDM signals incorporating parametric hardware distortions (e.g., phase noise, IQ imbalance), enabling generalized feature learning without costly cross-device data collection. Subsequently, only the regression layers are fine-tuned using $1,000$ real frames per target device, preserving hardware-agnostic knowledge while adapting to device-specific impairments. Experiments across three SDR families (USRP B210, USRP N210, HackRF One) achieve $30\times$ BER reduction compared to conventional CP-based methods under indoor multipath conditions. The framework bridges the simulation-to-reality gap for robust CFO estimation, enabling cost-effective deployment in heterogeneous wireless systems. | https://arxiv.org/abs/2601.10264 | Academic Papers | svg |
| 2d08c19e28561c3fa93de6bcdae98849de7dea087d0268f4387771a7c701dc7a | 2026-01-16T00:00:00-05:00 | On gradient stability in nonlinear PDE models and inference in interacting particle systems | arXiv:2601.10326v1 Announce Type: cross Abstract: We consider general parameter to solution maps $\theta \mapsto \mathcal G(\theta)$ of non-linear partial differential equations and describe an approach based on a Banach space version of the implicit function theorem to verify the gradient stability condition of Nickl & Wang (JEMS 2024) for the underlying non-linear inverse problem, providing also injectivity estimates and corresponding statistical identifiability results. We illustrate our methods in two examples involving a non-linear reaction diffusion system as well as a McKean--Vlasov interacting particle model, both with periodic boundary conditions. We apply our results to prove the polynomial time convergence of a Langevin-type algorithm sampling the posterior measure of the interaction potential arising from a discrete aggregate measurement of the interacting particle system. | https://arxiv.org/abs/2601.10326 | Academic Papers | svg |
| 96bbde4d4fcb23216db181319fb251a05e2998a7ba83bfe972519a67e3dee864 | 2026-01-16T00:00:00-05:00 | H-EFT-VA: An Effective-Field-Theory Variational Ansatz with Provable Barren Plateau Avoidance | arXiv:2601.10479v1 Announce Type: cross Abstract: Variational Quantum Algorithms (VQAs) are critically threatened by the Barren Plateau (BP) phenomenon. In this work, we introduce the H-EFT Variational Ansatz (H-EFT-VA), an architecture inspired by Effective Field Theory (EFT). By enforcing a hierarchical "UV-cutoff" on initialization, we theoretically restrict the circuit's state exploration, preventing the formation of approximate unitary 2-designs. We provide a rigorous proof that this localization guarantees an inverse-polynomial lower bound on the gradient variance: $\mathrm{Var}[\partial\theta] \in \Omega(1/\mathrm{poly}(N))$. Crucially, unlike approaches that avoid BPs by limiting entanglement, we demonstrate that H-EFT-VA maintains volume-law entanglement and near-Haar purity, ensuring sufficient expressibility for complex quantum states. Extensive benchmarking across 16 experiments -- including Transverse Field Ising and Heisenberg XXZ models -- confirms a 109x improvement in energy convergence and a 10.7x increase in ground-state fidelity over standard Hardware-Efficient Ansätze (HEA), with a statistical significance of $p < 10^{-88}$. | https://arxiv.org/abs/2601.10479 | Academic Papers | svg |
| 2693bc309986ab9316e962c5f171dc050bd1d7ff568e4ceeb47fc3fa68cb51e8 | 2026-01-16T00:00:00-05:00 | CROCS: A Two-Stage Clustering Framework for Behaviour-Centric Consumer Segmentation with Smart Meter Data | arXiv:2601.10494v1 Announce Type: cross Abstract: With grid operators confronting rising uncertainty from renewable integration and a broader push toward electrification, Demand-Side Management (DSM) -- particularly Demand Response (DR) -- has attracted significant attention as a cost-effective mechanism for balancing modern electricity systems. Unprecedented volumes of consumption data from a continuing global deployment of smart meters enable consumer segmentation based on real usage behaviours, promising to inform the design of more effective DSM and DR programs. However, existing clustering-based segmentation methods insufficiently reflect the behavioural diversity of consumers, often relying on rigid temporal alignment, and faltering in the presence of anomalies, missing data, or large-scale deployments. To address these challenges, we propose a novel two-stage clustering framework -- Clustered Representations Optimising Consumer Segmentation (CROCS). In the first stage, each consumer's daily load profiles are clustered independently to form a Representative Load Set (RLS), providing a compact summary of their typical diurnal consumption behaviours. In the second stage, consumers are clustered using the Weighted Sum of Minimum Distances (WSMD), a novel set-to-set measure that compares RLSs by accounting for both the prevalence and similarity of those behaviours. Finally, community detection on the WSMD-induced graph reveals higher-order prototypes that embody the shared diurnal behaviours defining consumer groups, enhancing the interpretability of the resulting clusters. Extensive experiments on both synthetic and real Australian smart meter datasets demonstrate that CROCS captures intra-consumer variability, uncovers both synchronous and asynchronous behavioural similarities, and remains robust to anomalies and missing data, while scaling efficiently through natural parallelisation. These results... | https://arxiv.org/abs/2601.10494 | Academic Papers | svg |
b6cc58c9286a14f5e5eedf1606c837b1a9a491d429a3454c0f5f665f824d8bb6
|
2026-01-16T00:00:00-05:00
|
The incompatibility of the Condorcet winner and loser criteria with positive involvement and resolvability
|
arXiv:2601.10506v1 Announce Type: cross Abstract: We prove that there is no preferential voting method satisfying the Condorcet winner and loser criteria, positive involvement (if a candidate $x$ wins in an initial preference profile, then adding a voter who ranks $x$ uniquely first cannot cause $x$ to lose), and resolvability (if $x$ initially ties for winning, then $x$ can be made the unique winner by adding a single voter). In a previous note, we proved an analogous result assuming an additional axiom of ordinal margin invariance, which we now show is unnecessary for an impossibility theorem, at least if the desired voting method is defined for five-candidate elections.
|
https://arxiv.org/abs/2601.10506
|
Academic Papers
|
svg
|
ea9f8a20354c562ca628dabf3bcebfaa85e7e1cb3d69079fc1d172dbb357b616
|
2026-01-16T00:00:00-05:00
|
Coarsening Causal DAG Models
|
arXiv:2601.10531v1 Announce Type: cross Abstract: Directed acyclic graphical (DAG) models are a powerful tool for representing causal relationships among jointly distributed random variables, especially concerning data from across different experimental settings. However, it is not always practical or desirable to estimate a causal model at the granularity of given features in a particular dataset. There is a growing body of research on causal abstraction to address such problems. We contribute to this line of research by (i) providing novel graphical identifiability results for practically-relevant interventional settings, (ii) proposing an efficient, provably consistent algorithm for directly learning abstract causal graphs from interventional data with unknown intervention targets, and (iii) uncovering theoretical insights about the lattice structure of the underlying search space, with connections to the field of causal discovery more generally. As proof of concept, we apply our algorithm on synthetic and real datasets with known ground truths, including measurements from a controlled physical system with interacting light intensity and polarization.
|
https://arxiv.org/abs/2601.10531
|
Academic Papers
|
svg
|
34da02c0f0268950bb7e4a511511f9e8b9d74c4782f3a568287a21ede73a850d
|
2026-01-16T00:00:00-05:00
|
A Mirror-Descent Algorithm for Computing the Petz-R\'enyi Capacity of Classical-Quantum Channels
|
arXiv:2601.10558v1 Announce Type: cross Abstract: We study the computation of the $\alpha$-R\'enyi capacity of a classical-quantum (c-q) channel for $\alpha\in(0,1)$. We propose an exponentiated-gradient (mirror descent) iteration that generalizes the Blahut-Arimoto algorithm. Our analysis establishes relative smoothness with respect to the entropy geometry, guaranteeing a global sublinear convergence of the objective values. Furthermore, under a natural tangent-space nondegeneracy condition (and a mild spectral lower bound in one regime), we prove local linear (geometric) convergence in Kullback-Leibler divergence on a truncated probability simplex, with an explicit contraction factor once the local curvature constants are bounded.
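The exponentiated-gradient iteration can be sketched in the classical special case (a deliberate simplification: the paper treats Petz-Rényi capacities of classical-quantum channels, while this sketch computes the Shannon capacity of a classical channel, where step size eta = 1 recovers the standard Blahut-Arimoto update).

```python
# Entropic mirror descent (exponentiated gradient) on the probability simplex,
# sketched for the Shannon capacity of a classical channel W[x][y] = W(y|x).
import math

def capacity_md(W, iters=200, eta=1.0):
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n                      # start at the uniform prior
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # Gradient of mutual information at p: d[x] = D(W(.|x) || q), in nats.
        d = [sum(W[x][y] * math.log(W[x][y] / q[y])
                 for y in range(m) if W[x][y] > 0) for x in range(n)]
        # Entropic mirror step: multiplicative update, then renormalise.
        p = [p[x] * math.exp(eta * d[x]) for x in range(n)]
        z = sum(p)
        p = [px / z for px in p]
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    # Report the achieved mutual information in bits.
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
               for x in range(n) for y in range(m) if W[x][y] > 0)

# Binary symmetric channel with flip probability 0.1: capacity = 1 - h2(0.1).
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(capacity_md(bsc))
```

The relative-smoothness analysis in the abstract concerns exactly this entropy-geometry step, generalised to the quantum objective.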
|
https://arxiv.org/abs/2601.10558
|
Academic Papers
|
svg
|
2c853272c465fd03832f43fca2f3c87b6c2a2d217a49845deae22c14a9e22fb9
|
2026-01-16T00:00:00-05:00
|
Achievable Degrees of Freedom Analysis and Optimization in Massive MIMO via Characteristic Mode Analysis
|
arXiv:2601.10576v1 Announce Type: cross Abstract: Massive multiple-input multiple-output (MIMO) is regarded as a critical technology in 6G communications, providing large degrees of freedom (DoF) to improve multiplexing gain. This paper introduces characteristic mode analysis (CMA) to derive the achievable DoF. Unlike existing works primarily focusing on the DoF of the wireless channel, the excitation and radiation properties of antennas are also involved in our DoF analysis, as they influence the number of independent data streams available for communication in a MIMO system. Specifically, we model the excitation and radiation properties of the transceiver antennas using CMA. The CMA-based DoF analysis framework is established and the achievable DoF is derived. A characteristic mode optimization problem for the antennas is then formulated to maximize the achievable DoF. A case study in which reconfigurable holographic surface (RHS) antennas are deployed at the transceiver is investigated, and a CMA-based genetic algorithm is proposed to solve the above problem. By changing the characteristic modes' electric field and surface current distributions of the RHS, the achievable DoF is enhanced. Full-wave simulation verifies the theoretical analysis of the achievable DoF and shows that, via the reconfiguration of the RHS based on the proposed algorithm, the achievable DoF is improved.
|
https://arxiv.org/abs/2601.10576
|
Academic Papers
|
svg
|
f64044ddd73212725bde46997d9598afb5e6197f83cdbd03ab41e941a8de6db4
|
2026-01-16T00:00:00-05:00
|
Searching for Quantum Effects in the Brain: A Bell-Type Test for Nonclassical Latent Representations in Autoencoders
|
arXiv:2601.10588v1 Announce Type: cross Abstract: Whether neural information processing is entirely classical or involves quantum-mechanical elements remains an open question. Here we propose a model-agnostic, information-theoretic test of nonclassicality that bypasses microscopic assumptions and instead probes the structure of neural representations themselves. Using autoencoders as a transparent model system, we introduce a Bell-type consistency test in latent space, and ask whether decoding statistics obtained under multiple readout contexts can be jointly explained by a single positive latent-variable distribution. By shifting the search for quantum-like signatures in neural systems from microscopic dynamics to experimentally testable constraints on information processing, this work opens a new route for probing the fundamental physics of neural computation.
|
https://arxiv.org/abs/2601.10588
|
Academic Papers
|
svg
|
1224af741388de2643f6ec9f5ea7a93baca0fc48178968acc7bba94a456335f9
|
2026-01-16T00:00:00-05:00
|
Multi-Objective Pareto-Front Optimization for Efficient Adaptive VVC Streaming
|
arXiv:2601.10607v1 Announce Type: cross Abstract: Adaptive streaming has facilitated improved video delivery over the past years. A balance among coding performance objectives such as bitrate, video quality, and decoding complexity is required to achieve efficient, content- and codec-dependent, adaptive video streaming. This paper proposes a multi-objective Pareto-front (PF) optimization framework to construct quality-monotonic, content-adaptive bitrate ladders for Versatile Video Coding (VVC) streaming that jointly optimize video quality, bitrate, and decoding time, which is used as a practical proxy for decoding energy. Two strategies are introduced: the Joint Rate-Quality-Time Pareto Front (JRQT-PF) and the Joint Quality-Time Pareto Front (JQT-PF), each exploring different tradeoff formulations and objective prioritizations. The ladders are constructed under quality monotonicity constraints during adaptive streaming to ensure a consistent Quality of Experience (QoE). Experiments are conducted on a large-scale UHD dataset (Inter-4K), with quality assessed using PSNR, VMAF, and XPSNR, and complexity measured via decoding time and energy consumption. The JQT-PF method achieves 11.76% average bitrate savings while reducing average decoding time by 0.29% to maintain the same XPSNR, compared to a widely-used fixed ladder. More aggressive configurations yield up to 27.88% bitrate savings at the cost of increased complexity. The JRQT-PF strategy, on the other hand, offers more controlled tradeoffs, achieving 6.38% bitrate savings and 6.17% decoding time reduction. This framework outperforms existing methods, including fixed ladders, VMAF- and XPSNR-based dynamic resolution selection, and complexity-aware benchmarks. The results confirm that PF optimization with decoding time constraints enables sustainable, high-quality streaming tailored to network and device capabilities.
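At the core of any such framework is non-dominated filtering over the three objectives. A minimal sketch (toy numbers, not the paper's data): keep an encoding only if no other encoding is at least as good in every objective and strictly better in at least one.

```python
# Non-dominated filtering over (bitrate, quality, decoding time):
# lower bitrate and decoding time are better, higher quality is better.

def dominates(a, b):
    better_eq = a[0] <= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly = a[0] < b[0] or a[1] > b[1] or a[2] < b[2]
    return better_eq and strictly

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

encodings = [
    (1000, 80.0, 1.0),   # cheap, low quality
    (3000, 92.0, 1.4),   # mid bitrate, good quality
    (3000, 90.0, 1.6),   # dominated by the previous point
    (8000, 97.0, 2.5),   # expensive, high quality
]
print(pareto_front(encodings))
```

A bitrate ladder is then selected from the surviving front under the quality-monotonicity constraint described above.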
|
https://arxiv.org/abs/2601.10607
|
Academic Papers
|
svg
|
9e0babe25b3404fc486d5ec9dae59a7123605aa0c17ec19aea357c51eb25e389
|
2026-01-16T00:00:00-05:00
|
Differentially Private Inference for Longitudinal Linear Regression
|
arXiv:2601.10626v1 Announce Type: cross Abstract: Differential Privacy (DP) provides a rigorous framework for releasing statistics while protecting individual information present in a dataset. Although substantial progress has been made on differentially private linear regression, existing methods almost exclusively address the item-level DP setting, where each user contributes a single observation. Many scientific and economic applications instead involve longitudinal or panel data, in which each user contributes multiple dependent observations. In these settings, item-level DP offers inadequate protection, and user-level DP - shielding an individual's entire trajectory - is the appropriate privacy notion. We develop a comprehensive framework for estimation and inference in longitudinal linear regression under user-level DP. We propose a user-level private regression estimator based on aggregating local regressions, and we establish finite-sample guarantees and asymptotic normality under short-range dependence. For inference, we develop a privatized, bias-corrected covariance estimator that is automatically heteroskedasticity- and autocorrelation-consistent. These results provide the first unified framework for practical user-level DP estimation and inference in longitudinal linear regression under dependence, with strong theoretical guarantees and promising empirical performance.
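The "aggregate local regressions" idea can be sketched as follows. This is a hedged illustration of the pattern only: each user fits a regression on their own trajectory, per-user estimates are clipped in norm (bounding any single user's influence), averaged, and Gaussian noise calibrated to the clipping radius is added. The paper's estimator, noise calibration, and bias correction are more sophisticated.

```python
# Sketch of user-level DP slope estimation via aggregated local regressions.
# Assumptions: 1-D regression through the origin; clip and sigma are
# illustrative constants, not the paper's calibration.
import random

def local_ols(xs, ys):
    # Least-squares slope through the origin on one user's trajectory.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def user_level_dp_slope(users, clip=5.0, sigma=0.5, seed=0):
    rng = random.Random(seed)
    est = []
    for xs, ys in users:
        b = local_ols(xs, ys)
        est.append(max(-clip, min(clip, b)))     # clip each user's contribution
    mean = sum(est) / len(est)
    # Gaussian mechanism: the clipped mean changes by at most 2*clip/len(users)
    # when one user's entire trajectory is replaced.
    noise = rng.gauss(0.0, sigma * 2 * clip / len(users))
    return mean + noise

users = [([1, 2, 3], [2.1, 3.9, 6.2]), ([1, 2, 4], [1.8, 4.1, 8.0])]
print(user_level_dp_slope(users))
```

Clipping at the user level, rather than the observation level, is what upgrades the guarantee from item-level to user-level DP.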
|
https://arxiv.org/abs/2601.10626
|
Academic Papers
|
svg
|
0135f38a1aa5e50fd956faf3093ca1881db1a1b93aa33dd5336cabc8dd070f7e
|
2026-01-16T00:00:00-05:00
|
Parametric RDT approach to computational gap of symmetric binary perceptron
|
arXiv:2601.10628v1 Announce Type: cross Abstract: We study the potential presence of statistical-computational gaps (SCG) in symmetric binary perceptrons (SBP) via a parametric utilization of \emph{fully lifted random duality theory} (fl-RDT) [96]. A structural change from decreasingly to arbitrarily ordered $c$-sequence (a key fl-RDT parametric component) is observed on the second lifting level and associated with a \emph{satisfiability} ($\alpha_c$) -- \emph{algorithmic} ($\alpha_a$) constraints density threshold change, thereby suggesting the potential existence of a nonzero computational gap $SCG=\alpha_c-\alpha_a$. The second level estimate is shown to match the theoretical $\alpha_c$ whereas the $r\rightarrow \infty$ level one is proposed to correspond to $\alpha_a$. For example, for the canonical SBP ($\kappa=1$ margin) we obtain $\alpha_c\approx 1.8159$ on the second and $\alpha_a\approx 1.6021$ (with a converging tendency towards the $\sim 1.59$ range) on the seventh level. Our propositions concur remarkably well with recent literature: (i) in [20] the local entropy replica approach predicts $\alpha_{LE}\approx 1.58$ as the onset of clustering defragmentation (the presumed driving force behind failures of locally improving algorithms); (ii) in the $\alpha\rightarrow 0$ regime we obtain on the third lifting level $\kappa\approx 1.2385\sqrt{\frac{\alpha_a}{-\log\left ( \alpha_a \right ) }}$ which qualitatively matches the overlap gap property (OGP) based predictions of [43] and identically matches the local entropy based predictions of [24]; (iii) the $c$-sequence ordering change phenomenology mirrors the one observed in the asymmetric binary perceptron (ABP) in [98] and the negative Hopfield model in [100]; and (iv) as in [98,100], we design a CLuP based algorithm whose practical performance closely matches the proposed theoretical predictions.
|
https://arxiv.org/abs/2601.10628
|
Academic Papers
|
svg
|
de4c6a20929e35587294dda875ba9108fe809cd1eeaeb9f21f5eda0ca552d967
|
2026-01-16T00:00:00-05:00
|
Classification Imbalance as Transfer Learning
|
arXiv:2601.10630v1 Announce Type: cross Abstract: Classification imbalance arises when one class is much rarer than the other. We frame this setting as transfer learning under label (prior) shift between an imbalanced source distribution induced by the observed data and a balanced target distribution under which performance is evaluated. Within this framework, we study a family of oversampling procedures that augment the training data by generating synthetic samples from an estimated minority-class distribution to roughly balance the classes, among which the celebrated SMOTE algorithm is a canonical example. We show that the excess risk decomposes into the rate achievable under balanced training (as if the data had been drawn from the balanced target distribution) and an additional term, the cost of transfer, which quantifies the discrepancy between the estimated and true minority-class distributions. In particular, we show that the cost of transfer for SMOTE dominates that of bootstrapping (random oversampling) in moderately high dimensions, suggesting that we should expect bootstrapping to have better performance than SMOTE in general. We corroborate these findings with experimental evidence. More broadly, our results provide guidance for choosing among augmentation strategies for imbalanced classification.
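The two oversampling strategies compared in the abstract can be sketched side by side: bootstrapping resamples minority points with replacement, while SMOTE interpolates between a minority point and one of its nearest minority neighbours. The toy data and neighbour count below are illustrative.

```python
# Minimal sketches of bootstrap (random) oversampling and SMOTE.
import random

def bootstrap_oversample(minority, n_new, rng):
    # Resample existing minority points with replacement.
    return [rng.choice(minority) for _ in range(n_new)]

def smote_oversample(minority, n_new, rng, k=2):
    out = []
    for _ in range(n_new):
        p = rng.choice(minority)
        # k nearest minority neighbours of p (excluding p itself).
        nbrs = sorted((q for q in minority if q is not p),
                      key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))[:k]
        q = rng.choice(nbrs)
        lam = rng.random()
        # Synthetic point on the segment between p and its neighbour q.
        out.append(tuple(a + lam * (b - a) for a, b in zip(p, q)))
    return out

rng = random.Random(0)
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(smote_oversample(minority, 3, rng))
```

The paper's "cost of transfer" quantifies how far the distribution of such synthetic points lies from the true minority-class distribution; in moderately high dimensions the interpolation step is what drives SMOTE's cost above bootstrapping's.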
|
https://arxiv.org/abs/2601.10630
|
Academic Papers
|
svg
|
ccff44838bfe6c96351f902c7714a77111a41d2068a2d4bb6cae1815febbf438
|
2026-01-16T00:00:00-05:00
|
Adjusted Similarity Measures and a Violation of Expectations
|
arXiv:2601.10641v1 Announce Type: cross Abstract: Adjusted similarity measures, such as Cohen's kappa for inter-rater reliability and the adjusted Rand index used to compare clustering algorithms, are a vital tool for comparing discrete labellings. These measures are intended to have the property of 0 expectation under a null distribution and maximum value 1 under maximal similarity to aid in interpretation. Measures are frequently adjusted with respect to the permutation distribution for historic and analytic reasons. There is currently renewed interest in considering other null models more appropriate for context, such as clustering ensembles permitting a random number of identified clusters. The purpose of this work is twofold: (1) to generalize the study of the adjustment operator to general null models and to a more general procedure which includes statistical standardization as a special case and (2) to identify sufficient conditions for the adjustment operator to produce the intended properties, where the sufficient conditions relate to whether and how observed data are incorporated into null distributions. We demonstrate how violations of the sufficient conditions may lead to substantial breakdown, such as by producing a non-positive measure under traditional adjustment rather than one with mean 0, or by producing a measure which is deterministically 0 under statistical standardization.
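The adjustment operator the abstract refers to has a simple generic form: given an observed similarity s, its expectation e under a null model, and its maximum m, the adjusted measure is (s - e) / (m - e), which has expectation 0 under the null and value 1 at maximal similarity. Cohen's kappa is one instance:

```python
# Generic adjustment operator and Cohen's kappa as a worked instance.

def adjust(s, e, m):
    return (s - e) / (m - e)

def kappa(table):
    # Cohen's kappa for a 2x2 confusion table [[a, b], [c, d]].
    (a, b), (c, d) = table
    n = a + b + c + d
    observed = (a + d) / n                                 # raw agreement
    # Expected agreement under independent marginals (the null model).
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return adjust(observed, expected, 1.0)

print(kappa([[20, 5], [10, 15]]))  # observed 0.7, expected 0.5 -> 0.4
```

The paper's point is that the choice of null model behind `expected`, and whether the observed data enter it, determines whether the adjusted measure actually has the intended mean-0 property.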
|
https://arxiv.org/abs/2601.10641
|
Academic Papers
|
svg
|
4caf6d906d2e3a257752764075b6ec11c62c58d983e1662110a0ad7c004c215f
|
2026-01-16T00:00:00-05:00
|
Transforming Crises into Opportunities: From Chaos to Urban Antifragility
|
arXiv:2601.10658v1 Announce Type: cross Abstract: Urban crises - floods, pandemics, economic shocks, and conflicts - function as accelerators of urban change, exposing structural vulnerabilities while creating windows for reinvention. Building on a prior theoretical contribution that identified fifteen principles of urban antifragility, this paper tests and operationalizes the framework through an empirical assessment of 26 cities selected for their post-crisis adaptation trajectories. Using a tailored diagnostic methodology, we benchmark cities' Stress Response Strategies (SRS) and then evaluate Urban Development Trajectories (UDT) across four weighted dimensions, positioning each case along a fragility-robustness-resilience-antifragility continuum and applying a balanced-threshold rule to confirm antifragile status. Results show that "resilience enhanced by innovation and technology" is the most effective response typology (86.9/100), and that six cities meet the antifragile trajectory criteria. By mapping best practices to activated principles and analysing co-activations, the study identifies a robust "hard core" of principles - Sustainable Resilience (O), Strategic Diversity (F), Proactive Innovation (I), and Active Prevention (N) - supplemented by operational enablers (e.g., anticipation, mobilization, shock absorption). The paper concludes by proposing an evidence-based, SDG-aligned operational model that links high-impact principle pairings to measurable indicators, offering a practical roadmap for cities seeking to convert crises into sustained transformation. Keywords: Post-crisis strategies, Urban antifragility, Sustainable cities and communities, Disaster resilience and urban regeneration, Risk governance and Black Swan adaptation.
|
https://arxiv.org/abs/2601.10658
|
Academic Papers
|
svg
|
9976fcaa5f77ed901ce40b41fc7f39c27dbc67b46094ddcb1c6015dd002bd1da
|
2026-01-16T00:00:00-05:00
|
Optimal lower bound for quantum channel tomography in away-from-boundary regime
|
arXiv:2601.10683v1 Announce Type: cross Abstract: Consider quantum channels with input dimension $d_1$, output dimension $d_2$ and Kraus rank at most $r$. Any such channel must satisfy the constraint $rd_2\geq d_1$, and the parameter regime $rd_2=d_1$ is called the boundary regime. In this paper, we show an optimal query lower bound $\Omega(rd_1d_2/\varepsilon^2)$ for quantum channel tomography to within diamond norm error $\varepsilon$ in the away-from-boundary regime $rd_2\geq 2d_1$, matching the existing upper bound $O(rd_1d_2/\varepsilon^2)$. In particular, this lower bound fully settles the query complexity for the commonly studied case of equal input and output dimensions $d_1=d_2=d$ with $r\geq 2$, in sharp contrast to the unitary case $r=1$ where Heisenberg scaling $\Theta(d^2/\varepsilon)$ is achievable.
|
https://arxiv.org/abs/2601.10683
|
Academic Papers
|
svg
|
ce75962e20db4ef11b27f3426ac336a1ae4b15058867dbd72d6d54b4d74beead
|
2026-01-16T00:00:00-05:00
|
Constant-Depth Unitary Preparation of Dicke States
|
arXiv:2601.10693v1 Announce Type: cross Abstract: Dicke states serve as a critical resource in quantum metrology, communication, and computation. However, unitary preparation of Dicke states is limited to logarithmic depth in standard circuit models and existing constant-depth protocols require measurement and feed-forward. In this work, we present the first unitary, constant-depth protocols for exact Dicke state preparation. We overcome the logarithmic-depth barrier by moving beyond the standard circuit model and leveraging global interactions (native to architectures such as neutral atoms and trapped ions). Specifically, utilizing unbounded CZ gates (i.e. within the QAC$^0$ circuit class), we offer circuits for exact computation of constant-weight Dicke states, using polynomial ancillae, and approximation of weight-1 Dicke states (i.e. $W$ states), using only constant ancillae. Granted additional access to the quantum FAN-OUT operation (i.e. upgrading to the QAC$_f^0$ circuit class), we also achieve exact preparation of arbitrary-weight Dicke states, with polynomial ancillae. These protocols distinguish the constant-depth capabilities of quantum architectures based on connectivity and offer a novel path toward resolving a long-standing quantum complexity conjecture.
|
https://arxiv.org/abs/2601.10693
|
Academic Papers
|
svg
|
46877642c049ea06cf8458ac299e20e88d3fcce0281efa7b0a93fb78115f1a29
|
2026-01-16T00:00:00-05:00
|
Quantum Maxwell Erasure Decoder for qLDPC codes
|
arXiv:2601.10713v1 Announce Type: cross Abstract: We introduce a quantum Maxwell erasure decoder for CSS quantum low-density parity-check (qLDPC) codes that extends peeling with bounded guessing. Guesses are tracked symbolically and can be eliminated by restrictive checks, giving a tunable tradeoff between complexity and performance via a guessing budget: an unconstrained budget recovers Maximum-Likelihood (ML) performance, while a constant budget yields linear-time decoding and approximates ML. We provide theoretical guarantees on asymptotic performance and demonstrate strong performance on bivariate bicycle and quantum Tanner codes.
|
https://arxiv.org/abs/2601.10713
|
Academic Papers
|
svg
|
6195f4e05bc730dcdb1a497c2d1ed2b978467923c9e82c3d52f0b743cfeee148
|
2026-01-16T00:00:00-05:00
|
Digital Circuits as Moore Machines
|
arXiv:1003.0522v2 Announce Type: replace Abstract: This paper illustrates a technique for specifying the timing, logical operation, and compositional circuit design of digital circuits in terms of ordinary state machines with output (Moore machines). The method is illustrated here with specifications of gates, latches, and other simple circuits and via the construction of devices starting with a SR latch built from gates. The method is based on "classical" automata and recursive functions on strings (sequential functions).
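The paper's viewpoint can be sketched concretely: a latch is a Moore machine whose state is the stored bit and whose output depends only on the state, with circuit behaviour given by a sequential function on input strings. The transition convention below (set wins over hold, s = r = 1 treated as "hold") is an illustrative choice for this sketch, not necessarily the paper's.

```python
# A unit-delay SR latch as a Moore machine: state = stored bit,
# output = state, transitions driven by (s, r) input pairs.

def step(state, inputs):
    s, r = inputs
    if s and not r:
        return 1            # set
    if r and not s:
        return 0            # reset
    return state            # hold on (0, 0); (1, 1) also held in this sketch

def run(inputs, state=0):
    # Sequential function: consume an input string, return the final output.
    for i in inputs:
        state = step(state, i)
    return state            # Moore output is a function of the state alone

print(run([(1, 0), (0, 0), (0, 1), (0, 0)]))  # set, hold, reset, hold -> 0
```

Composition of such machines (feeding one machine's output stream into another's input) is how the paper builds larger devices from gates.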
|
https://arxiv.org/abs/1003.0522
|
Academic Papers
|
svg
|
966b35310b3f32ba9765789d43245999dc30c700a26090cd010ef7c38805b209
|
2026-01-16T00:00:00-05:00
|
Parametric equations for temporal style assertions
|
arXiv:1612.01630v3 Announce Type: replace Abstract: Temporal logic provided an appealing approach to specifying properties of operating systems and other "reactive" software by allowing propositions to be qualified by "when" they must be true. This paper shows how to get the same effect, with a finer control over specification and a compositional notion of state, using ordinary working mathematics, without the weight of formal logic, by using sequential functions which are an alternate representation of Moore type state machines.
|
https://arxiv.org/abs/1612.01630
|
Academic Papers
|
svg
|
e02e226abe8ac2d2ae858db91f603d5786d197c6b9e6092498b6391c101aebb0
|
2026-01-16T00:00:00-05:00
|
Spatial As Deep: Spatial CNN for Traffic Scene Understanding
|
arXiv:1712.06080v2 Announce Type: replace Abstract: Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer-by-layer. Although CNNs have shown strong capability to extract semantics from raw pixels, their capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored. These relationships are important for learning semantic objects with strong shape priors but weak appearance coherence, such as traffic lanes, which are often occluded or not even painted on the road surface as shown in Fig. 1 (a). In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. SCNN is particularly suitable for long continuous shape structures or large objects with strong spatial relationships but few appearance cues, such as traffic lanes, poles, and walls. We apply SCNN to a newly released, very challenging traffic lane detection dataset and the Cityscapes dataset. The results show that SCNN can learn the spatial relationships for structured output and significantly improves the performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) on the lane detection dataset by 8.7% and 4.6% respectively. Moreover, our SCNN won 1st place in the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%.
|
https://arxiv.org/abs/1712.06080
|
Academic Papers
|
svg
|
8d952a265bc594f882c364927e0942ac68c2de991fe125eb10abf586fb1f17e8
|
2026-01-16T00:00:00-05:00
|
NFV Platform Design: A Survey
|
arXiv:2002.11059v4 Announce Type: replace Abstract: Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing the purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm with much lower capital expenditure (CapEx) and operational expenditure (OpEx). Nonetheless, just like any complex system, NFV platforms commonly consist of numerous software and hardware components and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of convoluted alternatives makes it extremely arduous for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focused on NFV platforms or attempted to explore the design space. In this paper, we present a comprehensive survey on NFV platform design. Our study solely targets existing NFV platform implementations. We begin with an architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on the principal purpose of design. Then we thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We believe that our study gives a detailed guideline for network operators or service providers to choose or implement the most appropriate NFV platforms based on their respective requirements. Note that this report serves as a complementary document for a published IEEE TNSM paper [1]. We will periodically update this document to include the newly proposed NFV platforms and design choices.
|
https://arxiv.org/abs/2002.11059
|
Academic Papers
|
svg
|
f73a088afa9164558c45afdcdcb351d06fed7d2ecd8258f9eefb74334a1ccc00
|
2026-01-16T00:00:00-05:00
|
Data-Driven Feature Tracking for Event Cameras With and Without Frames
|
arXiv:2211.12826v4 Announce Type: replace Abstract: Because of their high temporal resolution, increased resilience to motion blur, and very sparse output, event cameras have been shown to be ideal for low-latency and low-bandwidth feature tracking, even in challenging scenarios. Existing feature tracking methods for event cameras are either handcrafted or derived from first principles but require extensive parameter tuning, are sensitive to noise, and do not generalize to different scenarios due to unmodeled effects. To tackle these deficiencies, we introduce the first data-driven feature tracker for event cameras, which leverages low-latency events to track features detected in an intensity frame. We achieve robust performance via a novel frame attention module, which shares information across feature tracks. Our tracker is designed to operate in two distinct configurations: solely with events or in a hybrid mode incorporating both events and frames. The hybrid model offers two setups: an aligned configuration where the event and frame cameras share the same viewpoint, and a hybrid stereo configuration where the event camera and the standard camera are positioned side-by-side. This side-by-side arrangement is particularly valuable as it provides depth information for each feature track, enhancing its utility in applications such as visual odometry and simultaneous localization and mapping.
|
https://arxiv.org/abs/2211.12826
|
Academic Papers
|
svg
|
51090f20accb286b96caa0e448d1a8a75ea0c3fe84f06e3c54eaf4fdc3a6c486
|
2026-01-16T00:00:00-05:00
|
Genetic Algorithm Based Combinatorial Optimization for the Optimal Design of Water Distribution Network of Gurudeniya Service Zone, Sri Lanka
|
arXiv:2304.09720v4 Announce Type: replace Abstract: This paper presents a detailed Genetic Algorithm (GA) based combinatorial optimization method for the optimal design of the water distribution network (WDN) of Gurudeniya Service Zone, Sri Lanka. The Genetic Algorithm mimics the survival-of-the-fittest principle of nature to develop a search process. The methodology employs fuzzy combinations of pipe diameters and checks their suitability as cost-effective optimal design solutions. Furthermore, the hydraulic constraints are evaluated implicitly within the GA itself in its search for the global optimum solution. Upon analysis, this approach delivered agreeable design outputs. In addition, a comparison between the results obtained by a previous study based on the Honey Bee Mating Optimization (HBMO) algorithm and those obtained by the GA-based approach demonstrates the competency of the GA for the optimal design of the water distribution network in Gurudeniya Service Zone, Sri Lanka.
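The GA loop described above can be sketched compactly, under illustrative assumptions: each chromosome assigns one of a few commercial diameters to each pipe, cost grows with diameter, and the hydraulic constraint is mocked as a minimum total capacity handled by a penalty term (a real design would run a hydraulic solver such as EPANET here; the diameters, costs, and penalty weight are invented for the sketch).

```python
# GA sketch for pipe-diameter selection with an implicit (penalised) constraint.
import random

DIAMETERS = [100, 150, 200, 250, 300]                  # mm (illustrative)
COST = {100: 50, 150: 80, 200: 120, 250: 170, 300: 230}

def fitness(chrom, min_capacity=900):
    cost = sum(COST[d] for d in chrom)
    capacity = sum(chrom)                              # mock hydraulic check
    penalty = 10 * max(0, min_capacity - capacity)     # implicit constraint
    return -(cost + penalty)                           # maximize = minimize cost

def ga(n_pipes=5, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(DIAMETERS) for _ in range(n_pipes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]               # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_pipes)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                     # mutation
                child[rng.randrange(n_pipes)] = rng.choice(DIAMETERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
print(best, -fitness(best))
```

The penalty term is how the sketch mirrors the paper's implicit evaluation of hydraulic constraints inside the GA itself.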
|
https://arxiv.org/abs/2304.09720
|
Academic Papers
|
svg
|
8ffb2b9525c8bae3f32e1a7eba0305a314ad309c52ec0a63b8bf9b958bf2303b
|
2026-01-16T00:00:00-05:00
|
A Kolmogorov metric embedding for live cell microscopy signaling patterns
|
arXiv:2401.02501v5 Announce Type: replace Abstract: We present a metric embedding that captures spatiotemporal patterns of cell signaling dynamics in 5-D $(x,y,z,channel,time)$ live cell microscopy movies. The embedding uses a metric distance called the normalized information distance (NID) based on Kolmogorov complexity theory, an absolute measure of information content between digital objects. The NID uses statistics of lossless compression to compute a theoretically optimal metric distance between pairs of 5-D movies, requiring no a priori knowledge of expected pattern dynamics, and no training data. The cell signaling structure function (SSF) is defined using a class of metric 3-D image filters that compute at each spatiotemporal cell centroid the voxel intensity configuration of the nucleus w.r.t. the surrounding cytoplasm, or a functional output e.g. velocity. The only parameter is the expected cell radii ($\mu m$). The SSF can be optionally combined with segmentation and tracking algorithms. The resulting lossless compression pipeline represents each 5-D input movie as a single point in a metric embedding space. The utility of a metric embedding follows from Euclidean distance between any points in the embedding space approximating optimally the pattern difference, as measured by the NID, between corresponding pairs of 5-D movies. This is true throughout the embedding space, not only at points corresponding to input images. Examples are shown for synthetic data, for 2-D+time movies of ERK and AKT signaling under different oncogenic mutations in human epithelial (MCF10A) cells, for 3-D MCF10A spheroids under optogenetic manipulation of ERK, and for ERK dynamics during colony differentiation in human stem cells.
|
https://arxiv.org/abs/2401.02501
|
Academic Papers
|
svg
|
f1614bfbd7ce5d8495e34c8a235222801bf6b04317fa6d2c74fff40d493c0378
|
2026-01-16T00:00:00-05:00
|
Shadoks Approach to Knapsack Polygonal Packing
|
arXiv:2403.20123v3 Announce Type: replace Abstract: The 2024 edition of the CG:SHOP Challenge focused on the knapsack polygonal packing problem. Each instance consists of a convex polygon known as the container and a multiset of items, where each item is a simple polygon with an associated integer value. A feasible packing solution places a selection of the items inside the container without overlapping and using only translations. The goal is to achieve a packing that maximizes the total value of the items in the solution. Our approach to win first place is divided into two main steps. First, we generate promising initial solutions using two strategies: one based on integer linear programming and the other on employing a combination of geometric greedy heuristics. In the second step, we enhance these solutions through local search techniques, which involve repositioning items and exploring potential replacements to improve the total value of the packing.
|
https://arxiv.org/abs/2403.20123
|
Academic Papers
|
svg
|
c29180c6d3f8cb843333a1804bc07cb4a74886ebf9dac04ad9f1b71a1ebdae53
|
2026-01-16T00:00:00-05:00
|
Tuning-Free Adaptive Style Incorporation for Structure-Consistent Text-Driven Style Transfer
|
arXiv:2404.06835v2 Announce Type: replace Abstract: In this work, we target the task of text-driven style transfer in the context of text-to-image (T2I) diffusion models. The main challenge is consistent structure preservation while enabling effective style transfer effects. Past approaches in this field directly concatenate the content and style prompts for a prompt-level style injection, leading to unavoidable structure distortions. In this work, we propose a novel solution to the text-driven style transfer task, namely, Adaptive Style Incorporation (ASI), to achieve fine-grained feature-level style incorporation. It consists of the Siamese Cross-Attention (SiCA) to decouple the single-track cross-attention into a dual-track structure to obtain separate content and style features, and the Adaptive Content-Style Blending (AdaBlending) module to couple the content and style information in a structure-consistent manner. Experimentally, our method exhibits much better performance in both structure preservation and stylized effects.
|
https://arxiv.org/abs/2404.06835
|
Academic Papers
|
svg
|
079d0fc0c5c7b81378dd4047ac0843458a6d29e6354253274f4e1c85fdf457ea
|
2026-01-16T00:00:00-05:00
|
SSFL: Discovering Sparse Unified Subnetworks at Initialization for Efficient Federated Learning
|
arXiv:2405.09037v2 Announce Type: replace Abstract: In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training, leveraging parameter saliency scores computed separately on local client data in non-IID scenarios, and then aggregated, to determine a global mask. Only the sparse model weights are trained and communicated each round between the clients and the server. On standard benchmarks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, SSFL consistently improves the accuracy-sparsity trade-off, achieving more than 20\% relative error reduction on CIFAR-10 compared to the strongest sparse baseline, while reducing communication costs by $2 \times$ relative to dense FL. Finally, in a real-world federated learning deployment, SSFL delivers over $2.3 \times$ faster communication time, underscoring its practical efficiency.
|
https://arxiv.org/abs/2405.09037
|
Academic Papers
|
svg
|
54fbf7be35ec0d0846e2b619426d5e1cf8ab78871f9937ca75f3f1bfc604bb57
|
2026-01-16T00:00:00-05:00
|
Jump-teaching: Combating Sample Selection Bias via Temporal Disagreement
|
arXiv:2405.17137v5 Announce Type: replace Abstract: Sample selection is a straightforward technique to combat noisy labels, aiming to prevent mislabeled samples from degrading the robustness of neural networks. However, existing methods mitigate compounding selection bias either by leveraging dual-network disagreement or additional forward propagations, leading to multiplied training overhead. To address this challenge, we introduce $\textit{Jump-teaching}$, an efficient sample selection framework with a debiased model update and a simplified selection criterion. Based on a key observation that a neural network exhibits significant disagreement across different training iterations, Jump-teaching proposes a jump-manner model update strategy to enable self-correction of selection bias by harnessing temporal disagreement, eliminating the need for multi-network or multi-round training. Furthermore, we employ a sample-wise selection criterion built on the intra-variance of a decomposed single loss for fine-grained selection without relying on batch-wise ranking or dataset-wise modeling. Extensive experiments demonstrate that Jump-teaching outperforms state-of-the-art counterparts while achieving a nearly overhead-free selection procedure, which boosts training speed by up to $4.47\times$ and reduces peak memory footprint by $54\%$.
|
https://arxiv.org/abs/2405.17137
|
Academic Papers
|
svg
|
b7e6e9d0fc622021c16b1b6bdd305561cd8fcc68b76547dae5f06348883c29eb
|
2026-01-16T00:00:00-05:00
|
Leveraging Open-Source Large Language Models for encoding Social Determinants of Health using an Intelligent Router
|
arXiv:2405.19631v2 Announce Type: replace Abstract: Social Determinants of Health (SDOH), also known as Health-Related Social Needs (HRSN), play a significant role in patient health outcomes. The Centers for Disease Control and Prevention (CDC) introduced a subset of ICD-10 codes called Z-codes to recognize and measure SDOH. However, Z-codes are infrequently coded in a patient's Electronic Health Record (EHR), and instead, in many cases, need to be inferred from clinical notes. Previous research has shown that large language models (LLMs) show promise on extracting unstructured data from EHRs, but it can be difficult to identify a single model that performs best on varied coding tasks. Further, clinical notes contain protected health information, posing a challenge for the use of closed-source language models from commercial vendors. The identification of open-source LLMs that can be run within health organizations and exhibit high performance on SDOH tasks is an important issue to solve. Here, we introduce an intelligent routing system for SDOH coding that uses a language model router to direct medical record data to open-source LLMs that demonstrate optimal performance on specific SDOH codes. This intelligent routing system exhibits state-of-the-art performance of 96.4% accuracy averaged across 13 codes, including homelessness and food insecurity, outperforming closed models such as GPT-4o. We leveraged a publicly available, deidentified dataset of medical record notes to run the router, but we also introduce a synthetic data generation and validation paradigm to increase the scale of training data without needing privacy-protected medical records. Together, we demonstrate an architecture for intelligent routing of inputs to task-optimal language models to achieve high performance across a set of medical coding sub-tasks.
|
https://arxiv.org/abs/2405.19631
|
Academic Papers
|
svg
|
33e24869679c5f3afedcc20c43b7d04658c923d9b62840c2a6261ba72f3c1025
|
2026-01-16T00:00:00-05:00
|
AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation
|
arXiv:2406.12805v3 Announce Type: replace Abstract: Despite the high-quality results of text-to-image generation, stereotypical biases have been spotted in their generated contents, compromising the fairness of generative models. In this work, we propose to learn adaptive inclusive tokens to shift the attribute distribution of the final generative outputs. Unlike existing de-biasing approaches, our method requires neither explicit attribute specification nor prior knowledge of the bias distribution. Specifically, the core of our method is a lightweight adaptive mapping network, which can customize the inclusive tokens for the concepts to be de-biased, making the tokens generalizable to unseen concepts regardless of their original bias distributions. This is achieved by tuning the adaptive mapping network with a handful of balanced and inclusive samples using an anchor loss. Experimental results demonstrate that our method outperforms previous bias mitigation methods without attribute specification while preserving the alignment between generative results and text descriptions. Moreover, our method achieves comparable performance to models that require specific attributes or editing directions for generation. Extensive experiments showcase the effectiveness of our adaptive inclusive tokens in mitigating stereotypical bias in text-to-image generation. The code will be available at https://github.com/itsmag11/AITTI.
|
https://arxiv.org/abs/2406.12805
|
Academic Papers
|
svg
|
dfcf644c1f37beca5520b38982776c2fb3853762c0d447cb1b0e704b396a4432
|
2026-01-16T00:00:00-05:00
|
Machine Unlearning Fails to Remove Data Poisoning Attacks
|
arXiv:2406.17216v3 Announce Type: replace Abstract: We revisit the efficacy of several practical methods for approximate machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of poisoned data. We experimentally demonstrate that, while existing unlearning methods have been demonstrated to be effective in a number of settings, they fail to remove the effects of data poisoning across a variety of types of poisoning attacks (indiscriminate, targeted, and a newly introduced Gaussian poisoning attack) and models (image classifiers and LLMs), even when granted a relatively large compute budget. In order to precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful to efficiently remove poisoned data without having to retrain, our work suggests that these methods are not yet ``ready for prime time,'' and currently provide limited benefit over retraining.
|
https://arxiv.org/abs/2406.17216
|
Academic Papers
|
svg
|
9660d8c5daca5d37589f6bb0c1616dca1263d1814fce1aeb7ae260b1c02ae17c
|
2026-01-16T00:00:00-05:00
|
Directed univalence in simplicial homotopy type theory
|
arXiv:2407.09146v2 Announce Type: replace Abstract: Simplicial type theory extends homotopy type theory with a directed path type which internalizes the notion of a homomorphism within a type. This concept has significant applications both within mathematics -- where it allows for synthetic (higher) category theory -- and programming languages -- where it leads to a directed version of the structure identity principle. In this work, we construct the first types in simplicial type theory with non-trivial homomorphisms. We extend simplicial type theory with modalities and new reasoning principles to obtain triangulated type theory in order to construct the universe of discrete types $\mathcal{S}$. We prove that homomorphisms in this type correspond to ordinary functions of types, i.e., that $\mathcal{S}$ is directed univalent. The construction of $\mathcal{S}$ is foundational for both of the aforementioned applications of simplicial type theory. We are able to define several crucial examples of categories and to recover important results from category theory. Using $\mathcal{S}$, we are also able to define various types whose usage is guaranteed to be functorial. These provide the first complete examples of the proposed directed structure identity principle.
|
https://arxiv.org/abs/2407.09146
|
Academic Papers
|
svg
|
b7fc324de4b1b19bd423447d61d8c44e61baddb0de37e81ee967ce9af2db61fd
|
2026-01-16T00:00:00-05:00
|
PersonaRAG: Enhancing Retrieval-Augmented Generation Systems with User-Centric Agents
|
arXiv:2407.09394v2 Announce Type: replace Abstract: Large Language Models (LLMs) struggle with generating reliable outputs due to outdated knowledge and hallucinations. Retrieval-Augmented Generation (RAG) models address this by enhancing LLMs with external knowledge, but often fail to personalize the retrieval process. This paper introduces PersonaRAG, a novel framework incorporating user-centric agents to adapt retrieval and generation based on real-time user data and interactions. Evaluated across various question answering datasets, PersonaRAG demonstrates superiority over baseline models, providing tailored answers to user needs. The results suggest promising directions for user-adapted information retrieval systems.
|
https://arxiv.org/abs/2407.09394
|
Academic Papers
|
svg
|
6229e6d23c4640ce5ba59969b81c30e0aed0dee5af8c06de5dbc9ab9c5fc7b4a
|
2026-01-16T00:00:00-05:00
|
Mathematical theory of deep learning
|
arXiv:2407.18384v4 Announce Type: replace Abstract: This book provides an introduction to the mathematical analysis of deep learning. It covers fundamental results in approximation theory, optimization theory, and statistical learning theory, which are the three main pillars of deep neural network theory. Serving as a guide for students and researchers in mathematics and related fields, the book aims to equip readers with foundational knowledge on the topic. It prioritizes simplicity over generality, and presents rigorous yet accessible results to help build an understanding of the essential mathematical concepts underpinning deep learning.
|
https://arxiv.org/abs/2407.18384
|
Academic Papers
|
svg
|
4de3e13afcc87caf801fb0affed385629c149d9033a18f8ed7132d7eef4c6980
|
2026-01-16T00:00:00-05:00
|
Fairness Definitions in Language Models Explained
|
arXiv:2407.18454v3 Announce Type: replace Abstract: Language Models (LMs) have demonstrated exceptional performance across various Natural Language Processing (NLP) tasks. Despite these advancements, LMs can inherit and amplify societal biases related to sensitive attributes such as gender and race, limiting their adoption in real-world applications. Therefore, fairness has been extensively explored in LMs, leading to the proposal of various fairness notions. However, the lack of clear agreement on which fairness definition to apply in specific contexts and the complexity of understanding the distinctions between these definitions can create confusion and impede further progress. To this end, this paper proposes a systematic survey that clarifies the definitions of fairness as they apply to LMs. Specifically, we begin with a brief introduction to LMs and fairness in LMs, followed by a comprehensive, up-to-date overview of existing fairness notions in LMs and the introduction of a novel taxonomy that categorizes these concepts based on their transformer architecture: encoder-only, decoder-only, and encoder-decoder LMs. We further illustrate each definition through experiments, showcasing their practical implications and outcomes. Finally, we discuss current research challenges and open questions, aiming to foster innovative ideas and advance the field. The repository is publicly available online at https://github.com/vanbanTruong/Fairness-in-Large-Language-Models/tree/main/definitions.
|
https://arxiv.org/abs/2407.18454
|
Academic Papers
|
svg
|
ebddb4152961f9e0b9d40cdc747a29585b84a014f292bc48b1d488481def8a46
|
2026-01-16T00:00:00-05:00
|
FiCo-ITR: bridging fine-grained and coarse-grained image-text retrieval for comparative performance analysis
|
arXiv:2407.20114v2 Announce Type: replace Abstract: In the field of Image-Text Retrieval (ITR), recent advancements have leveraged large-scale Vision-Language Pretraining (VLP) for Fine-Grained (FG) instance-level retrieval, achieving high accuracy at the cost of increased computational complexity. For Coarse-Grained (CG) category-level retrieval, prominent approaches employ Cross-Modal Hashing (CMH) to prioritise efficiency, albeit at the cost of retrieval performance. Due to differences in methodologies, FG and CG models are rarely compared directly within evaluations in the literature, resulting in a lack of empirical data quantifying the retrieval performance-efficiency tradeoffs between the two. This paper addresses this gap by introducing the \texttt{FiCo-ITR} library, which standardises evaluation methodologies for both FG and CG models, facilitating direct comparisons. We conduct empirical evaluations of representative models from both subfields, analysing precision, recall, and computational complexity across varying data scales. Our findings offer new insights into the performance-efficiency trade-offs between recent representative FG and CG models, highlighting their respective strengths and limitations. These findings provide the foundation necessary to make more informed decisions regarding model selection for specific retrieval tasks and highlight avenues for future research into hybrid systems that leverage the strengths of both FG and CG approaches.
|
https://arxiv.org/abs/2407.20114
|
Academic Papers
|
svg
|
0c05ace63d1f2ecddaed1d6d286e6acd6c9b4a84ac6a7a094551edde777fb5cb
|
2026-01-16T00:00:00-05:00
|
Controllable Financial Market Generation with Diffusion Guided Meta Agent
|
arXiv:2408.12991v3 Announce Type: replace Abstract: Generative modeling has transformed many fields, such as language and visual modeling, while its application in financial markets remains under-explored. As the minimal unit within a financial market is an order, order-flow modeling represents a fundamental generative financial task. However, current approaches often yield unsatisfactory fidelity in generating order flow, and their generation lacks controllability, thereby limiting their practical applications. In this paper, we formulate the challenge of controllable financial market generation, and propose a Diffusion Guided Meta Agent (DigMA) model to address it. Specifically, we employ a conditional diffusion model to capture the dynamics of the market state represented by time-evolving distribution parameters of the mid-price return rate and the order arrival rate, and we define a meta agent with financial economic priors to generate orders from the corresponding distributions. Extensive experimental results show that DigMA achieves superior controllability and generation fidelity. Moreover, we validate its effectiveness as a generative environment for downstream high-frequency trading tasks and its computational efficiency.
|
https://arxiv.org/abs/2408.12991
|
Academic Papers
|
svg
|
dd4835c77f450207bb30d5aaf36a4586fe76307f99061cbe647f3938f24a3510
|
2026-01-16T00:00:00-05:00
|
Deep learning-based ecological analysis of camera trap images is impacted by training data quality and quantity
|
arXiv:2408.14348v3 Announce Type: replace Abstract: Large image collections generated from camera traps offer valuable insights into species richness, occupancy, and activity patterns, significantly aiding biodiversity monitoring. However, the manual processing of these datasets is time-consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image labelling, but the impact of classification error on key ecological metrics remains unclear. Here, we analyse data from camera trap collections in an African savannah (82,300 labelled images, 47 species) and an Asian sub-tropical dry forest (40,308 labelled images, 29 species) to compare ecological metrics derived from expert-generated species identifications with those generated by deep learning classification models. We specifically assess the impact of deep learning model architecture, proportion of label noise in the training data, and the size of the training dataset on three key ecological metrics: species richness, occupancy, and activity patterns. We found that predictions of species richness derived from deep neural networks closely match those calculated from expert labels and remained resilient to up to 10% noise in the training dataset (mis-labelled images) and a 50% reduction in the training dataset size. We found that our choice of deep learning model architecture (ResNet vs ConvNext-T) or depth (ResNet18, 50, 101) did not impact predicted ecological metrics. In contrast, species-specific metrics were more sensitive; less common and visually similar species were disproportionately affected by a reduction in deep neural network accuracy, with consequences for occupancy and diel activity pattern estimates. To ensure the reliability of their findings, practitioners should prioritize creating large, clean training sets and account for class imbalance across species over exploring numerous deep learning model architectures.
|
https://arxiv.org/abs/2408.14348
|
Academic Papers
|
svg
|
54ca79a9e57afd1d10fb3533928bfe84a7db1ced7b757bc3b9f4d86f61c26bad
|
2026-01-16T00:00:00-05:00
|
Uniform Approximation of Eigenproblems of a Large-Scale Parameter-Dependent Hermitian Matrix
|
arXiv:2409.05791v4 Announce Type: replace Abstract: We consider the uniform approximation of the smallest eigenvalue of a large parameter-dependent Hermitian matrix by that of a smaller counterpart obtained through projections. The projection subspaces are constructed iteratively by means of a greedy strategy; at each iteration the parameter where a surrogate error is maximal is computed and the eigenvectors associated with the smallest eigenvalues at the maximizing parameter value are added to the subspace. Unlike the classical approaches, such as the successive constraint method, that maximize such surrogate errors over a discrete and finite set, we maximize the surrogate error over the continuum of all permissible parameter values globally. We formally prove that the projected eigenvalue function converges to the actual eigenvalue function uniformly. In the second part, we focus on the uniform approximation of the smallest singular value of a large parameter-dependent matrix, in case it is non-Hermitian. On numerical examples, including those arising from discretizations of parametric PDEs, the proposed frameworks reduce the size of the large matrix-valued function drastically, while retaining high accuracy over all permissible parameter values.
|
https://arxiv.org/abs/2409.05791
|
Academic Papers
|
svg
|
224e32ce3534ae27d8b1d1de7011ec91d333d1962f0e5c6244e2d0c4c8377ec2
|
2026-01-16T00:00:00-05:00
|
Machine Learning and Theory Ladenness -- A Phenomenological Account
|
arXiv:2409.11277v2 Announce Type: replace Abstract: We provide an analysis of theory ladenness in machine learning in science, where "theory", that we call "domain theory", refers to the domain knowledge of the scientific discipline where ML is used. By constructing an account of ML models based on a comparison with phenomenological models, we show, against recent trends in philosophy of science, that ML model-building is mostly indifferent to domain theory, even if the model remains theory laden in a weak sense, which we call theory infection. These claims, we argue, have far-reaching consequences for the transferability of ML across scientific disciplines, and shift the priorities of the debate on theory ladenness in ML from descriptive to normative.
|
https://arxiv.org/abs/2409.11277
|
Academic Papers
|
svg
|
41e151517683e1704e76d3c21446c318f3874a0b17d8b8d4596ca8f4fa279dbb
|
2026-01-16T00:00:00-05:00
|
Debiased Orthogonal Boundary-Driven Efficient Noise Mitigation
|
arXiv:2410.01944v2 Announce Type: replace Abstract: Mitigating the detrimental effects of noisy labels on the training process has become increasingly critical, as obtaining entirely clean or human-annotated samples for large-scale pre-training tasks is often impractical. Nonetheless, existing noise mitigation methods often encounter limitations in practical applications due to their task-specific design, model dependency, and significant computational overhead. In this work, we exploit the properties of high-dimensional orthogonality to identify a robust and effective boundary in cone space for separating clean and noisy samples. Building on this, we propose One-Step Anti-noise (OSA), a model-agnostic noisy label mitigation paradigm that employs an estimator model and a scoring function to assess the noise level of input pairs through just one-step inference. We empirically validate the superiority of OSA, demonstrating its enhanced training robustness, improved task transferability, streamlined deployment, and reduced computational overhead across diverse benchmarks, models, and tasks. Our code is released at https://github.com/leolee99/OSA.
|
https://arxiv.org/abs/2410.01944
|
Academic Papers
|
svg
|
d2c3020ba58a0eaf197ce16984394895e67fa1c807354f1d3a01cb4b3d6ccb86
|
2026-01-16T00:00:00-05:00
|
Permissive Information-Flow Analysis for Large Language Models
|
arXiv:2410.03055v3 Announce Type: replace Abstract: Large Language Models (LLMs) are rapidly becoming commodity components of larger software systems. This poses natural security and privacy problems: poisoned data retrieved from one component can change the model's behavior and compromise the entire system, including coercing the model to spread confidential data to untrusted components. One promising approach is to tackle this problem at the system level via dynamic information flow (aka taint) tracking. Unfortunately, this approach of propagating the most restrictive input label to the output is too conservative for applications where LLMs operate on inputs retrieved from diverse sources. In this paper, we propose a novel, more permissive approach to propagate information flow labels through LLM queries. The key idea behind our approach is to propagate only the labels of the samples that were influential in generating the model output and to eliminate the labels of unnecessary inputs. We implement and investigate the effectiveness of two variations of this approach, based on (i) prompt-based retrieval augmentation, and (ii) a $k$-nearest-neighbors language model. We compare these with a baseline that uses introspection to predict the output label. Our experimental results in an LLM agent setting show that the permissive label propagator improves over the baseline in more than 85% of the cases, which underscores the practicality of our approach.
|
https://arxiv.org/abs/2410.03055
|
Academic Papers
|
svg
|
29ab091e98f57b4f38e56cb77a174f5f479a13b017cc0f5efebc07dbe6dfb3d5
|
2026-01-16T00:00:00-05:00
|
Finite Element Approximations of Stochastic Linear Schr\"{o}dinger equation driven by additive Wiener noise
|
arXiv:2410.06006v2 Announce Type: replace Abstract: In this article, we have analyzed semi-discrete finite element approximations of the stochastic linear Schr\"{o}dinger equation in a bounded convex polygonal domain driven by additive Wiener noise. We use the finite element method for spatial discretization and derive an error estimate with respect to the discretization parameter of the finite element approximation. Numerical experiments have also been performed to support the theoretical bounds.
|
https://arxiv.org/abs/2410.06006
|
Academic Papers
|
svg
|
ae87aec06a2e33bdbcfb54486ef9b3331d33487f0cb21a568abcff027d898439
|
2026-01-16T00:00:00-05:00
|
Developer Needs and Feasible Features for AI Assistants in IDEs
|
arXiv:2410.08676v3 Announce Type: replace Abstract: Despite the increasing presence of AI assistants in Integrated Development Environments (IDEs), it remains unclear what different groups of developers actually need from these tools and which features are likely to be implemented in practice. To investigate this gap, we conducted a two-phase study. First, we interviewed 35 professional developers from three user groups (Adopters, Churners, and Non-Users) to uncover unmet needs and expectations. Our analysis revealed five key areas of need distinctly distributed across practitioners' groups: Technology Improvement, Interaction, Customization, Simplifying Skill Building, and Programming Tasks. We then examined the feasibility of addressing selected needs through an internal prediction market involving 102 practitioners. The results demonstrate a strong alignment between the developers' needs and the practitioners' judgment for features focused on implementation and context awareness. However, features related to proactivity and maintenance remain both underestimated and technically unaddressed. Our findings reveal gaps in current AI support and provide practical directions for developing more effective and sustainable in-IDE AI systems.
|
https://arxiv.org/abs/2410.08676
|
Academic Papers
|
svg
|
f0c6d5329a0f85d5b0eedf854ceb3d1b4e66d721cbe1f3c79e3860f35d3eac82
|
2026-01-16T00:00:00-05:00
|
CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning
|
arXiv:2410.10336v2 Announce Type: replace Abstract: Mathematical reasoning remains a significant challenge for large language models (LLMs), despite progress in prompting techniques such as Chain-of-Thought (CoT). We present **Chain of Mathematically Annotated Thought (CoMAT)**, which enhances reasoning through two stages: *Symbolic Conversion* (converting natural language queries into symbolic form) and *Reasoning Execution* (deriving answers from symbolic representations). CoMAT operates entirely with a single LLM and without external solvers. Across four LLMs, CoMAT outperforms traditional CoT on six out of seven benchmarks, achieving gains of 4.48% on MMLU-Redux (MATH) and 4.58% on GaoKao MCQ. In addition to improved performance, CoMAT ensures faithfulness and verifiability, offering a transparent reasoning process for complex mathematical tasks.
|
https://arxiv.org/abs/2410.10336
|
Academic Papers
|
svg
|
7a4b354d3507d88e67b8d8e973697100b5697926efda5993e19885ab4d7e02c4
|
2026-01-16T00:00:00-05:00
|
Learning Quadrotor Control From Visual Features Using Differentiable Simulation
|
arXiv:2410.15979v3 Announce Type: replace Abstract: The sample inefficiency of reinforcement learning (RL) remains a significant challenge in robotics. RL requires large-scale simulation and can still cause long training times, slowing research and innovation. This issue is particularly pronounced in vision-based control tasks where reliable state estimates are not accessible. Differentiable simulation offers an alternative by enabling gradient back-propagation through the dynamics model, providing low-variance analytical policy gradients and, hence, higher sample efficiency. However, its usage for real-world robotic tasks has so far been limited. This work demonstrates the great potential of differentiable simulation for learning quadrotor control. We show that training in differentiable simulation significantly outperforms model-free RL in terms of both sample efficiency and training time, allowing a policy to learn to recover a quadrotor in seconds when providing vehicle states and in minutes when relying solely on visual features. The key to our success is two-fold. First, the use of a simple surrogate model for gradient computation greatly accelerates training without sacrificing control performance. Second, combining state representation learning with policy learning enhances convergence speed in tasks where only visual features are observable. These findings highlight the potential of differentiable simulation for real-world robotics and offer a compelling alternative to conventional RL approaches.
|
https://arxiv.org/abs/2410.15979
|
Academic Papers
|
svg
|
5b1eb1450f5b36416039d139518be93ed1afbbe9d343c1a74ec97626e03d01a0
|
2026-01-16T00:00:00-05:00
|
Adversarial Multi-Agent Reinforcement Learning for Proactive False Data Injection Detection
|
arXiv:2411.12130v2 Announce Type: replace Abstract: Smart inverters are instrumental in the integration of distributed energy resources into the electric grid. Such inverters rely on communication layers for continuous control and monitoring, potentially exposing them to cyber-physical attacks such as false data injection attacks (FDIAs). We propose to construct a defense strategy against a priori unknown FDIAs with a multi-agent reinforcement learning (MARL) framework. The first agent is an adversary that simulates and discovers various FDIA strategies, while the second agent is a defender in charge of detecting and locating FDIAs. This approach enables the defender to be trained against new FDIAs continuously generated by the adversary. In addition, we show that the detection skills of an MARL defender can be combined with those of a supervised offline defender through a transfer learning approach. Numerical experiments conducted on a distribution and transmission system demonstrate that: a) the proposed MARL defender outperforms the offline defender against adversarial attacks; b) the transfer learning approach makes the MARL defender effective against both synthetic and unseen FDIAs.
|
https://arxiv.org/abs/2411.12130
|
Academic Papers
|
svg
|
cb064e5a23345d2e5502f82fdb707cd44e054636aa24148bede0ff4bb7d5800d
|
2026-01-16T00:00:00-05:00
|
The Hatching-Box: A Novel System for Automated Monitoring and Quantification of Drosophila melanogaster Developmental Behavior
|
arXiv:2411.15390v4 Announce Type: replace Abstract: In this paper we propose the Hatching-Box, a novel imaging and analysis system to automatically monitor and quantify the developmental behavior of Drosophila in standard rearing vials and during regular rearing routines, rendering explicit experiments obsolete. This is achieved by combining custom tailored imaging hardware with dedicated detection and tracking algorithms, enabling the quantification of larvae, filled/empty pupae and flies over multiple days. Given the affordable and reproducible design of the Hatching-Box in combination with our generic client/server-based software, the system can easily be scaled to monitor an arbitrary number of rearing vials simultaneously. We evaluated our system on a curated image dataset comprising nearly 470,000 annotated objects and performed several studies on real-world experiments. We successfully reproduced results from well-established circadian experiments by comparing the eclosion periods of wild type flies to the clock mutants $\textit{per}^{short}$, $\textit{per}^{long}$ and $\textit{per}^0$ without involvement of any manual labor. Furthermore, we show that the Hatching-Box is able to extract additional information about group behavior as well as to reconstruct the whole life-cycle of the individual specimens. These results not only demonstrate the applicability of our system for long-term experiments but also indicate its benefits for automated monitoring in the general cultivation process.
|
https://arxiv.org/abs/2411.15390
|
Academic Papers
|
svg
|
a402b2544c858381785959a26515b28774ea049ecad63ce45d480ea2eb3676f1
|
2026-01-16T00:00:00-05:00
|
VICON: Vision In-Context Operator Networks for Multi-Physics Fluid Dynamics Prediction
|
arXiv:2411.16063v5 Announce Type: replace Abstract: In-Context Operator Networks (ICONs) have demonstrated the ability to learn operators across diverse partial differential equations using few-shot, in-context learning. However, existing ICONs process each spatial point as an individual token, severely limiting computational efficiency when handling dense data in higher spatial dimensions. We propose Vision In-Context Operator Networks (VICON), which integrates vision transformer architectures to efficiently process 2D data through patch-wise operations while preserving ICON's adaptability to multiphysics systems and varying timesteps. Evaluated across three fluid dynamics benchmarks, VICON significantly outperforms state-of-the-art baselines: DPOT and MPP, reducing the averaged last-step rollout error by 37.9% compared to DPOT and 44.7% compared to MPP, while requiring only 72.5% and 34.8% of their respective inference times. VICON naturally supports flexible rollout strategies with varying timestep strides, enabling immediate deployment in imperfect measurement systems where sampling frequencies may differ or frames might be dropped - common challenges in real-world settings - without requiring retraining or interpolation. In these realistic scenarios, VICON exhibits remarkable robustness, experiencing only 24.41% relative performance degradation compared to 71.37%-74.49% degradation in baseline methods, demonstrating its versatility for deployment in realistic applications. Our scripts for processing datasets and code are publicly available at https://github.com/Eydcao/VICON.
|
https://arxiv.org/abs/2411.16063
|
Academic Papers
|
svg
|
3d5d4671037aedd56a85ee16dcadc46f93f925662a4b2184217647b3789d383c
|
2026-01-16T00:00:00-05:00
|
Adaptive Querying for Reward Learning from Human Feedback
|
arXiv:2412.07990v2 Announce Type: replace Abstract: Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety. Existing approaches typically consider a single querying (interaction) format when seeking human feedback and do not leverage multiple modes of user interaction with a robot. We examine how to learn a penalty function associated with unsafe behaviors using multiple forms of human feedback, by optimizing both the query state and feedback format. Our proposed adaptive feedback selection is an iterative, two-phase approach which first selects critical states for querying, and then uses information gain to select a feedback format for querying across the sampled critical states. The feedback format selection also accounts for the cost and probability of receiving feedback in a certain format. Our experiments in simulation demonstrate the sample efficiency of our approach in learning to avoid undesirable behaviors. The results of our user study with a physical robot highlight the practicality and effectiveness of adaptive feedback selection in seeking informative, user-aligned feedback that accelerates learning. Experiment videos, code and appendices can be found on our website: https://tinyurl.com/AFS-learning.
|
https://arxiv.org/abs/2412.07990
|
Academic Papers
|
svg
|
901a8647301b74f69aed6452e7c1cc9b15316e09b848ab7b5bee74c24844f67b
|
2026-01-16T00:00:00-05:00
|
Singularity-Free Guiding Vector Field over B\'ezier's Curves Applied to Rovers Path Planning and Path Following
|
arXiv:2412.13033v2 Announce Type: replace Abstract: This paper presents a guidance algorithm for solving the problem of following parametric paths, as well as a curvature-varying speed setpoint for land-based car-type wheeled mobile robots (WMRs). The guidance algorithm relies on Singularity-Free Guiding Vector Fields (SF-GVF). This novel GVF approach expands the desired robot path and the guiding vector field to a higher-dimensional space, in which an angular control function can be found to ensure global asymptotic convergence to the desired parametric path while avoiding field singularities. In SF-GVF, paths should follow a parametric definition. This feature makes B\'ezier curves attractive for defining the robot's desired path. The curvature-varying speed setpoint, combined with the guidance algorithm, eases the convergence to the path when physical restrictions exist, such as a minimal turning radius or maximal lateral acceleration. We provide theoretical results, simulations, and outdoor experiments using a WMR platform assembled with off-the-shelf components.
|
https://arxiv.org/abs/2412.13033
|
Academic Papers
|
svg
|
03bf5a4aca1d8d16ad87277c50d327938917a978b7bb701a2d6fe53619988c4b
|
2026-01-16T00:00:00-05:00
|
Adaptive Economic Model Predictive Control: Performance Guarantees for Nonlinear Systems
|
arXiv:2412.13046v3 Announce Type: replace Abstract: We consider the problem of optimizing the economic performance of nonlinear constrained systems subject to uncertain time-varying parameters and bounded disturbances. In particular, we propose an adaptive economic model predictive control (MPC) framework that: (i) directly minimizes transient economic costs, (ii) addresses parametric uncertainty through online model adaptation, (iii) determines optimal setpoints online, and (iv) ensures robustness by using a tube-based approach. The proposed design ensures recursive feasibility, robust constraint satisfaction, and a transient performance bound. In case the disturbances have a finite energy and the parameter variations have a finite path length, the asymptotic average performance is (approximately) not worse than the performance obtained when operating at the best reachable steady-state. We highlight performance benefits in a numerical example involving a chemical reactor with unknown time-invariant and time-varying parameters.
|
https://arxiv.org/abs/2412.13046
|
Academic Papers
|
svg
|
121af327241de97be9020a66c620f2ea961773b17f76b4969d0772f773da2141
|
2026-01-16T00:00:00-05:00
|
MatchMiner-AI: An Open-Source Solution for Cancer Clinical Trial Matching
|
arXiv:2412.17228v3 Announce Type: replace Abstract: Background: Clinical trials are essential to advancing cancer treatments, yet fewer than 10% of adults with cancer enroll in trials, and many studies fail to meet accrual targets. Artificial intelligence (AI) could improve identification of appropriate trials for patients, but sharing AI models trained on protected health information remains difficult due to privacy restrictions. Methods: We developed MatchMiner-AI, an open-source platform for clinical trial search and ranking trained entirely on synthetic electronic health record (EHR) data. The system extracts core clinical criteria from longitudinal EHR text and embeds patient summaries and trial "spaces" (target populations) in a shared vector space for rapid retrieval. It then applies custom text classifiers to assess whether each patient-trial pairing is a clinically reasonable consideration. The pipeline was evaluated on real clinical data. Results: Across retrospective evaluations on real EHR data, the fine-tuned pipeline outperformed baseline text-embedding approaches. For trial-enrolled patients, 90% of the top 20 recommended trials were relevant matches (compared to 17% for the baseline model). Similar improvements were noted for patients who received standard-of-care treatments (88% of the top 20 matches were relevant, compared to 14% for baseline). Text classification modules demonstrated strong discrimination (AUROC 0.94-0.98) for evaluating candidate patient-trial space pair eligibility; incorporating these components consistently increased mean average precision to ~ 0.90 across patient- and trial-centric use cases. Synthetic training data, model weights, inference tools, and demonstration frontends are publicly available. Conclusions: MatchMiner-AI demonstrates an openly accessible, privacy-preserving approach to distilling a clinical trial matching AI pipeline from LLM-generated synthetic EHR data.
|
https://arxiv.org/abs/2412.17228
|
Academic Papers
|
svg
|
016f21967aa4078f0db29315eae1d4ee0fb26a37e322b2c146ff82fbdcd61a7f
|
2026-01-16T00:00:00-05:00
|
Sampling-Based Constrained Motion Planning with Products of Experts
|
arXiv:2412.17462v2 Announce Type: replace Abstract: We present a novel approach to enhance the performance of sampling-based Model Predictive Control (MPC) in constrained optimization by leveraging products of experts. Our methodology divides the main problem into two components: one focused on optimality and the other on feasibility. By combining the solutions from each component, represented as distributions, we apply products of experts to implement a project-then-sample strategy. In this strategy, the optimality distribution is projected into the feasible area, allowing for more efficient sampling. This approach contrasts with the traditional sample-then-project and naive sample-then-reject methods, leading to more diverse exploration and reducing the accumulation of samples on the boundaries. We demonstrate an effective implementation of this principle using a tensor train-based distribution model, which is characterized by its non-parametric nature, ease of combination with other distributions at the task level, and straightforward sampling technique. We adapt existing tensor train models to suit this purpose and validate the efficacy of our approach through experiments in various tasks, including obstacle avoidance, non-prehensile manipulation, and tasks involving staying in a restricted volume. Our experimental results demonstrate that the proposed method consistently outperforms known baselines, providing strong empirical support for its effectiveness. Sample codes for this project are available at https://github.com/idiap/smpc_poe.
|
https://arxiv.org/abs/2412.17462
|
Academic Papers
|
svg
|
8213dbb0057138ddb1a6a33177f19ee8b4357acd05b8b6e45ced288bb0be9220
|
2026-01-16T00:00:00-05:00
|
Symmetrization Weighted Binary Cross-Entropy: Modeling Perceptual Asymmetry for Human-Consistent Neural Edge Detection
|
arXiv:2501.13365v3 Announce Type: replace Abstract: Edge detection (ED) is a fundamental perceptual process in computer vision, forming the structural basis for high-level reasoning tasks such as segmentation, recognition, and scene understanding. Despite substantial progress achieved by deep neural networks, most ED models attain high numerical accuracy but fail to produce visually sharp and perceptually consistent edges, thereby limiting their reliability in intelligent vision systems. To address this issue, this study introduces the \textit{Symmetrization Weighted Binary Cross-Entropy (SWBCE)} loss, a perception-inspired formulation that extends the conventional WBCE by incorporating prediction-guided symmetry. SWBCE explicitly models the perceptual asymmetry in human edge recognition, wherein edge decisions require stronger evidence than non-edge ones, aligning the optimization process with human perceptual discrimination. The resulting symmetric learning mechanism jointly enhances edge recall and suppresses false positives, achieving a superior balance between quantitative accuracy and perceptual fidelity. Extensive experiments across multiple benchmark datasets and representative ED architectures demonstrate that SWBCE can outperform existing loss functions in both numerical evaluation and visual quality. Particularly with the HED-EES model, the SSIM can be improved by about 15% on BRIND, and in all experiments, training by SWBCE consistently obtains the best perceptual results. Beyond edge detection, the proposed perceptual loss offers a generalizable optimization principle for soft computing and neural learning systems, particularly in scenarios where asymmetric perceptual reasoning plays a critical role.
|
https://arxiv.org/abs/2501.13365
|
Academic Papers
|
svg
|
381a6eb5e19f952c18629cb1d17c00ed208d60950d7c525db394f3c837797926
|
2026-01-16T00:00:00-05:00
|
GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy Algorithm
|
arXiv:2501.14230v4 Announce Type: replace Abstract: Deep neural networks are highly vulnerable to adversarial examples, which are inputs with small, carefully crafted perturbations that cause misclassification -- making adversarial attacks a critical tool for evaluating robustness. Existing black-box methods typically entail a trade-off between precision and flexibility: pixel-sparse attacks (e.g., single- or few-pixel attacks) provide fine-grained control but lack adaptability, whereas patch- or frequency-based attacks improve efficiency or transferability, but at the cost of producing larger and less precise perturbations. We present GreedyPixel, a fine-grained black-box attack method that performs brute-force-style, per-pixel greedy optimization guided by a surrogate-derived priority map and refined by means of query feedback. It evaluates each coordinate directly without any gradient information, guaranteeing monotonic loss reduction and convergence to a coordinate-wise optimum, while also yielding near white-box-level precision and pixel-wise sparsity and perceptual quality. On the CIFAR-10 and ImageNet datasets, spanning convolutional neural networks (CNNs) and Transformer models, GreedyPixel achieved state-of-the-art success rates with visually imperceptible perturbations, effectively bridging the gap between black-box practicality and white-box performance. The implementation is available at https://github.com/azrealwang/greedypixel.
|
https://arxiv.org/abs/2501.14230
|
Academic Papers
|
svg
|
5d3fee99c3ee25945eea7dc2a905b19e5950ba8645b39bc6e25f9d36f64169da
|
2026-01-16T00:00:00-05:00
|
Robust LLM Alignment via Distributionally Robust Direct Preference Optimization
|
arXiv:2502.01930v4 Announce Type: replace Abstract: A major challenge in aligning large language models (LLMs) with human preferences is the issue of distribution shift. LLM alignment algorithms rely on static preference datasets, assuming that they accurately represent real-world user preferences. However, user preferences vary significantly across geographical regions, demographics, linguistic patterns, and evolving cultural trends. This preference distribution shift leads to catastrophic alignment failures in many real-world applications. We address this problem using the principled framework of distributionally robust optimization, and develop two novel distributionally robust direct preference optimization (DPO) algorithms, namely, Wasserstein DPO (WDPO) and Kullback-Leibler DPO (KLDPO). We characterize the sample complexity of learning the optimal policy parameters for WDPO and KLDPO. Moreover, we propose scalable gradient descent-style learning algorithms by developing suitable approximations for the challenging minimax loss functions of WDPO and KLDPO. Our empirical experiments using benchmark data sets and LLMs demonstrate the superior performance of WDPO and KLDPO in substantially improving the alignment when there is a preference distribution shift.
|
https://arxiv.org/abs/2502.01930
|
Academic Papers
|
svg
|
cc394a7980847e87968d125ae5f6ad600db65a32fca3fd55d2f8db7ccfaeebfa
|
2026-01-16T00:00:00-05:00
|
CleanSurvival: Automated data preprocessing for time-to-event models using reinforcement learning
|
arXiv:2502.03946v2 Announce Type: replace Abstract: Data preprocessing is a critical yet frequently neglected aspect of machine learning, despite its potentially significant impact on model performance. While automated machine learning pipelines are starting to recognize and integrate data preprocessing into their solutions for classification and regression tasks, this integration is lacking for more specialized tasks like survival or time-to-event models. As a result, survival analysis not only faces the general challenges of data preprocessing but also suffers from the lack of tailored, automated solutions in this area. To address this gap, this paper presents 'CleanSurvival', a reinforcement-learning-based solution for optimizing preprocessing pipelines, extended specifically for survival analysis. The framework can handle continuous and categorical variables, using Q-learning to select which combination of data imputation, outlier detection and feature extraction techniques achieves optimal performance for a Cox, random forest, neural network or user-supplied time-to-event model. The package is available on GitHub: https://github.com/datasciapps/CleanSurvival. Experimental benchmarks on real-world datasets show that the Q-learning-based data preprocessing results in superior predictive performance to standard approaches, finding such a model up to 10 times faster than undirected random grid search. Furthermore, a simulation study demonstrates its effectiveness under different types and levels of missingness and noise in the data.
|
https://arxiv.org/abs/2502.03946
|
Academic Papers
|
svg
|
7d37015995295bb20af14279172b5094b6c160e737e8c275abbe1335938c393b
|
2026-01-16T00:00:00-05:00
|
Scalable Oversight for Superhuman AI via Recursive Self-Critiquing
|
arXiv:2502.04675v4 Announce Type: replace Abstract: As AI capabilities increasingly surpass human proficiency in complex tasks, current alignment techniques, including SFT and RLHF, face fundamental challenges in ensuring reliable oversight. These methods rely on direct human assessment and become impractical when AI outputs exceed human cognitive thresholds. In response to this challenge, we explore two hypotheses: (1) \textit{Critique of critique can be easier than critique itself}, extending the widely-accepted observation that verification is easier than generation to the critique domain, as critique itself is a specialized form of generation; (2) \textit{This difficulty relationship holds recursively}, suggesting that when direct evaluation is infeasible, performing higher-order critiques (e.g., critique of critique of critique) offers a more tractable supervision pathway. We conduct Human-Human, Human-AI, and AI-AI experiments to investigate the potential of recursive self-critiquing for AI supervision. Our results highlight recursive critique as a promising approach for scalable AI oversight.
|
https://arxiv.org/abs/2502.04675
|
Academic Papers
|
svg
|
222c81f58c3bcc820c8861854bc993f53410587872cfc65a86bdc073b38f8eb6
|
2026-01-16T00:00:00-05:00
|
Beautiful Images, Toxic Words: Understanding and Addressing Offensive Text in Generated Images
|
arXiv:2502.05066v4 Announce Type: replace Abstract: State-of-the-art Diffusion Models (DMs) produce highly realistic images. While prior work has successfully mitigated Not Safe For Work (NSFW) content in the visual domain, we identify a novel threat: the generation of NSFW text embedded within images. This includes offensive language, such as insults, racial slurs, and sexually explicit terms, posing significant risks to users. We show that all state-of-the-art DMs (e.g., SD3, SDXL, Flux, DeepFloyd IF) are vulnerable to this issue. Through extensive experiments, we demonstrate that existing mitigation techniques, effective for visual content, fail to prevent harmful text generation while substantially degrading benign text generation. As an initial step toward addressing this threat, we introduce a novel fine-tuning strategy that targets only the text-generation layers in DMs. Therefore, we construct a safety fine-tuning dataset by pairing each NSFW prompt with two images: one with the NSFW term, and another where that term is replaced with a carefully crafted benign alternative while leaving the image unchanged otherwise. By training on this dataset, the model learns to avoid generating harmful text while preserving benign content and overall image quality. Finally, to advance research in the area, we release ToxicBench, an open-source benchmark for evaluating NSFW text generation in images. It includes our curated fine-tuning dataset, a set of harmful prompts, new evaluation metrics, and a pipeline that assesses both NSFW-ness and text and image quality. Our benchmark aims to guide future efforts in mitigating NSFW text generation in text-to-image models, thereby contributing to their safe deployment.
|
https://arxiv.org/abs/2502.05066
|
Academic Papers
|
svg
|
4747487242c03d42c3272bfbf1f5fc89aa5efa8f8e257d270abcb50f564e872f
|
2026-01-16T00:00:00-05:00
|
Online Scheduling for LLM Inference with KV Cache Constraints
|
arXiv:2502.07115v5 Announce Type: replace Abstract: Large Language Model (LLM) inference, where a trained model generates text one word at a time in response to user prompts, is a computationally intensive process requiring efficient scheduling to optimize latency and resource utilization. A key challenge in LLM inference is the management of the Key-Value (KV) cache, which reduces redundant computations but introduces memory constraints. In this work, we model LLM inference with KV cache constraints theoretically and propose a novel batching and scheduling algorithm that minimizes inference latency while effectively managing the KV cache's memory. More specifically, we make the following contributions. First, to evaluate the performance of online algorithms for scheduling in LLM inference, we introduce a hindsight optimal benchmark, formulated as an integer program that computes the minimum total inference latency under full future information. Second, we prove that no deterministic online algorithm can achieve a constant competitive ratio when the arrival process is arbitrary. Third, motivated by the computational intractability of solving the integer program at scale, we propose a polynomial-time online scheduling algorithm and show that under certain conditions it can achieve a constant competitive ratio. We also demonstrate our algorithm's strong empirical performance by comparing it to the hindsight optimal in a synthetic dataset. Finally, we conduct empirical evaluations on a real-world public LLM inference dataset, simulating the Llama2-70B model on A100 GPUs, and show that our algorithm significantly outperforms the benchmark algorithms. Overall, our results offer a path toward more sustainable and cost-effective LLM deployment.
|
https://arxiv.org/abs/2502.07115
|
Academic Papers
|
svg
|
494f14388f2967a90b19fae6b8a1a3b941abe360fd64293274375cfa50bd90d0
|
2026-01-16T00:00:00-05:00
|
Curvature Tuning: Provable Training-free Model Steering From a Single Parameter
|
arXiv:2502.07783v5 Announce Type: replace Abstract: The scaling of model and data sizes has reshaped the AI landscape, establishing finetuning pretrained models as the standard paradigm for solving downstream tasks. However, dominant finetuning methods typically rely on weight adaptation, often lack interpretability, and depend on heuristically chosen hyperparameters. In this paper, we take a different perspective and shift the focus from weights to activation functions, viewing them through the lens of spline operators. We propose Curvature Tuning (CT), an interpretable and principled steering method that modulates a model's decision boundary by injecting a single hyperparameter into its activation functions. We show that CT provably adjusts model decision boundary curvature and, more fundamentally, projects a model onto a space of smooth functions-thereby complementing current finetuning methods, whose effect lies primarily in feature adaptation. Making this hyperparameter trainable gives rise to a novel and highly parameter-efficient finetuning method. Empirically, CT improves both generalization and robustness. For example, it boosts downstream accuracy of ResNet-50/152 by 8.59%/8.34% over linear probing and 4.64%/1.70% over LoRA across 12 datasets, and improves robust accuracy on the $\ell_\infty$ benchmark from RobustBench by 1032.64%/1494.46%. Our code is available at https://github.com/Leon-Leyang/curvature-tuning.
|
https://arxiv.org/abs/2502.07783
|
Academic Papers
|
svg
|
8396aec986185ae8f0f8254d7184a05cc44d6ca563c01b9389486038d5117882
|
2026-01-16T00:00:00-05:00
|
When Should a Principal Delegate to an Agent in Selection Processes?
|
arXiv:2502.07792v2 Announce Type: replace Abstract: Decision-makers in high-stakes selection processes often face a fundamental choice: whether to make decisions themselves or to delegate authority to another entity whose incentives may only be partially aligned with their own. Such delegation arises naturally in settings like graduate admissions, hiring, or promotion, where a principal (e.g. a professor or worker) either reviews applicants personally or delegates decisions to an agent (e.g. a committee or boss) that evaluates applicants efficiently, but according to a potentially misaligned objective. We study this trade-off in a stylized selection model with noisy signals. The principal incurs a cost for selecting applicants, but can evaluate applicants based on their fit with a project, team, workplace, etc. In contrast, the agent evaluates applicants solely on the basis of a signal that correlates with the principal's metric, but this comes at no cost to the principal. Our goal is to characterize when delegation is beneficial versus when decision-making should remain with the principal. We compare these regimes along three dimensions: (i) the principal's utility, (ii) the quality of the selected applicants according to the principal's metric, and (iii) the fairness of selection outcomes under disparate signal qualities.
|
https://arxiv.org/abs/2502.07792
|
Academic Papers
|
svg
|
754330a80715b811720fe3e17bd05788d58444ee29667ec871a1f36375b577ec
|
2026-01-16T00:00:00-05:00
|
Privacy amplification by random allocation
|
arXiv:2502.08202v4 Announce Type: replace Abstract: We consider the privacy amplification properties of a sampling scheme in which a user's data is used in k steps chosen randomly and uniformly from a sequence (or set) of t steps. This sampling scheme has been recently applied in the context of differentially private optimization [Chua et al., 2024a, Choquette-Choo et al., 2025] and is also motivated by communication-efficient high-dimensional private aggregation [Asi et al., 2025]. Existing analyses of this scheme either rely on privacy amplification by shuffling which leads to overly conservative bounds or require Monte Carlo simulations that are computationally prohibitive in most practical scenarios. We give the first theoretical guarantees and numerical estimation algorithms for this sampling scheme. In particular, we demonstrate that the privacy guarantees of random k-out-of-t allocation can be upper bounded by the privacy guarantees of the well-studied independent (or Poisson) subsampling in which each step uses the user's data with probability $(1+o(1))k/t$. Further, we provide two additional analysis techniques that lead to numerical improvements in several parameter regimes. Altogether, our bounds give efficiently-computable and nearly tight numerical results for random allocation applied to Gaussian noise addition.
|
https://arxiv.org/abs/2502.08202
|
Academic Papers
|
svg
|
e055f2208602ad7d50779186a7e532c32b7f629e0167bfda9acd09691ba5376d
|
2026-01-16T00:00:00-05:00
|
MixMin: Finding Data Mixtures via Convex Minimization
|
arXiv:2502.10510v3 Announce Type: replace Abstract: Modern machine learning pipelines are increasingly combining and mixing data from diverse and disparate sources, e.g., pre-training large language models. Yet, finding the optimal data mixture is a challenging and open problem. We formalize this data mixing problem as a bi-level objective: the best mixture is the one that would lead to the best model for a downstream objective. Unfortunately, this objective is generally intractable. In this paper, we make the observation that the bi-level data mixing objective becomes convex as our model class becomes larger. We develop and study a gradient-based approach for optimizing this convex objective, which we call MixMin, and test it on language modeling and chemistry tasks. MixMin was the only method that uniformly improved the data mixture in all our experiments. With MixMin, we improved the data mixture using less than 0.2% additional compute for a pythia-410M model trained on 8.2B tokens, resulting in a 1-5% relative improvement in negative log likelihood on PIQA, ARC Easy, SciQ, and OpenWebMath. Crucially, we found that MixMin mixtures for smaller models improved training of larger models, suggesting that MixMin mixtures may be scale-invariant. When mixing bioassay data to train an XGBoost model, we saw improvements to average precision scores of 0.03-0.15.
|
https://arxiv.org/abs/2502.10510
|
Academic Papers
|
svg
|
85586efbfecc6cbe50809d954fed8ca5bf3c8c67d883aab941d99f00b2b9efdd
|
2026-01-16T00:00:00-05:00
|
Exploring the Translation Mechanism of Large Language Models
|
arXiv:2502.11806v3 Announce Type: replace Abstract: While large language models (LLMs) demonstrate remarkable success in multilingual translation, their internal core translation mechanisms, even at the fundamental word level, remain insufficiently understood. To address this critical gap, this work introduces a systematic framework for interpreting the mechanism behind LLM translation from the perspective of computational components. This paper first proposes subspace-intervened path patching for precise, fine-grained causal analysis, enabling the detection of components crucial to translation tasks and subsequently characterizing their behavioral patterns in human-interpretable terms. Comprehensive experiments reveal that translation is predominantly driven by a sparse subset of components: specialized attention heads serve critical roles in extracting source language, translation indicators, and positional features, which are then integrated and processed by specific multi-layer perceptrons (MLPs) into intermediary English-centric latent representations before ultimately yielding the final translation. The significance of these findings is underscored by the empirical demonstration that targeted fine-tuning a minimal parameter subset ($<5\%$) enhances translation performance while preserving general capabilities. This result further indicates that these crucial components generalize effectively to sentence-level translation and are instrumental in elucidating more intricate translation tasks.
|
https://arxiv.org/abs/2502.11806
|
Academic Papers
|
svg
|
6d1db810eef38557e9552b241ce198472f488dd3258e13da2e87d99b4d88ed2e
|
2026-01-16T00:00:00-05:00
|
LaM-SLidE: Latent Space Modeling of Spatial Dynamical Systems via Linked Entities
|
arXiv:2502.12128v5 Announce Type: replace Abstract: Generative models are spearheading recent progress in deep learning, showcasing strong promise for trajectory sampling in dynamical systems as well. However, whereas latent space modeling paradigms have transformed image and video generation, similar approaches are more difficult for most dynamical systems. Such systems -- from chemical molecule structures to collective human behavior -- are described by interactions of entities, making them inherently linked to connectivity patterns, entity conservation, and the traceability of entities over time. Our approach, LaM-SLidE (Latent Space Modeling of Spatial Dynamical Systems via Linked Entities), bridges the gap between: (1) keeping the traceability of individual entities in a latent system representation, and (2) leveraging the efficiency and scalability of recent advances in image and video generation, where pre-trained encoder and decoder enable generative modeling directly in latent space. The core idea of LaM-SLidE is the introduction of identifier representations (IDs) that enable the retrieval of entity properties and entity composition from latent system representations, thus fostering traceability. Experimentally, across different domains, we show that LaM-SLidE performs favorably in terms of speed, accuracy, and generalizability. Code is available at https://github.com/ml-jku/LaM-SLidE .
|
https://arxiv.org/abs/2502.12128
|
Academic Papers
|
svg
|
02ab33241879aa72600747c83ec89df56819c705baf78050a082c658bf36050f
|
2026-01-16T00:00:00-05:00
|
Text Classification Under Class Distribution Shift: A Survey
|
arXiv:2502.12965v3 Announce Type: replace Abstract: The basic underlying assumption of machine learning (ML) models is that the training and test data are sampled from the same distribution. However, in daily practice, this assumption is often broken, i.e. the distribution of the test data changes over time, which hinders the application of conventional ML models. One domain where the distribution shift naturally occurs is text classification, since people always find new topics to discuss. To this end, we survey research articles studying open-set text classification and related tasks. We divide the methods in this area based on the constraints that define the kind of distribution shift and the corresponding problem formulation, i.e. learning with the Universum, zero-shot learning, and open-set learning. We next discuss the predominant mitigation approaches for each problem setup. We further identify several future work directions, aiming to push the boundaries beyond the state of the art. Finally, we explain how continual learning can solve many of the issues caused by the shifting class distribution. We maintain a list of relevant papers at https://github.com/Eduard6421/Open-Set-Survey.
|
https://arxiv.org/abs/2502.12965
|
Academic Papers
|
svg
|
9d7d710f18c85ef9c15821472fa8a37f1b5e003f02f020031ccf0238d313faf6
|
2026-01-16T00:00:00-05:00
|
A Taxonomy for Evaluating Generalist Robot Manipulation Policies
|
arXiv:2503.01238v2 Announce Type: replace Abstract: Machine learning for robot manipulation promises to unlock generalization to novel tasks and environments. But how should we measure the progress of these policies towards generalization? Evaluating and quantifying generalization is the Wild West of modern robotics, with each work proposing and measuring different types of generalization in their own, often difficult to reproduce settings. In this work, our goal is (1) to outline the forms of generalization we believe are important for robot manipulation in a comprehensive and fine-grained manner, and (2) to provide reproducible guidelines for measuring these notions of generalization. We first propose STAR-Gen, a taxonomy of generalization for robot manipulation structured around visual, semantic, and behavioral generalization. Next, we instantiate STAR-Gen with two case studies on real-world benchmarking: one based on open-source models and the Bridge V2 dataset, and another based on the bimanual ALOHA 2 platform that covers more dexterous and longer horizon tasks. Our case studies reveal many interesting insights: for example, we observe that open-source vision-language-action models often struggle with semantic generalization, despite pre-training on internet-scale language datasets. We provide videos and other supplementary material at our website stargen-taxonomy.github.io.
|
https://arxiv.org/abs/2503.01238
|
Academic Papers
|
svg
|
e7de395e4cfdd5bdb29b5c9722a0a7d03c89c60ecb2daa14c446ab761a4c1be4
|
2026-01-16T00:00:00-05:00
|
Human-AI Experience in Integrated Development Environments: A Systematic Literature Review
|
arXiv:2503.06195v3 Announce Type: replace Abstract: The integration of Artificial Intelligence (AI) into Integrated Development Environments (IDEs) is reshaping software development, fundamentally altering how developers interact with their tools. This shift marks the emergence of Human-AI Experience in Integrated Development Environment (in-IDE HAX), a field that explores the evolving dynamics of Human-Computer Interaction in AI-assisted coding environments. Despite rapid adoption, research on in-IDE HAX remains fragmented, which highlights the need for a unified overview of current practices, challenges, and opportunities. To provide a structured overview of existing research, we conduct a systematic literature review of 90 studies, summarizing current findings and outlining areas for further investigation. We organize key insights from reviewed studies into three aspects: Impact, Design, and Quality of AI-based systems inside IDEs. Impact findings show that AI-assisted coding enhances developer productivity but also introduces challenges, such as verification overhead and over-reliance. Design studies show that effective interfaces surface context, provide explanations and transparency of suggestions, and support user control. Quality studies document risks in correctness, maintainability, and security. For future research, priorities include productivity studies, design of assistance, and audit of AI-generated code. The agenda calls for larger and longer evaluations, stronger audit and verification assets, broader coverage across the software life cycle, and adaptive assistance under user control.
|
https://arxiv.org/abs/2503.06195
|
Academic Papers
|
svg
|
410de6de63197c554809099cad796533eac8d87b6cca191cd6499c5c2a67f480
|
2026-01-16T00:00:00-05:00
|
RS2-SAM2: Customized SAM2 for Referring Remote Sensing Image Segmentation
|
arXiv:2503.07266v4 Announce Type: replace Abstract: Referring Remote Sensing Image Segmentation (RRSIS) aims to segment target objects in remote sensing (RS) images based on textual descriptions. Although Segment Anything Model 2 (SAM2) has shown remarkable performance in various segmentation tasks, its application to RRSIS presents several challenges, including understanding the text-described RS scenes and generating effective prompts from text. To address these issues, we propose \textbf{RS2-SAM2}, a novel framework that adapts SAM2 to RRSIS by aligning the adapted RS features and textual features while providing pseudo-mask-based dense prompts. Specifically, we employ a union encoder to jointly encode the visual and textual inputs, generating aligned visual and text embeddings as well as multimodal class tokens. A bidirectional hierarchical fusion module is introduced to adapt SAM2 to RS scenes and align adapted visual features with the visually enhanced text embeddings, improving the model's interpretation of text-described RS scenes. To provide precise target cues for SAM2, we design a mask prompt generator, which takes the visual embeddings and class tokens as input and produces a pseudo-mask as the dense prompt of SAM2. Experimental results on several RRSIS benchmarks demonstrate that RS2-SAM2 achieves state-of-the-art performance.
|
https://arxiv.org/abs/2503.07266
|
Academic Papers
|
svg
|
d93ecaea41c879bc9039f9988980da5d7c1a271692902123ae63bb2a210be965
|
2026-01-16T00:00:00-05:00
|
High-Quality 3D Head Reconstruction from Any Single Portrait Image
|
arXiv:2503.08516v3 Announce Type: replace Abstract: In this work, we introduce a novel high-fidelity 3D head reconstruction method from a single portrait image, regardless of perspective, expression, or accessories. Despite significant efforts in adapting 2D generative models for novel view synthesis and 3D optimization, most methods struggle to produce high-quality 3D portraits. The lack of crucial information, such as identity, expression, hair, and accessories, limits these approaches in generating realistic 3D head models. To address these challenges, we construct a new high-quality dataset containing 227 sequences of digital human portraits captured from 96 different perspectives, totalling 21,792 frames, featuring diverse expressions and accessories. To further improve performance, we integrate identity and expression information into the multi-view diffusion process to enhance facial consistency across views. Specifically, we apply identity- and expression-aware guidance and supervision to extract accurate facial representations, which guide the model and enforce objective functions to ensure high identity and expression consistency during generation. Finally, we generate an orbital video around the portrait consisting of 96 multi-view frames, which can be used for 3D portrait model reconstruction. Our method demonstrates robust performance across challenging scenarios, including side-face angles and complex accessories.
|
https://arxiv.org/abs/2503.08516
|
Academic Papers
|
svg
|
4aca44a56684259c89cb35e20958794b352b79c10768e4e9a545c674d7b54f42
|
2026-01-16T00:00:00-05:00
|
ShuffleGate: Scalable Feature Optimization for Recommender Systems via Batch-wise Sensitivity Learning
|
arXiv:2503.09315v4 Announce Type: replace Abstract: Feature optimization, specifically Feature Selection (FS) and Dimension Selection (DS), is critical for the efficiency and generalization of large-scale recommender systems. While conceptually related, these tasks are typically tackled with isolated solutions that often suffer from ambiguous importance scores or prohibitive computational costs. In this paper, we propose ShuffleGate, a unified and interpretable mechanism that estimates component importance by measuring the model's sensitivity to information loss. Unlike conventional gating that learns relative weights, ShuffleGate introduces a batch-wise shuffling strategy to effectively erase information in an end-to-end differentiable manner. This paradigm shift yields naturally polarized importance distributions, bridging the long-standing "search-retrain gap" and distinguishing essential signals from noise without complex threshold tuning. ShuffleGate provides a unified solution across granularities. It achieves state-of-the-art performance on feature and dimension selection tasks. Furthermore, to demonstrate its extreme scalability and precision, we extend ShuffleGate to evaluate fine-grained embedding entries. Experiments show it can identify and prune 99.9% of redundant embedding parameters on the Criteo dataset while maintaining competitive AUC, verifying its robustness in massive search spaces. Finally, the method has been successfully deployed in a top-tier industrial video recommendation platform. By compressing the concatenated input dimension from over 10,000 to 1,000+, it achieved a 91% increase in training throughput while serving billions of daily requests without performance degradation.
|
https://arxiv.org/abs/2503.09315
|
Academic Papers
|
svg
|
753efafbf87aeccf65da8098dde791a4d6922d254b21dedab36a9ac3aaf39744
|
2026-01-16T00:00:00-05:00
|
TriDF: Triplane-Accelerated Density Fields for Few-Shot Remote Sensing Novel View Synthesis
|
arXiv:2503.13347v2 Announce Type: replace Abstract: Remote sensing novel view synthesis (NVS) offers significant potential for 3D interpretation of remote sensing scenes, with important applications in urban planning and environmental monitoring. However, remote sensing scenes frequently lack sufficient multi-view images due to acquisition constraints. While existing NVS methods tend to overfit when processing limited input views, advanced few-shot NVS methods are computationally intensive and perform sub-optimally in remote sensing scenes. This paper presents TriDF, an efficient hybrid 3D representation for fast remote sensing NVS from as few as 3 input views. Our approach decouples color and volume density information, modeling them independently to reduce the computational burden on implicit radiance fields and accelerate reconstruction. We explore the potential of the triplane representation in few-shot NVS tasks by mapping high-frequency color information onto this compact structure, and the direct optimization of feature planes significantly speeds up convergence. Volume density is modeled as continuous density fields, incorporating reference features from neighboring views through image-based rendering to compensate for limited input data. Additionally, we introduce depth-guided optimization based on point clouds, which effectively mitigates the overfitting problem in few-shot NVS. Comprehensive experiments across multiple remote sensing scenes demonstrate that our hybrid representation achieves a 30x speed increase compared to NeRF-based methods, while simultaneously improving rendering quality metrics over advanced few-shot methods (7.4% increase in PSNR and 3.4% in SSIM). The code is publicly available at https://github.com/kanehub/TriDF
|
https://arxiv.org/abs/2503.13347
|
Academic Papers
|
svg
|
6e6451dbd740b75300facb106f70aec6402db68b8e1e9f120bd9f0acbf360563
|
2026-01-16T00:00:00-05:00
|
Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models
|
arXiv:2503.17523v3 Announce Type: replace Abstract: Large language models (LLMs) are increasingly used as agents that interact with users and with the world. To do so successfully, LLMs must construct representations of the world and form probabilistic beliefs about them. To provide personalized recommendations, for example, the LLM needs to infer a user's preferences from their behavior over multiple interactions. The Bayesian inference framework lays out the optimal way for an agent to update its beliefs as it receives new information. We first show that LLMs fall far short of the standard defined by the Bayesian framework. We then show that by teaching LLMs to mimic the predictions of the normative Bayesian model, we can dramatically improve their ability to update their beliefs; this ability generalizes to new tasks. We conclude that LLMs can effectively learn reasoning skills from examples and generalize those skills to new domains.
|
https://arxiv.org/abs/2503.17523
|
Academic Papers
|
svg
|
f89f8e838d941a24920e1b88a3a2101b8900746e19172b41186632c300c1a624
|
2026-01-16T00:00:00-05:00
|
CoinFT: A Coin-Sized, Capacitive 6-Axis Force Torque Sensor for Robotic Applications
|
arXiv:2503.19225v3 Announce Type: replace Abstract: We introduce CoinFT, a capacitive 6-axis force/torque (F/T) sensor that is compact, light, low-cost, and robust with an average root-mean-squared error of 0.16N for force and 1.08mNm for moment when the input ranges from 0~14N and 0~5N in normal and shear directions, respectively. CoinFT is a stack of two rigid PCBs with comb-shaped electrodes connected by an array of silicone rubber pillars. The microcontroller interrogates the electrodes in different subsets in order to enhance sensitivity for measuring 6-axis F/T. The combination of features of CoinFT enables various contact-rich robot interactions across different embodiment domains including drones, robot end-effectors, and wearable haptic devices. We demonstrate the utility of CoinFT through two representative applications: a multi-axial contact-probing experiment in which a CoinFT mounted beneath a hemispherical fingertip measures 6-axes of force and torque representative of manipulation scenarios, and an attitude-based force-control task on a drone. The design, fabrication, and firmware of CoinFT are open-sourced at https://coin-ft.github.io/.
|
https://arxiv.org/abs/2503.19225
|
Academic Papers
|
svg
|
4e7f34dfc28af7251476903c88e8e24c988ff2634359e2e73fecd44a6417e53e
|
2026-01-16T00:00:00-05:00
|
Testing Low-Resource Language Support in LLMs Using Language Proficiency Exams: the Case of Luxembourgish
|
arXiv:2504.01667v4 Announce Type: replace Abstract: Large Language Models (LLMs) have become an increasingly important tool in research and society at large. While LLMs are regularly used all over the world by experts and laypeople alike, they are predominantly developed with English-speaking users in mind, performing well in English and other widespread languages while less-resourced languages such as Luxembourgish are seen as a lower priority. This lack of attention is also reflected in the sparsity of available evaluation tools and datasets. In this study, we investigate the viability of language proficiency exams as such evaluation tools for the Luxembourgish language. We find that large models such as Claude and DeepSeek-R1 typically achieve high scores, while smaller models show weak performances. We also find that the performances in such language exams can be used to predict performances in other NLP tasks in Luxembourgish.
|
https://arxiv.org/abs/2504.01667
|
Academic Papers
|
svg
|
df319912355c04cbce325b058401caa62f7b20838b72cd7a51dc9b47fbde0638
|
2026-01-16T00:00:00-05:00
|
Barrier Certificates for Unknown Systems with Latent States and Polynomial Dynamics using Bayesian Inference
|
arXiv:2504.01807v3 Announce Type: replace Abstract: Certifying safety in dynamical systems is crucial, but barrier certificates - widely used to verify that system trajectories remain within a safe region - typically require explicit system models. When dynamics are unknown, data-driven methods can be used instead, yet obtaining a valid certificate requires rigorous uncertainty quantification. For this purpose, existing methods usually rely on full-state measurements, limiting their applicability. This paper proposes a novel approach for synthesizing barrier certificates for unknown systems with latent states and polynomial dynamics. A Bayesian framework is employed, where a prior in state-space representation is updated using output data via a targeted marginal Metropolis-Hastings sampler. The resulting samples are used to construct a barrier certificate through a sum-of-squares program. Probabilistic guarantees for its validity with respect to the true, unknown system are obtained by testing on an additional set of posterior samples. The approach and its probabilistic guarantees are illustrated through a numerical simulation.
|
https://arxiv.org/abs/2504.01807
|
Academic Papers
|
svg
|
0862764c277bce35815b160680b56c254e056fb4212cc6192f40f0ab3a147008
|
2026-01-16T00:00:00-05:00
|
A vector bundle approach to Nash equilibria
|
arXiv:2504.03456v2 Announce Type: replace Abstract: We use vector bundles to study the locus of totally mixed Nash equilibria of an $n$-player game in normal form, which we call the Nash equilibrium scheme. When the payoff tensor format is balanced, we study the Nash discriminant variety, i.e., the algebraic variety of games whose Nash equilibrium scheme is nonreduced or has a positive dimensional component. We prove that this variety has codimension one. We classify all possible components of the Nash equilibrium scheme for a binary three-player game. We prove that if the payoff tensor is of boundary format, then the Nash discriminant variety has two components: an irreducible hypersurface and a larger-codimensional component. A generic game with an unbalanced payoff tensor format does not admit totally mixed Nash equilibria. We define the Nash resultant variety of games admitting a positive number of totally mixed Nash equilibria. We prove that it is irreducible and determine its codimension and degree.
|
https://arxiv.org/abs/2504.03456
|
Academic Papers
|
svg
|
5b092ca999c196adbe416d7fc2692630084dc7b38d3c94fbeacfe14a099ada9e
|
2026-01-16T00:00:00-05:00
|
Evaluating Large Language Models for Fair and Reliable Organ Allocation
|
arXiv:2504.03716v2 Announce Type: replace Abstract: Medical institutions are considering the use of LLMs in high-stakes clinical decision-making, such as organ allocation. In such sensitive use cases, evaluating fairness is imperative. However, existing evaluation methods often fall short; benchmarks are too simplistic to capture real-world complexity, and accuracy-based metrics fail to address the absence of a clear ground truth. To realistically and fairly model organ allocation, specifically kidney allocation, we begin by testing the medical knowledge of LLMs to determine whether they understand the clinical factors required to make sound allocation decisions. Building on this foundation, we design two tasks: (1) Choose-One and (2) Rank-All. In Choose-One, LLMs select a single candidate from a list of potential candidates to receive a kidney. In this scenario, we assess fairness across demographics using traditional fairness metrics, such as proportional parity. In Rank-All, LLMs rank all candidates waiting for a kidney, reflecting real-world allocation processes more closely, where an organ is passed down a ranked list until allocated. Our evaluation on three LLMs reveals a divergence between fairness metrics: while exposure-based metrics suggest equitable outcomes, probability-based metrics uncover systematic preferential sorting, where specific groups were clustered in upper-ranking tiers. Furthermore, we observe that demographic preferences are highly task-dependent, showing inverted trends between Choose-One and Rank-All tasks, even when considering the topmost rank. Overall, our results indicate that current LLMs can introduce inequalities in real-world allocation scenarios, underscoring the urgent need for rigorous fairness evaluation and human oversight before their use in high-stakes decision-making.
|
https://arxiv.org/abs/2504.03716
|
Academic Papers
|
svg
|
f91727b72fee72b7e5f7fbdac44671f5cd7b6a52cfedb384e056e5b0587ed673
|
2026-01-16T00:00:00-05:00
|
WebRollback: Enhancing Web Agents with Explicit Rollback Mechanisms
|
arXiv:2504.11788v3 Announce Type: replace Abstract: With recent advancements in large language models, web agents have been greatly improved. However, dealing with complex and dynamic web environments requires more advanced planning and search abilities. Previous studies usually adopt a greedy one-way search strategy, which may struggle to recover from erroneous states. In this work, we enhance web agents with an explicit rollback mechanism, enabling the agent to revert to a previous state in its navigation trajectory. This mechanism gives models the flexibility to directly control the search process, leading to an effective and efficient web navigation method. We conduct experiments on two live web navigation benchmarks with zero-shot and fine-tuning settings. The results demonstrate the effectiveness of our proposed approach.
|
https://arxiv.org/abs/2504.11788
|
Academic Papers
|
svg
|