Columns:
id: string, length 64
published: string, length 19-25
title: string, length 7-262
description: string, length 6-54.4k
link: string, length 31-227
category: 6 classes
image: string, length 3-247
8c7b225d9eec3a10461b551ba0a18a12dc1b7c128ad9e748bf670d6f943e98dc
2026-01-01T00:00:00-05:00
Towards autonomous time-calibration of large quantum-dot devices: Detection, real-time feedback, and noise spectroscopy
arXiv:2512.24894v1 Announce Type: cross Abstract: The performance and scalability of semiconductor quantum-dot (QD) qubits are limited by electrostatic drift and charge noise that shift operating points and destabilize qubit parameters. As systems expand to large one- and two-dimensional arrays, manual recalibration becomes impractical, creating a need for autonomous stabilization frameworks. Here, we introduce a method that uses the full network of charge-transition lines in repeatedly acquired double-quantum-dot charge stability diagrams (CSDs) as a multidimensional probe of the local electrostatic environment. By accurately tracking the motion of selected transitions in time, we detect voltage drifts, identify abrupt charge reconfigurations, and apply compensating updates to maintain stable operating conditions. We demonstrate our approach on a 10-QD device, showing robust stabilization and real-time diagnostic access to dot-specific noise processes. The high acquisition rate of radio-frequency reflectometry CSD measurements also enables time-domain noise spectroscopy, allowing the extraction of noise power spectral densities, the identification of two-level fluctuators, and the analysis of spatial noise correlations across the array. From our analysis, we find that the background noise at 100 $\mu$Hz is dominated by drift with a power law of $1/f^2$, accompanied by a few dominant two-level fluctuators and an average linear correlation length of $(188 \pm 38)$ nm in the device. These capabilities form the basis of a scalable, autonomous calibration and characterization module for QD-based quantum processors, providing essential feedback for long-duration, high-fidelity qubit operations.
https://arxiv.org/abs/2512.24894
Academic Papers
svg
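The time-domain noise spectroscopy described above can be illustrated with a minimal, numpy-only sketch (not the authors' pipeline): integrated white noise has the $1/f^2$ spectrum the abstract reports for background drift, and an averaged periodogram recovers that power law.

```python
import numpy as np

rng = np.random.default_rng(0)
n, seg = 1 << 16, 4096
# Integrated white noise (a random walk) has a 1/f^2 power spectral density,
# the same power law reported for the background drift.
drift = np.cumsum(rng.normal(size=n))

# Bartlett-style averaged periodogram, numpy only.
segments = drift.reshape(-1, seg)
segments = segments - segments.mean(axis=1, keepdims=True)
psd = (np.abs(np.fft.rfft(segments, axis=1)) ** 2).mean(axis=0) / seg
freq = np.fft.rfftfreq(seg, d=1.0)

band = (freq > 1e-3) & (freq < 1e-1)
slope = np.polyfit(np.log10(freq[band]), np.log10(psd[band]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")  # expected near -2
```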
b699a5c83021dad2ccd2a994503b9f7a88f281f1c264553ad1c4211e5fb2a280
2026-01-01T00:00:00-05:00
Adaptive Resource Orchestration for Distributed Quantum Computing Systems
arXiv:2512.24902v1 Announce Type: cross Abstract: Scaling quantum computing beyond a single device requires networking many quantum processing units (QPUs) into a coherent quantum-HPC system. We propose the Modular Entanglement Hub (ModEn-Hub) architecture: a hub-and-spoke photonic interconnect paired with a real-time quantum network orchestrator. ModEn-Hub centralizes entanglement sources and shared quantum memory to deliver on-demand, high-fidelity Bell pairs across heterogeneous QPUs, while the control plane schedules teleportation-based non-local gates, launches parallel entanglement attempts, and maintains a small ebit cache. To quantify benefits, we implement a lightweight, reproducible Monte Carlo study under realistic loss and tight round budgets, comparing a naive sequential baseline to an orchestrated policy with logarithmically scaled parallelism and opportunistic caching. Across 1-128 QPUs and 2,500 trials per point, ModEn-Hub-style orchestration sustains about 90% teleportation success while the baseline degrades toward about 30%, at the cost of higher average entanglement attempts (about 10-12 versus about 3). These results provide clear, high-level evidence that adaptive resource orchestration in the ModEn-Hub enables scalable and efficient quantum-HPC operation on near-term hardware.
https://arxiv.org/abs/2512.24902
Academic Papers
svg
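The round-budgeted Monte Carlo comparison can be mimicked with a toy simulation; the heralding probability, round budget, and parallelism below are illustrative guesses, not the paper's parameters.

```python
import random

def trial(rng, p=0.1, rounds=10, parallel=1):
    """One teleportation attempt sequence: succeed once any entanglement
    attempt heralds within the round budget; return (success, attempts)."""
    attempts = 0
    for _ in range(rounds):
        attempts += parallel
        if any(rng.random() < p for _ in range(parallel)):
            return True, attempts
    return False, attempts

rng = random.Random(0)
n = 2000
base = sum(trial(rng, parallel=1)[0] for _ in range(n)) / n
orch = sum(trial(rng, parallel=5)[0] for _ in range(n)) / n
print(f"sequential baseline: {base:.3f}, parallel orchestration: {orch:.3f}")
```

As in the abstract, parallel attempts buy a higher success rate at the cost of more entanglement attempts per delivered ebit.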
f34204e8fa7028350f61ad2563a302af0c67960b35c8ffe793475b1415bc7441
2026-01-01T00:00:00-05:00
No Vision, No Wearables: 5G-based 2D Human Pose Recognition with Integrated Sensing and Communications
arXiv:2512.24923v1 Announce Type: cross Abstract: With the increasing maturity of contactless human pose recognition (HPR) technology, indoor interactive applications have raised higher demands for natural, controller-free interaction methods. However, current mainstream HPR solutions relying on vision or radio-frequency (RF) sensing (including WiFi and radar) still face various challenges in practical deployment, such as privacy concerns, susceptibility to occlusion, dedicated equipment and functions, and limited sensing resolution and range. 5G-based integrated sensing and communication (ISAC) technology, by merging communication and sensing functions, offers a new approach to address these challenges in contactless HPR. We propose a practical 5G-based ISAC system capable of inferring 2D HPR from uplink sounding reference signals (SRS). Specifically, rich features are extracted from multiple domains, and an encoder achieves unified alignment and representation in a latent space. Subsequently, low-dimensional features are fused to output the human pose state. Experimental results demonstrate that in typical indoor environments, our proposed 5G-based ISAC HPR system significantly outperforms current mainstream baseline solutions in HPR performance, providing a solid technical foundation for universal human-computer interaction.
https://arxiv.org/abs/2512.24923
Academic Papers
svg
3eab905a9c44d215354e955f42ed56e10b15db0e442f018f240ab51b801f0d4e
2026-01-01T00:00:00-05:00
Are First-Order Diffusion Samplers Really Slower? A Fast Forward-Value Approach
arXiv:2512.24927v1 Announce Type: cross Abstract: Higher-order ODE solvers have become a standard tool for accelerating diffusion probabilistic model (DPM) sampling, motivating the widespread view that first-order methods are inherently slower and that increasing discretization order is the primary path to faster generation. This paper challenges this belief and revisits acceleration from a complementary angle: beyond solver order, the placement of DPM evaluations along the reverse-time dynamics can substantially affect sampling accuracy in the low neural function evaluation (NFE) regime. We propose a novel training-free, first-order sampler whose leading discretization error has the opposite sign to that of DDIM. Algorithmically, the method approximates the forward-value evaluation via a cheap one-step lookahead predictor. We provide theoretical guarantees showing that the resulting sampler provably approximates the ideal forward-value trajectory while retaining first-order convergence. Empirically, across standard image generation benchmarks (CIFAR-10, ImageNet, FFHQ, and LSUN), the proposed sampler consistently improves sample quality under the same NFE budget and can be competitive with, and sometimes outperform, state-of-the-art higher-order samplers. Overall, the results suggest that the placement of DPM evaluations provides an additional and largely independent design angle for accelerating diffusion sampling.
https://arxiv.org/abs/2512.24927
Academic Papers
svg
99f47dfd81d8f3dc9fd21e9a73e66d6e0d76b34bed4605933bbb73f34d0536f7
2026-01-01T00:00:00-05:00
Geometric characterisation of structural and regular equivalences in undirected (hyper)graphs
arXiv:2512.24961v1 Announce Type: cross Abstract: Similarity notions between vertices in a graph, such as structural and regular equivalence, are among the main ingredients of clustering tools in complex network science. We generalise structural and regular equivalences to undirected hypergraphs and provide a characterisation of structural and regular equivalences of undirected graphs and hypergraphs through neighbourhood graphs and Ollivier-Ricci curvature. Our characterisation sheds new light on these similarity notions, opening a new avenue for their exploration. These characterisations also enable the construction of a possibly wide family of regular partitions, thereby offering a new route to a task that has so far been computationally challenging.
https://arxiv.org/abs/2512.24961
Academic Papers
svg
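For the classical graph case, structural equivalence is easy to check directly; a minimal sketch of that definition follows (the paper's hypergraph generalisation and Ollivier-Ricci characterisation are not reproduced here).

```python
# Structural equivalence in an undirected graph: u and v are structurally
# equivalent iff they have identical neighbourhoods once u and v themselves
# are ignored.
def structurally_equivalent(adj, u, v):
    return (adj[u] - {u, v}) == (adj[v] - {u, v})

# Toy graph: 0 and 1 both attach to {2, 3} and to each other.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
print(structurally_equivalent(adj, 0, 1))  # True
print(structurally_equivalent(adj, 2, 3))  # True: both see exactly {0, 1}
print(structurally_equivalent(adj, 0, 2))  # False
```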
1e51c27160d67891650fe3fe850d1969c93e15c4c4c556c36f9e14bfb9128471
2026-01-01T00:00:00-05:00
The Impact of LLMs on Online News Consumption and Production
arXiv:2512.24968v1 Announce Type: cross Abstract: Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard. Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a consistent and moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can have adverse effects on large publishers by reducing total website traffic by 23% and real consumer traffic by 14% compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet. The share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.
https://arxiv.org/abs/2512.24968
Academic Papers
svg
259e2f4ae8f7140fec7bb57b63e7dbcbc64eaa8342d6d0f3c4d65d9ea40cbd02
2026-01-01T00:00:00-05:00
Large language models and the entropy of English
arXiv:2512.24969v1 Announce Type: cross Abstract: We use large language models (LLMs) to uncover long-ranged structure in English texts from a variety of sources. The conditional entropy or code length in many cases continues to decrease with context length at least to $N\sim 10^4$ characters, implying that there are direct dependencies or interactions across these distances. A corollary is that there are small but significant correlations between characters at these separations, as we show from the data independent of models. The distribution of code lengths reveals an emergent certainty about an increasing fraction of characters at large $N$. Over the course of model training, we observe different dynamics at long and short context lengths, suggesting that long-ranged structure is learned only gradually. Our results constrain efforts to build statistical physics models of LLMs or language itself.
https://arxiv.org/abs/2512.24969
Academic Papers
svg
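The shrinking conditional entropy with context length can be demonstrated with a plug-in $n$-gram estimator on a toy corpus, a crude stand-in for the paper's LLM-based code lengths.

```python
import math
from collections import Counter

def cond_entropy(text, n):
    """Plug-in estimate of H(next char | previous n chars), in bits."""
    ctx = Counter(text[i:i + n] for i in range(len(text) - n))
    joint = Counter(text[i:i + n + 1] for i in range(len(text) - n))
    total = sum(joint.values())
    return -sum((c / total) * math.log2(c / ctx[g[:n]])
                for g, c in joint.items())

text = "the quick brown fox jumps over the lazy dog " * 50
entropies = [cond_entropy(text, n) for n in (0, 1, 2, 4)]
print([round(h, 3) for h in entropies])  # decreases as context grows
```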
a07c7b889a2cc72e40cd0dedfe6dd295d0e5bd483b42b7885d6d292a78470991
2026-01-01T00:00:00-05:00
SymSeqBench: a unified framework for the generation and analysis of rule-based symbolic sequences and datasets
arXiv:2512.24977v1 Announce Type: cross Abstract: Sequential structure is a key feature of multiple domains of natural cognition and behavior, such as language, movement and decision-making. Likewise, it is also a central property of tasks to which we would like to apply artificial intelligence. It is therefore of great importance to develop frameworks that allow us to evaluate sequence learning and processing in a domain-agnostic fashion, whilst simultaneously providing a link to formal theories of computation and computability. To address this need, we introduce two complementary software tools: SymSeq, designed to rigorously generate and analyze structured symbolic sequences, and SeqBench, a comprehensive benchmark suite of rule-based sequence processing tasks to evaluate the performance of artificial learning systems in cognitively relevant domains. In combination, SymSeqBench offers versatility in investigating sequential structure across diverse knowledge domains, including experimental psycholinguistics, cognitive psychology, behavioral analysis, neuromorphic computing and artificial intelligence. Due to its basis in Formal Language Theory (FLT), SymSeqBench provides researchers in multiple domains with a convenient and practical way to apply the concepts of FLT to conceptualize and standardize their experiments, thus advancing our understanding of cognition and behavior through shared computational frameworks and formalisms. The tool is modular, openly available and accessible to the research community.
https://arxiv.org/abs/2512.24977
Academic Papers
svg
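A flavour of rule-based symbolic sequence generation, using the context-free language $a^n b^n$ as a stand-in example; this is illustrative only and does not show SymSeqBench's own API.

```python
import random

def sample_anbn(rng, p_stop=0.3, max_n=20):
    """Sample a string from the context-free language a^n b^n (n >= 1)."""
    n = 1
    while rng.random() > p_stop and n < max_n:
        n += 1
    return "a" * n + "b" * n

def is_anbn(s):
    """Recognizer for a^n b^n, a classic beyond-regular FLT benchmark."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

rng = random.Random(0)
seqs = [sample_anbn(rng) for _ in range(5)]
print(seqs, all(is_anbn(s) for s in seqs))
```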
6d5b8a5733fa9bd7a6d698c13bcb8b173ab47568692d43900cbee0449f7ba4fc
2026-01-01T00:00:00-05:00
Basic Inequalities for First-Order Optimization with Applications to Statistical Risk Analysis
arXiv:2512.24999v1 Announce Type: cross Abstract: We introduce \textit{basic inequalities} for first-order iterative optimization algorithms, forming a simple and versatile framework that connects implicit and explicit regularization. While related inequalities appear in the literature, we isolate and highlight a specific form and develop it as a well-rounded tool for statistical analysis. Let $f$ denote the objective function to be optimized. Given a first-order iterative algorithm initialized at $\theta_0$ with current iterate $\theta_T$, the basic inequality upper bounds $f(\theta_T)-f(z)$ for any reference point $z$ in terms of the accumulated step sizes and the distances between $\theta_0$, $\theta_T$, and $z$. The bound translates the number of iterations into an effective regularization coefficient in the loss function. We demonstrate this framework through analyses of training dynamics and prediction risk bounds. In addition to revisiting and refining known results on gradient descent, we provide new results for mirror descent with Bregman divergence projection, for generalized linear models trained by gradient descent and exponentiated gradient descent, and for randomized predictors. We illustrate and supplement these theoretical findings with experiments on generalized linear models.
https://arxiv.org/abs/2512.24999
Academic Papers
svg
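A concrete instance of such a basic inequality is the classical guarantee for gradient descent on a convex $L$-smooth objective: with step size $\eta \le 1/L$ and initialization $\theta_0$, $f(\theta_T) - f(z) \le \|\theta_0 - z\|^2 / (2\eta T)$ for any reference point $z$, with $2\eta T$ playing the role of the accumulated step sizes. A quick numerical check on least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 20))
b = rng.normal(size=60)

f = lambda th: 0.5 * np.sum((A @ th - b) ** 2)
grad = lambda th: A.T @ (A @ th - b)

L = np.linalg.norm(A, 2) ** 2          # smoothness constant of f
eta, T = 1.0 / L, 200
theta0 = np.zeros(20)
theta = theta0.copy()
for _ in range(T):
    theta = theta - eta * grad(theta)

z = np.linalg.lstsq(A, b, rcond=None)[0]        # reference point: a minimizer
bound = np.sum((theta0 - z) ** 2) / (2 * eta * T)
gap = f(theta) - f(z)
print(f"suboptimality {gap:.3e} <= bound {bound:.3e}: {gap <= bound}")
```

Read as regularization: for fixed $z$, shrinking $\eta T$ tightens the pull toward $\theta_0$, which is the implicit-regularization effect the abstract describes.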
f09b6404ba30d4cf660ac76bc12763241cc0eb60cb7c5abc2d1f328b802818f8
2026-01-01T00:00:00-05:00
Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions
arXiv:2010.11450v2 Announce Type: replace Abstract: A soft-max function has two main efficiency measures: (1) approximation - which corresponds to how well it approximates the maximum function, (2) smoothness - which shows how sensitive it is to changes of its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called exponential mechanism, has optimal tradeoff between approximation measured in terms of expected additive approximation and smoothness measured with respect to R\'enyi Divergence. We introduce a soft-max function, called "piecewise linear soft-max", with optimal tradeoff between approximation, measured in terms of worst-case additive approximation and smoothness, measured with respect to $\ell_q$-norm. The worst-case approximation guarantee of the piecewise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications [Martins et al. '16, Laha et al. '18] and is not satisfied by the exponential mechanism. Moreover, the $\ell_q$-smoothness is suitable for applications in Mechanism Design and Game Theory where the piecewise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called power mechanism, with optimal tradeoff between expected \textit{multiplicative} approximation and smoothness with respect to the R\'enyi Divergence, which provides improved theoretical and practical results in differentially private submodular optimization.
https://arxiv.org/abs/2010.11450
Academic Papers
svg
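The approximation-smoothness tradeoff is easiest to see for the log-sum-exp smoothing of the max, which satisfies $\max_i x_i \le \frac{1}{\lambda}\log\sum_i e^{\lambda x_i} \le \max_i x_i + \frac{\ln n}{\lambda}$; this is the textbook bound behind the exponential mechanism, not the paper's piecewise linear or power mechanisms.

```python
import numpy as np

def lse_max(x, lam):
    """Smooth max via log-sum-exp: approximation improves as lam grows,
    while the induced soft-max distribution becomes less smooth."""
    x = np.asarray(x, dtype=float)
    return np.log(np.sum(np.exp(lam * x))) / lam

x = np.array([0.3, 1.7, 1.1, -0.5])
for lam in (1.0, 10.0, 100.0):
    err = lse_max(x, lam) - x.max()
    print(f"lam={lam:>5}: additive error {err:.4f} <= ln(n)/lam = {np.log(x.size)/lam:.4f}")
```

Increasing $\lambda$ drives the additive error to zero but makes the underlying soft-max weights increasingly sensitive to perturbations of $x$, which is exactly the tradeoff the paper optimizes.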
8bc0089a9924909af2b255208df1561460841b0d5ed5f5052f535a39f0e0a655
2026-01-01T00:00:00-05:00
Pointwise Distance Distributions for detecting near-duplicates in large materials databases
arXiv:2108.04798v4 Announce Type: replace Abstract: Many real objects are modeled as discrete sets of points, such as corners or other salient features. For our main applications in chemistry, points represent atomic centers in a molecule or a solid material. We study the problem of classifying discrete (finite and periodic) sets of unordered points under isometry, which is any transformation preserving distances in a metric space. Experimental noise motivates the new practical requirement to make such invariants Lipschitz continuous so that perturbing every point in its epsilon-neighborhood changes the invariant up to a constant multiple of epsilon in a suitable distance satisfying all metric axioms. Since the given points are unordered, the key challenge is to compute all invariants and metrics in a near-linear time of the input size. We define the Pointwise Distance Distribution (PDD) for any discrete set and prove, in addition to the properties above, the completeness of PDD for all periodic sets in general position. The PDD can compare nearly 2 million crystals from the world's five largest databases within 2 hours on a modest desktop computer. The impact is upholding data integrity in crystallography because the PDD will not allow anyone to claim a 'new' material as a noisy disguise of a known crystal.
https://arxiv.org/abs/2108.04798
Academic Papers
svg
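For a finite point set, the PDD can be sketched in a few lines (the paper's periodic-set version, with row weights and collapsing of equal rows, is more involved); sorting the rows lexicographically makes the result independent of point ordering.

```python
import numpy as np

def pdd(points, k):
    """Pointwise Distance Distribution sketch for a finite set: for each
    point, its sorted distances to the k nearest other points; rows are
    then sorted lexicographically to remove dependence on point order."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each point's self-distance
    rows = np.sort(d, axis=1)[:, :k]
    return rows[np.lexsort(rows.T[::-1])]

rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))

# Isometry invariance: a rotation plus a relabelling leaves the PDD unchanged.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal map
moved = (pts @ q)[rng.permutation(8)]
print(np.allclose(pdd(pts, 4), pdd(moved, 4)))  # True
```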
2f1674d31488aac08d508f11768a8d212ed3f378905c7f5a1ad8857d986c60b6
2026-01-01T00:00:00-05:00
To ArXiv or not to ArXiv: A Study Quantifying Pros and Cons of Posting Preprints Online
arXiv:2203.17259v4 Announce Type: replace Abstract: Double-blind conferences have engaged in debates over whether to allow authors to post their papers online on arXiv or elsewhere during the review process. Independently, some authors of research papers face the dilemma of whether to put their papers on arXiv due to its pros and cons. We conduct a study to substantiate this debate and dilemma via quantitative measurements. Specifically, we conducted surveys of reviewers in two top-tier double-blind computer science conferences -- ICML 2021 (5361 submissions and 4699 reviewers) and EC 2021 (498 submissions and 190 reviewers). Our three main findings are as follows. First, more than a third of the reviewers self-report searching online for a paper they are assigned to review. Second, conference policies restricting authors from publicising their work on social media or posting preprints before the review process may have only limited effectiveness in maintaining anonymity. Third, outside the review process, we find that preprints from better-ranked institutions experience a very small increase in visibility compared to preprints from other institutions.
https://arxiv.org/abs/2203.17259
Academic Papers
svg
71d817e54b4e592a4ad2a3dc236b20f0b43adb65ddbd0af8335ed49724bb965a
2026-01-01T00:00:00-05:00
A Nonparametric Framework for Online Stochastic Matching with Correlated Arrivals
arXiv:2208.02229v5 Announce Type: replace Abstract: The design of online algorithms for matching markets and revenue management settings is usually bound by the assumption that the demand process is formed by a fixed-length sequence of queries with unknown types, each drawn independently. This notion of serial independence implies that the demand of each type, i.e., the number of queries of a given type, has low variance and is approximately Poisson-distributed. This paper proposes a nonparametric framework for modeling arrival sequences in online stochastic matching that departs from the serial-independence assumption. We propose two models, INDEP and CORREL, that capture different forms of serial correlations by combining a nonparametric distribution for the demand with standard assumptions on the arrival patterns -- adversarial or random order. The INDEP model can capture arbitrary serial correlations within each customer type but assumes cross-sectional independence across types, whereas the CORREL model captures common shocks across customer types. We demonstrate that fluid relaxations, which rely solely on demand expectations, have arbitrarily bad performance guarantees. In contrast, we develop new algorithms that achieve optimal (constant-factor) performance guarantees in each model. Our mathematical analysis includes tighter linear programming (LP) relaxations that leverage distribution knowledge, and a new lossless randomized LP rounding scheme for INDEP. We test our new LP relaxations and rounding scheme in simulations on real and synthetic data, and find that they consistently outperform well-established matching algorithms, especially on real data sequences that exhibit greater demand variance.
https://arxiv.org/abs/2208.02229
Academic Papers
svg
3a40d0eb21ad4d0848573827c3d959bb8f8a4d41385e5c907b61bcd8dc432f7f
2026-01-01T00:00:00-05:00
Active Learning with Neural Networks: Insights from Nonparametric Statistics
arXiv:2210.08367v2 Announce Type: replace Abstract: Deep neural networks have great representation power, but typically require large numbers of training examples. This motivates deep active learning methods that can significantly reduce the amount of labeled training data. Empirical successes of deep active learning have been recently reported in the literature, however, rigorous label complexity guarantees of deep active learning have remained elusive. This constitutes a significant gap between theory and practice. This paper tackles this gap by providing the first near-optimal label complexity guarantees for deep active learning. The key insight is to study deep active learning from the nonparametric classification perspective. Under standard low noise conditions, we show that active learning with neural networks can provably achieve the minimax label complexity, up to disagreement coefficient and other logarithmic terms. When equipped with an abstention option, we further develop an efficient deep active learning algorithm that achieves $\mathsf{polylog}(\frac{1}{\epsilon})$ label complexity, without any low noise assumptions. We also provide extensions of our results beyond the commonly studied Sobolev/H\"older spaces and develop label complexity guarantees for learning in Radon $\mathsf{BV}^2$ spaces, which have recently been proposed as natural function spaces associated with neural networks.
https://arxiv.org/abs/2210.08367
Academic Papers
svg
007b0ed9a809c32fbbc460ce3a761f34893467fea0c87f5070e2758a519f6423
2026-01-01T00:00:00-05:00
The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
arXiv:2302.01186v4 Announce Type: replace Abstract: We propose $\textsf{ScaledGD($\lambda$)}$, a preconditioned gradient descent method to tackle the low-rank matrix sensing problem when the true rank is unknown, and when the matrix is possibly ill-conditioned. Using overparameterized factor representations, $\textsf{ScaledGD($\lambda$)}$ starts from a small random initialization, and proceeds by gradient descent with a specific form of damped preconditioning to combat bad curvatures induced by overparameterization and ill-conditioning. At the expense of light computational overhead incurred by preconditioners, $\textsf{ScaledGD($\lambda$)}$ is remarkably robust to ill-conditioning compared to vanilla gradient descent ($\textsf{GD}$) even with overparameterization. Specifically, we show that, under the Gaussian design, $\textsf{ScaledGD($\lambda$)}$ converges to the true low-rank matrix at a constant linear rate after a small number of iterations that scales only logarithmically with respect to the condition number and the problem dimension. This significantly improves over the convergence rate of vanilla $\textsf{GD}$ which suffers from a polynomial dependency on the condition number. Our work provides evidence on the power of preconditioning in accelerating the convergence without hurting generalization in overparameterized learning.
https://arxiv.org/abs/2302.01186
Academic Papers
svg
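A toy version of the damped, preconditioned update can be sketched with full observations standing in for random sensing measurements; the step size, damping, and sizes below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 30, 2, 4                       # true rank r, overparameterized to k
U = rng.normal(size=(n, r))
M = U @ U.T
M /= np.linalg.norm(M, 2)                # normalize spectral norm to 1

X = 1e-3 * rng.normal(size=(n, k))       # small random initialization
eta, lam = 0.1, 1e-3                     # step size and damping (illustrative)
for _ in range(500):
    G = (X @ X.T - M) @ X                             # grad of 0.25*||XX^T - M||_F^2
    P = np.linalg.inv(X.T @ X + lam * np.eye(k))      # damped preconditioner
    X = X - eta * G @ P

err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(f"relative recovery error: {err:.1e}")
```

The right preconditioner $(X^\top X + \lambda I)^{-1}$ makes the step size roughly scale-invariant across modes, which is the mechanism behind the method's robustness to ill-conditioning.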
e74ce723cea6c774a4caac67f1cfc51153898c7b25e1a511887759bae542be84
2026-01-01T00:00:00-05:00
Maximum Independent Set when excluding an induced minor: $K_1 + tK_2$ and $tC_3 \uplus C_4$
arXiv:2302.08182v2 Announce Type: replace Abstract: Dallard, Milani\v{c}, and \v{S}torgel [arXiv '22] ask if for every class excluding a fixed planar graph $H$ as an induced minor, Maximum Independent Set can be solved in polynomial time, and show that this is indeed the case when $H$ is any planar complete bipartite graph, or the 5-vertex clique minus one edge, or minus two disjoint edges. A positive answer would constitute a far-reaching generalization of the state-of-the-art, when we currently do not know if a polynomial-time algorithm exists when $H$ is the 7-vertex path. Relaxing tractability to the existence of a quasipolynomial-time algorithm, we know substantially more. Indeed, quasipolynomial-time algorithms were recently obtained for the $t$-vertex cycle, $C_t$ [Gartland et al., STOC '21] and the disjoint union of $t$ triangles, $tC_3$ [Bonamy et al., SODA '23]. We give, for every integer $t$, a polynomial-time algorithm running in $n^{O(t^5)}$ when $H$ is the friendship graph $K_1 + tK_2$ ($t$ disjoint edges plus a vertex fully adjacent to them), and a quasipolynomial-time algorithm running in $n^{O(t^2 \log n)+f(t)}$, with $f$ a single-exponential function, when $H$ is $tC_3 \uplus C_4$ (the disjoint union of $t$ triangles and a 4-vertex cycle). The former extends a classical result on graphs excluding $tK_2$ as an induced subgraph [Alekseev, DAM '07], while the latter extends Bonamy et al.'s result.
https://arxiv.org/abs/2302.08182
Academic Papers
svg
7338b01ebb6a379701770f941236bf6b5598087381706e5d6d53cbfc744e436d
2026-01-01T00:00:00-05:00
SymX: Energy-based Simulation from Symbolic Expressions
arXiv:2303.02156v2 Announce Type: replace Abstract: Optimization time integrators are effective at solving complex multi-physics problems including deformable solids with non-linear material models, contact with friction, strain limiting, etc. For challenging problems, Newton-type optimizers are often used, which necessitates first- and second-order derivatives of the global non-linear objective function. Manually differentiating, implementing, testing, optimizing, and maintaining the resulting code is extremely time-consuming and error-prone, and precludes quick changes to the model, even when using tools that assist with parts of such a pipeline. We present SymX, an open source framework that computes the required derivatives of the different energy contributions by symbolic differentiation, generates optimized code, compiles it on-the-fly, and performs the global assembly. The user only has to provide the symbolic expression of each energy for a single representative element in its corresponding discretization and our system will determine the assembled derivatives for the whole simulation. We demonstrate the versatility of SymX in complex simulations featuring different non-linear materials, high-order finite elements, rigid body systems, adaptive discretizations, frictional contact, and coupling of multiple interacting physical systems. SymX's derivatives offer performance on par with SymPy, an established off-the-shelf symbolic engine, and SymX produces simulations at least one order of magnitude faster than TinyAD, an alternative state-of-the-art integral solution.
https://arxiv.org/abs/2303.02156
Academic Papers
svg
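The symbolic-differentiation-plus-compilation workflow can be miniaturised with SymPy, the engine the abstract benchmarks against: write one element's energy symbolically, derive its gradient and Hessian automatically, and compile numeric callables. The 1D spring energy below is a made-up stand-in for a real material model.

```python
import sympy as sp

# One representative element's energy, written once symbolically:
# a quadratic spring in 1D with rest length 1 (illustrative toy energy).
x0, x1 = sp.symbols("x0 x1")
length = sp.sqrt((x1 - x0) ** 2)                # current spring length
energy = sp.Rational(1, 2) * (length - 1) ** 2

dofs = [x0, x1]
grad = [sp.diff(energy, v) for v in dofs]       # first-order derivatives
hess = sp.hessian(energy, dofs)                 # second-order derivatives

f = sp.lambdify(dofs, energy, "math")           # compile to fast callables
g = sp.lambdify(dofs, grad, "math")
print(f(0.0, 2.0))   # 0.5: stretched by 1 beyond rest length
print(g(0.0, 2.0))   # [-1.0, 1.0]
```

`lambdify` is a lightweight analogue of the on-the-fly code generation described in the abstract; a Newton solver would assemble `grad` and `hess` over all elements.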
d510c6b26e7495a16b12697c87e13f622d15f8df4f7681a01e70d76a9d62a86d
2026-01-01T00:00:00-05:00
CascadeNS: Confidence-Cascaded Neurosymbolic Model for Sarcasm Detection
arXiv:2304.01424v2 Announce Type: replace Abstract: Sarcasm detection in product reviews requires balancing domain-specific symbolic pattern recognition with deep semantic understanding. Symbolic representations capture explicit linguistic phenomena that are often decisive for sarcasm detection. Existing work either favors interpretable symbolic representation or semantic neural modeling, but rarely achieves both effectively. Prior hybrid methods typically combine these paradigms through feature fusion or ensembling, which can degrade performance. We propose CascadeNS, a confidence-calibrated neurosymbolic architecture that integrates symbolic and neural reasoning through selective activation rather than fusion. A symbolic semigraph handles pattern-rich instances with high confidence, while semantically ambiguous cases are delegated to a neural module based on pre-trained LLM embeddings. At the core of CascadeNS is a calibrated confidence measure derived from polarity-weighted semigraph scores. This measure reliably determines when symbolic reasoning is sufficient and when neural analysis is needed. Experiments on product reviews show that CascadeNS outperforms the strong baselines by 7.44%.
https://arxiv.org/abs/2304.01424
Academic Papers
svg
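The selective-activation idea can be sketched as a confidence cascade with hypothetical stand-in scorers; the cue words, confidence formula, and threshold below are invented for illustration, not the paper's semigraph or LLM-embedding components.

```python
def cascade_predict(text, symbolic, neural, threshold=0.8):
    """Route to the cheap symbolic scorer when its confidence clears the
    threshold; otherwise delegate to the heavier neural module."""
    label, confidence = symbolic(text)
    if confidence >= threshold:
        return label, "symbolic"
    return neural(text), "neural"

def toy_symbolic(text):
    # Hypothetical cue-word rules with a made-up confidence score.
    cues = sum(w in text.lower() for w in ("yeah right", "oh great", "as if"))
    return (cues > 0), min(1.0, 0.5 + 0.4 * cues)

toy_neural = lambda text: "!" in text   # placeholder for an embedding model

print(cascade_predict("Oh great, another update...", toy_symbolic, toy_neural))
print(cascade_predict("The battery lasts two days.", toy_symbolic, toy_neural))
```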
eaa4aa90527d265ce4e779717d5661f28e8631d085a2061b802a1453ada926a4
2026-01-01T00:00:00-05:00
HiGen: Hierarchical Graph Generative Networks
arXiv:2305.19337v3 Announce Type: replace Abstract: Most real-world graphs exhibit a hierarchical structure, which is often overlooked by existing graph generation methods. To address this limitation, we propose a novel graph generative network that captures the hierarchical nature of graphs and successively generates the graph sub-structures in a coarse-to-fine fashion. At each level of hierarchy, this model generates communities in parallel, followed by the prediction of cross-edges between communities using separate neural networks. This modular approach enables scalable graph generation for large and complex graphs. Moreover, we model the output distribution of edges in the hierarchical graph with a multinomial distribution and derive a recursive factorization for this distribution. This enables us to generate community graphs with integer-valued edge weights in an autoregressive manner. Empirical studies demonstrate the effectiveness and scalability of our proposed generative model, achieving state-of-the-art performance in terms of graph quality across various benchmark datasets. The code is available at https://github.com/Karami-m/HiGen_main.
https://arxiv.org/abs/2305.19337
Academic Papers
svg
80319e6b7082641310257fe8816b8f0a9564a09ab1203445d505b5edaf6f3708
2026-01-01T00:00:00-05:00
HIDFlowNet: A Flow-Based Deep Network for Hyperspectral Image Denoising
arXiv:2306.17797v2 Announce Type: replace Abstract: Hyperspectral image (HSI) denoising is essentially ill-posed since a noisy HSI can be degraded from multiple clean HSIs. However, existing deep learning (DL)-based approaches only restore one clean HSI from the given noisy HSI with a deterministic mapping, thus ignoring the ill-posed issue and always resulting in an over-smoothing problem. Additionally, these DL-based methods often neglect that noise is part of the high-frequency component and their network architectures fail to decouple the learning of low-frequency and high-frequency components. To alleviate these issues, this paper proposes a flow-based HSI denoising network (HIDFlowNet) to directly learn the conditional distribution of the clean HSI given the noisy HSI and thus diverse clean HSIs can be sampled from the conditional distribution. Overall, our HIDFlowNet is induced from the generative flow model and comprises an invertible decoder and a conditional encoder, which can explicitly decouple the learning of low-frequency and high-frequency information of HSI. Specifically, the invertible decoder is built by stacking a succession of invertible conditional blocks (ICBs) to capture the local high-frequency details. The conditional encoder utilizes down-sampling operations to obtain low-resolution images and uses transformers to capture correlations over a long distance so that global low-frequency information can be effectively extracted. Extensive experiments on simulated and real HSI datasets verify that our proposed HIDFlowNet can obtain better or comparable results compared with other state-of-the-art methods.
https://arxiv.org/abs/2306.17797
Academic Papers
svg
5af93af60f6ddbbc809762d2d7240817cca2b004ef62c56bda650f2bb47c86d1
2026-01-01T00:00:00-05:00
An analysis on stochastic Lanczos quadrature with asymmetric quadrature nodes
arXiv:2307.00847v3 Announce Type: replace Abstract: The stochastic Lanczos quadrature method has garnered significant attention recently. Upon examination of the error analyses given by Ubaru, Chen and Saad and by Cortinovis and Kressner, certain notable inconsistencies arise. It turns out that the former's results are valid for cases with symmetric quadrature nodes and may not be adequate for many practical cases such as estimating the log determinant of matrices. This paper analyzes the probabilistic error bound of the stochastic Lanczos quadrature method for cases with asymmetric quadrature nodes. In addition, an optimized error allocation technique is employed to minimize the overall number of matrix-vector multiplications required by the stochastic Lanczos quadrature method.
https://arxiv.org/abs/2307.00847
Academic Papers
svg
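As a rough illustration of the general technique discussed above (estimating spectral sums such as the log determinant via stochastic Lanczos quadrature) — and not the paper's error-allocation scheme or breakdown handling — a minimal NumPy sketch, with function names and parameter defaults of our own choosing:

```python
import numpy as np

def lanczos(A, v, m):
    """Plain Lanczos tridiagonalization (no reorthogonalization or
    breakdown handling): returns diagonal alpha and off-diagonal beta."""
    n = len(v)
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    for k in range(m):
        w = A @ q
        alpha[k] = q @ w
        w = w - alpha[k] * q
        if k > 0:
            w = w - beta[k - 1] * q_prev
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            q_prev, q = q, w / beta[k]
    return alpha, beta

def slq_logdet(A, m=30, n_vectors=20, rng=None):
    """Estimate log det(A) = tr(log A) for SPD A via stochastic
    Lanczos quadrature with Rademacher probe vectors."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    est = 0.0
    for _ in range(n_vectors):
        v = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe
        alpha, beta = lanczos(A, v, m)
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        theta, U = np.linalg.eigh(T)             # quadrature nodes
        # tau_k^2 = squared first components of T's eigenvectors
        est += n * np.sum(U[0, :] ** 2 * np.log(theta))
    return est / n_vectors
```

Accuracy improves with more probe vectors (lower variance) and more Lanczos steps (lower quadrature bias); how to split a fixed budget between the two is exactly the kind of allocation question the paper's analysis addresses.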
3863fd71950427b77b6f23ebfe15ec2909fd201a77079f53b65618abd0e5873b
2026-01-01T00:00:00-05:00
Almost perfect nonlinear power functions with exponents expressed as fractions
arXiv:2307.15657v2 Announce Type: replace Abstract: Let $F$ be a finite field, let $f$ be a function from $F$ to $F$, and let $a$ be a nonzero element of $F$. The discrete derivative of $f$ in direction $a$ is $\Delta_a f \colon F \to F$ with $(\Delta_a f)(x)=f(x+a)-f(x)$. The differential spectrum of $f$ is the multiset of cardinalities of all the fibers of all the derivatives $\Delta_a f$ as $a$ runs through $F^*$. An almost perfect nonlinear (APN) function is one for which the largest cardinality in its differential spectrum is $2$. Almost perfect nonlinear functions are of interest as cryptographic primitives. If $d$ is a positive integer, then the power function over $F$ with exponent $d$ is the function $f \colon F \to F$ with $f(x)=x^d$ for every $x \in F$. There is a small number of known infinite families of APN power functions. In this paper, we re-express the exponents for one such family in a more convenient form. This enables us not only to obtain the differential spectrum of each power function $f$ with an exponent in our family, but also to determine the elements that lie in an arbitrary fiber of the discrete derivative of $f$. This differential analysis, which is far more detailed than previous results, is achieved by composing the discrete derivative of $f$ with some permutations and a double covering of its domain to obtain a function whose fibers can more readily be analyzed.
https://arxiv.org/abs/2307.15657
Academic Papers
svg
b5c7ee77ffcbbf9baef69971a62d467325d6768f614f4e887f7a224625d371ec
2026-01-01T00:00:00-05:00
Content-based Recommendation Engine for Video Streaming Platform
arXiv:2308.08406v2 Announce Type: replace Abstract: Recommendation engines suggest content, products, or services to the user by using machine learning algorithms. This paper proposes a content-based recommendation engine that provides personalized video suggestions based on users' previous interactions and preferences. The engine uses TF-IDF (Term Frequency-Inverse Document Frequency) text vectorization technique to evaluate the relevance of words in video descriptions, followed by the computation of cosine similarity between content items to determine their degree of similarity. The system's performance is evaluated using precision, recall, and F1-score metrics. Experimental results demonstrate the effectiveness of content-based filtering in delivering relevant and personalized video recommendations to users. This approach can enhance user engagement on video streaming platforms and reduce search time, providing a more intuitive, preference-based viewing experience.
https://arxiv.org/abs/2308.08406
Academic Papers
svg
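The pipeline the abstract above describes — TF-IDF vectorization of video descriptions followed by cosine similarity between items — is a standard recipe; a minimal dependency-free sketch (function names are our own, not the paper's code) might look like:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for a list of text descriptions."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query_idx, docs, k=2):
    """Return indices of the k items most similar to docs[query_idx]."""
    vecs = tfidf_vectors(docs)
    scores = [(cosine(vecs[query_idx], v), i)
              for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

For example, given descriptions for a Mars documentary, a Mars mission film, and two cooking shows, querying the first documentary would rank the other Mars title highest, since it shares the discriminative TF-IDF terms.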
b63a49ef766c0e4baf28eb825a53840e3a4714a3d8a29479ed6dbeaf0d861e98
2026-01-01T00:00:00-05:00
Enumeration and updates for conjunctive linear algebra queries through expressibility
arXiv:2310.04118v5 Announce Type: replace Abstract: Due to the importance of linear algebra and matrix operations in data analytics, there is significant interest in using relational query optimization and processing techniques for evaluating (sparse) linear algebra programs. In particular, in recent years close connections have been established between linear algebra programs and relational algebra that allow transferring optimization techniques of the latter to the former. In this paper, we ask ourselves which linear algebra programs in MATLANG correspond to the free-connex and q-hierarchical fragments of conjunctive first-order logic. Both fragments have desirable query processing properties: free-connex conjunctive queries support constant-delay enumeration after a linear-time preprocessing phase, and q-hierarchical conjunctive queries further allow constant-time updates. By characterizing the corresponding fragments of MATLANG, we hence identify the fragments of linear algebra programs that one can evaluate with constant-delay enumeration after linear-time preprocessing and with constant-time updates. To derive our results, we improve and generalize previous correspondences between MATLANG and relational algebra evaluated over semiring-annotated relations. In addition, we identify properties on semirings that allow to generalize the complexity bounds for free-connex and q-hierarchical conjunctive queries from Boolean annotations to general semirings.
https://arxiv.org/abs/2310.04118
Academic Papers
svg
70569819d161e9e4a9530b4915aee42e078cc87b97fa37660333973ed1a08652
2026-01-01T00:00:00-05:00
Multi-fidelity Bayesian Optimization: A Review
arXiv:2311.13050v3 Announce Type: replace Abstract: Residing at the intersection of multi-fidelity optimization (MFO) and Bayesian optimization (BO), MF BO has found a niche in solving expensive engineering design optimization problems, thanks to its advantages in incorporating physical and mathematical understandings of the problems, saving resources, addressing the exploitation-exploration trade-off, considering uncertainty, and supporting parallel computing. The increasing number of works dedicated to MF BO suggests the need for a comprehensive review of this advanced optimization technique. In this paper, we survey recent developments of two essential ingredients of MF BO: Gaussian process (GP) based MF surrogates and acquisition functions. We first categorize the existing MF modeling methods and MFO strategies to locate MF BO in a large family of surrogate-based optimization and MFO algorithms. We then exploit the common properties shared between the methods from each ingredient of MF BO to describe important GP-based MF surrogate models and review various acquisition functions. By doing so, we expect to provide a structured understanding of MF BO. Finally, we attempt to reveal important aspects that require further research for applications of MF BO in solving intricate yet important design optimization problems, including constrained optimization, high-dimensional optimization, optimization under uncertainty, and multi-objective optimization.
https://arxiv.org/abs/2311.13050
Academic Papers
svg
cb67c812ece0f353503ed6058954f1a4f55761943346259cf138930cb396dbcd
2026-01-01T00:00:00-05:00
Ricci-Notation Tensor Framework for Model-based Approaches to Imaging
arXiv:2312.04018v4 Announce Type: replace Abstract: Model-based approaches to imaging, like specialized image enhancements in astronomy, facilitate explanations of relationships between observed inputs and computed outputs. These models may be expressed with extended matrix-vector (EMV) algebra, especially when they involve only scalars, vectors, and matrices, and with n-mode or index notations, when they involve multidimensional arrays, also called numeric tensors or, simply, tensors. While this paper features an example, inspired by exoplanet imaging, that employs tensors to reveal (inverse) 2D fast Fourier transforms in an image enhancement model, the work is actually about the tensor algebra and software, or tensor frameworks, available for model-based imaging. The paper proposes a Ricci-notation tensor (RT) framework, comprising a dual-variant index notation, with Einstein summation convention, and codesigned object-oriented software, called the RTToolbox for MATLAB. Extensions to Ricci notation offer novel representations for entrywise, pagewise, and broadcasting operations popular in EMV frameworks for imaging. Complementing the EMV algebra computable with MATLAB, the RTToolbox demonstrates programmatic and computational efficiency via careful design of numeric tensor and dual-variant index classes. Compared to its closest competitor, also a numeric tensor framework that uses index notation, the RT framework enables superior ways to model imaging problems and, thereby, to develop solutions.
https://arxiv.org/abs/2312.04018
Academic Papers
svg
68842da6becaf25f9047d33314939c2085cec9e36ded5567ff501d508766e848
2026-01-01T00:00:00-05:00
ODIN: Object Density Aware Index for CkNN Queries over Moving Objects on Road Networks
arXiv:2312.12688v2 Announce Type: replace Abstract: We study the problem of processing continuous k nearest neighbor (CkNN) queries over moving objects on road networks, which is an essential operation in a variety of applications. We are particularly concerned with scenarios where the object densities in different parts of the road network evolve over time as the objects move. Existing methods on CkNN query processing are ill-suited for such scenarios as they utilize index structures with fixed granularities and are thus unable to keep up with the evolving object densities. In this paper, we directly address this problem and propose an object density aware index structure called ODIN that is an elastic tree built on a hierarchical partitioning of the road network. It is equipped with the unique capability of dynamically folding/unfolding its nodes, thereby adapting to varying object densities. We further present the ODIN-KNN-Init and ODIN-KNN-Inc algorithms for the initial identification of the kNNs and the incremental update of query result as objects move. Thorough experiments on both real and synthetic datasets confirm the superiority of our proposal over several baseline methods.
https://arxiv.org/abs/2312.12688
Academic Papers
svg
ebc4f40dda575b614a41a07ec1814fc11f52d0ab0da6154e8fcfc92aabf61789
2026-01-01T00:00:00-05:00
Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling
arXiv:2402.18508v3 Announce Type: replace Abstract: In the rapidly evolving field of deep learning, the demand for models that are both expressive and computationally efficient has never been more critical. This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms without compromising the ability to capture long-range dependencies and in-context learning. At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on input sequence using a dedicated conditioning neural network. We design two simple conditioning networks that maintain shift equivariance in our data-dependent convolution operation. The dynamic nature of the proposed convolution kernel grants Orchid high expressivity while maintaining quasilinear scalability for long sequences. We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality. Our experiments demonstrate that this architecture not only outperforms traditional attention-based architectures such as BERT and Vision Transformers with smaller model sizes, but also extends the feasible sequence length beyond the limitations of the dense attention layers. This achievement represents a significant step towards more efficient and scalable deep learning models for sequence modeling. The code is available at https://github.com/Karami-m/orchid.
https://arxiv.org/abs/2402.18508
Academic Papers
svg
be53b8136f865ddacdf39e0ada4d3fa7acff6b4025898ea2967cf8563668c87f
2026-01-01T00:00:00-05:00
Subsequence Matching and LCS under Cartesian-Tree Equivalence
arXiv:2402.19146v4 Announce Type: replace Abstract: Two strings of the same length are said to Cartesian-tree match (CT-match) if their Cartesian-trees are isomorphic [Park et al., TCS 2020]. Cartesian-tree matching is a natural model that allows for capturing similarities of numerical sequences. Oizumi et al. [CPM 2022] showed that subsequence pattern matching under CT-matching model (CT-MSeq) can be solved in $O(nm \log \log n)$ time, where $n$ and $m$ are text and pattern lengths, respectively. This current article follows this line of research, and gives the following new results: (1) An $O(nm)$-time CT-MSeq algorithm for binary alphabets; (2) An $O((nm)^{1-\epsilon})$-time conditional lower bound for the CT-MSeq problem on alphabets of size 4, for any constant $\epsilon > 0$, under the Orthogonal Vector Hypothesis (OVH). Further, we introduce the new problem of longest common subsequence under CT-matching (CT-LCS) for two given strings $S$ and $T$ of length $n$, and present the following results: (3) An $O(n^6)$-time CT-LCS algorithm for general ordered alphabets; (4) An $O(n^2 / \log n)$-time CT-LCS algorithm for binary alphabets; (5) An $O(n^{2-\epsilon})$-time conditional lower bound for the CT-LCS problem on alphabets of size 5, for any constant $\epsilon > 0$, under OVH.
https://arxiv.org/abs/2402.19146
Academic Papers
svg
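A common linear-time way to test the Cartesian-tree matching model that the abstract above builds on is the parent-distance encoding: two sequences CT-match exactly when their encodings coincide. A small Python sketch of that check — an illustration of the matching model itself, not of the paper's subsequence or LCS algorithms:

```python
def parent_dist(seq):
    """Parent-distance encoding of a sequence: for each position, the
    distance back to the nearest previous element that is <= the current
    one (0 if none exists). Two sequences Cartesian-tree match iff their
    encodings are equal."""
    stack, out = [], []
    for i, x in enumerate(seq):
        # Pop strictly larger elements; the survivor is the CT parent side.
        while stack and seq[stack[-1]] > x:
            stack.pop()
        out.append(i - stack[-1] if stack else 0)
        stack.append(i)
    return out

def ct_match(a, b):
    """True iff a and b have isomorphic Cartesian trees."""
    return len(a) == len(b) and parent_dist(a) == parent_dist(b)
```

Because the encoding depends only on order relations, numeric sequences with very different magnitudes can still CT-match, which is what makes the model useful for comparing the shapes of numerical sequences.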
5497db96e6e473f2a2a8e867cde0be7f60c2fc8ad4407c952072722c6dad2bab
2026-01-01T00:00:00-05:00
Structuring Concept Space with the Musical Circle of Fifths by Utilizing Music Grammar Based Activations
arXiv:2403.00790v4 Announce Type: replace Abstract: We propose a neural coding framework, harmonic toroidal codes, in which abstract cognitive operations are implemented through dynamical activity on manifolds derived from music-theoretic structures.
https://arxiv.org/abs/2403.00790
Academic Papers
svg
430beec081b82f5b40f5f0a40e2254db3dc44fa8054f50d499654b941777d630
2026-01-01T00:00:00-05:00
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons
arXiv:2403.07688v2 Announce Type: replace Abstract: When training neural networks, dying neurons -- units becoming inactive or saturated -- are traditionally seen as harmful. This paper sheds new light on this phenomenon. By exploring the impact of various hyperparameter configurations on dying neurons during training, we gather insights on how to improve upon sparse training approaches to pruning. We introduce Demon Pruning (DemP), a method that controls the proliferation of dead neurons through a combination of noise injection on active units and a one-cycle schedule regularization strategy, dynamically leading to network sparsity. Experiments on CIFAR-10 and ImageNet datasets demonstrate that DemP outperforms existing dense-to-sparse structured pruning methods, achieving better accuracy-sparsity tradeoffs and accelerating training by up to 3.56$\times$. These findings provide a novel perspective on dying neurons as a resource for efficient model compression and optimization.
https://arxiv.org/abs/2403.07688
Academic Papers
svg
1f3c978526061083dfad227dad0ae4d405d7df2c6960cb7e16a498d76cd51750
2026-01-01T00:00:00-05:00
Matching Semantically Similar Non-Identical Objects
arXiv:2403.08227v4 Announce Type: replace Abstract: Not identical but similar objects are ubiquitous in our world, ranging from four-legged animals such as dogs and cats to cars of different models and flowers of various colors. This study addresses a novel task of matching such non-identical objects at the pixel level. We propose a weighting scheme of descriptors, Semantic Enhancement Weighting (SEW), that incorporates semantic information from object detectors into existing sparse feature matching methods, extending their targets from identical objects captured from different perspectives to semantically similar objects. The experiments show successful matching between non-identical objects in various cases, including in-class design variations, class discrepancy, and domain shifts (e.g., photo vs. drawing and image corruptions). The code is available at https://github.com/Circ-Leaf/NIOM .
https://arxiv.org/abs/2403.08227
Academic Papers
svg
a5a2c9b93171d89d78c0cd1a4a2d381680f20ba03d28a0834910f30ae0f9dabf
2026-01-01T00:00:00-05:00
Reconstructing Hand-Held Objects in 3D from Images and Videos
arXiv:2404.06507v4 Announce Type: replace Abstract: Objects manipulated by the hand (i.e., manipulanda) are particularly challenging to reconstruct from Internet videos. Not only does the hand occlude much of the object, but also the object is often only visible in a small number of image pixels. At the same time, two strong anchors emerge in this setting: (1) estimated 3D hands help disambiguate the location and scale of the object, and (2) the set of manipulanda is small relative to all possible objects. With these insights in mind, we present a scalable paradigm for hand-held object reconstruction that builds on recent breakthroughs in large language/vision models and 3D object datasets. Given a monocular RGB video, we aim to reconstruct hand-held object geometry in 3D, over time. In order to obtain the best performing single frame model, we first present MCC-Hand-Object (MCC-HO), which jointly reconstructs hand and object geometry given a single RGB image and inferred 3D hand as inputs. Subsequently, we prompt a text-to-3D generative model using GPT-4(V) to retrieve a 3D object model that matches the object in the image(s); we call this alignment Retrieval-Augmented Reconstruction (RAR). RAR provides unified object geometry across all frames, and the result is rigidly aligned with both the input images and 3D MCC-HO observations in a temporally consistent manner. Experiments demonstrate that our approach achieves state-of-the-art performance on lab and Internet image/video datasets. We make our code and models available on the project website: https://janehwu.github.io/mcc-ho
https://arxiv.org/abs/2404.06507
Academic Papers
svg
fdb1e08749c00f2eb45cf0d20fef12fa974c10587d01982dd14a3b0eccf73ce0
2026-01-01T00:00:00-05:00
FEDSTR: Money-In AI-Out | A Decentralized Marketplace for Federated Learning and LLM Training on the NOSTR Protocol
arXiv:2404.15834v2 Announce Type: replace Abstract: NOSTR is a communication protocol for the social web, based on the W3C WebSockets standard. Although it is still in its infancy, it is well known as a social media protocol, with thousands of trusted users and multiple user interfaces, offering a unique experience and enormous capabilities. To name a few, the NOSTR applications include but are not limited to direct messaging, file sharing, audio/video streaming, collaborative writing, blogging and data processing through distributed AI directories. In this work, we propose an approach that builds upon the existing protocol structure with the end goal of a decentralized marketplace for federated learning and LLM training. In this proposed design there are two parties: on one side there are customers who provide a dataset that they want to use for training an AI model. On the other side, there are service providers, who receive (parts of) the dataset, train the AI model, and, in exchange for a payment, return the optimized AI model. To demonstrate viability, we present a proof-of-concept implementation over public NOSTR relays. The decentralized and censorship-resistant features of NOSTR enable the possibility of designing a fair and open marketplace for training AI models and LLMs.
https://arxiv.org/abs/2404.15834
Academic Papers
svg
87d6317d88f7ce8d615b05b164a61795d6e6f3857ec9b48ef7d17f06c431950e
2026-01-01T00:00:00-05:00
Myopically Verifiable Probabilistic Certificates for Safe Control and Learning
arXiv:2404.16883v2 Announce Type: replace Abstract: This paper addresses the design of safety certificates for stochastic systems, with a focus on ensuring long-term safety through fast real-time control. In stochastic environments, set invariance-based methods that restrict the probability of risk events in infinitesimal time intervals may exhibit significant long-term risks due to cumulative uncertainties/risks. On the other hand, reachability-based approaches that account for the long-term future may require prohibitive computation in real-time decision making. To overcome this challenge involving stringent long-term safety vs. computation tradeoffs, we first introduce a novel technique termed `probabilistic invariance'. This technique characterizes the invariance conditions of the probability of interest. When the target probability is defined using long-term trajectories, this technique can be used to design myopic conditions/controllers with assured long-term safe probability. Then, we integrate this technique into safe control and learning. The proposed control methods efficiently assure long-term safety using neural networks or model predictive controllers with short outlook horizons. The proposed learning methods can be used to guarantee long-term safety during and after training. Finally, we demonstrate the performance of the proposed techniques in numerical simulations.
https://arxiv.org/abs/2404.16883
Academic Papers
svg
f246f5ffceaf8297df10a207e859940823572df0b7aac892692a622aa286e2bb
2026-01-01T00:00:00-05:00
Are Biological Systems More Intelligent Than Artificial Intelligence?
arXiv:2405.02325v5 Announce Type: replace Abstract: Are biological self-organising systems more `intelligent' than artificial intelligence (AI)? If so, why? I explore this through a mathematical lens which frames intelligence in terms of adaptability. I model systems as stacks of abstraction layers (\emph{Stack Theory}) and compare them by how they delegate agentic control down their stacks, illustrating with examples of computational, biological, human military, governmental and economic systems. Contemporary AI rests on a static, human-engineered stack in which lower layers are static during deployment. Put provocatively, static stacks resemble inflexible bureaucracies, adapting only top-down. Biological stacks are more `intelligent' because they delegate adaptation. Formally, I prove a theorem (\emph{The Law of the Stack}) showing adaptability in higher layers requires sufficient adaptability in lower layers. Generalising bio-electric explanations of cancer as isolation from collective informational structures, I explore how cancer-like failures occur in non-biological systems when delegation is inadequate. This helps explain how to build more robust systems, by delegating control like the military doctrine of mission command. It also provides a design perspective on hybrid agents (e.g. organoids, systems involving both humans and AI): hybrid creation is a boundary-condition design problem in which human-imposed constraints prune low-level policy spaces to yield desired collective behaviour while preserving collective identity.
https://arxiv.org/abs/2405.02325
Academic Papers
svg
239e33f158493986c48c594887ecbf53ee5ea8eb99106f9ea4a6ff29ac82ba5c
2026-01-01T00:00:00-05:00
Finding Diverse Solutions Parameterized by Cliquewidth
arXiv:2405.20931v2 Announce Type: replace Abstract: Finding a few solutions for a given problem that are diverse, as opposed to finding a single best solution to solve the problem, has recently become a notable topic in theoretical computer science. Recently, Baste, Fellows, Jaffke, Masa\v{r}\'ik, Oliveira, Philip, and Rosamond showed that under a standard structural parameterization by treewidth, one can find a set of diverse solutions for many problems with only a very small additional cost [Artificial Intelligence 2022]. In this paper, we investigate a much stronger graph parameter, the cliquewidth, which can additionally describe some dense graph classes. Broadly speaking, it describes graphs that can be recursively constructed by a few operations defined on graphs whose vertices are divided into a bounded number of groups while each such group behaves uniformly with respect to any operation. We show that for any vertex problem, if we are given a dynamic program solving that problem on cliquewidth decomposition, we can modify it to produce a few solutions that are as diverse as possible with as little overhead as in the above-mentioned treewidth paper. As a consequence, we prove that a diverse version of any MSO$_1$ expressible problem can be solved in linear FPT time parameterized by the cliquewidth, the number of sought solutions, and the number of quantifiers in the formula, which was a natural missing piece in the complexity landscape of structural graph parameters and logic for the diverse problems. We prove our results allowing for a more general natural collection of diversity functions compared to only two mostly studied diversity functions previously. That might be of independent interest as a larger pool of different diversity functions can highlight various aspects of different solutions to a problem.
https://arxiv.org/abs/2405.20931
Academic Papers
svg
2d4d74cf2cd311729a1aea7479fc3d39161b0d45bd3a05fb4a7f626d9dbd4d03
2026-01-01T00:00:00-05:00
Jacobian-Enhanced Neural Networks
arXiv:2406.09132v3 Announce Type: replace Abstract: Jacobian-Enhanced Neural Networks (JENN) are densely connected multi-layer perceptrons, whose training process is modified to predict partial derivatives accurately. Their main benefit is better accuracy with fewer training points compared to standard neural networks. These attributes are particularly desirable in the field of computer-aided design, where there is often the need to replace computationally expensive, physics-based models with fast running approximations, known as surrogate models or meta-models. Since a surrogate emulates the original model accurately in near-real time, it yields a speed benefit that can be used to carry out orders of magnitude more function calls quickly. However, in the special case of gradient-enhanced methods, there is the additional value proposition that partial derivatives are accurate, which is a critical property for one important use-case: surrogate-based optimization. This work derives the complete theory and exemplifies its superiority over standard neural nets for surrogate-based optimization.
https://arxiv.org/abs/2406.09132
Academic Papers
svg
871f16ddaf481ae3d32439d9c3666705f441236534233081da80abaa416b184b
2026-01-01T00:00:00-05:00
ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts
arXiv:2406.10973v5 Announce Type: replace Abstract: Parameter-efficient fine-tuning (PEFT) techniques such as low-rank adaptation (LoRA) can effectively adapt large pre-trained foundation models to downstream tasks using only a small fraction (0.1%-10%) of the original trainable weights. An under-explored question of PEFT is in extending the pre-training phase without supervised labels; that is, can we adapt a pre-trained foundation model to a new domain via efficient self-supervised pre-training on this domain? In this work, we introduce ExPLoRA, a highly effective technique to improve transfer learning of pre-trained vision transformers (ViTs) under domain shifts. Initializing a ViT with pre-trained weights on large, natural-image datasets such as from DinoV2 or MAE, ExPLoRA continues the unsupervised pre-training objective on a new domain, unfreezing 1-2 pre-trained ViT blocks and tuning all other layers with LoRA. We then fine-tune the resulting model only with LoRA on this new domain for supervised learning. Our experiments demonstrate state-of-the-art results on satellite imagery, even outperforming full pre-training and fine-tuning of ViTs. Using the DinoV2 training objective, we demonstrate up to 8% improvement in linear probing top-1 accuracy on downstream tasks while using <10% of the number of parameters that are used in prior fully-tuned state-of-the-art approaches. Our ablation studies confirm the efficacy of our approach over other baselines such as PEFT. Code is available on the project website: https://samar-khanna.github.io/ExPLoRA/
https://arxiv.org/abs/2406.10973
Academic Papers
svg
65131e988370f27c4f73c31f082693051696ad2ca5fdfaaab8634fd154e30043
2026-01-01T00:00:00-05:00
MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs
arXiv:2406.17126v2 Announce Type: replace Abstract: Spurious bias, a tendency to exploit spurious correlations between superficial input attributes and prediction targets, has revealed a severe robustness pitfall in classical machine learning problems. Multimodal Large Language Models (MLLMs), which leverage pretrained vision and language models, have recently demonstrated strong capability in joint vision-language understanding. However, both the presence and severity of spurious biases in MLLMs remain poorly understood. In this work, we address this gap by analyzing the spurious biases in the multimodal setting and uncovering the specific inference-time data patterns that can manifest this problem. To support this analysis, we introduce MM-SpuBench, a comprehensive, human-verified benchmark dataset consisting of image-class pairs annotated with core and spurious attributes, grounded in our taxonomy of nine distinct types of spurious correlations. The benchmark is constructed using human-interpretable attribute information to capture a wide range of spurious patterns reflective of real-world knowledge. Leveraging this benchmark, we conduct a comprehensive evaluation of the state-of-the-art open-source and proprietary MLLMs with both standard accuracy and the proposed Conditional Generation Likelihood Advantage (CGLA). Our findings highlight the persistence of reliance on spurious correlations and the difficulty of mitigation on our benchmark. We hope this work can inspire new technical strides to mitigate these biases. Our benchmark is publicly available at https://huggingface.co/datasets/mmbench/MM-SpuBench.
https://arxiv.org/abs/2406.17126
Academic Papers
svg
933fcdd772bf9e962f122816301de6d148cc96b44f029d9abf02dc742f2e872a
2026-01-01T00:00:00-05:00
DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models
arXiv:2407.01519v5 Announce Type: replace Abstract: We present DiffIR2VR-Zero, a zero-shot framework that enables any pre-trained image restoration diffusion model to perform high-quality video restoration without additional training. While image diffusion models have shown remarkable restoration capabilities, their direct application to video leads to temporal inconsistencies, and existing video restoration methods require extensive retraining for different degradation types. Our approach addresses these challenges through two key innovations: a hierarchical latent warping strategy that maintains consistency across both keyframes and local frames, and a hybrid token merging mechanism that adaptively combines optical flow and feature matching. Through extensive experiments, we demonstrate that our method not only maintains the high-quality restoration of base diffusion models but also achieves superior temporal consistency across diverse datasets and degradation conditions, including challenging scenarios like 8$\times$ super-resolution and severe noise. Importantly, our framework works with any image restoration diffusion model, providing a versatile solution for video enhancement without task-specific training or modifications. Project page: https://jimmycv07.github.io/DiffIR2VR_web/
https://arxiv.org/abs/2407.01519
Academic Papers
svg
86bf6237e9b91ef36baefa4d9c36afeb785faf0b45e74d42ef69247ce30ba6c4
2026-01-01T00:00:00-05:00
The Fr\'echet Distance Unleashed: Approximating a Dog with a Frog
arXiv:2407.03101v4 Announce Type: replace Abstract: We show that a variant of the continuous Frechet distance between polygonal curves can be computed using essentially the same algorithm used to solve the discrete version. The new variant is not necessarily monotone, but this shortcoming can be easily handled via refinement. Combined with a Dijkstra/Prim type algorithm, this leads to a realization of the Frechet distance (i.e., a morphing) that is locally optimal (aka locally correct), that is both easy to compute, and in practice, takes near linear time on many inputs. The new morphing has the property that the leash is always as short as possible. These matchings/morphings are more natural and are better than the ones computed by standard algorithms -- in particular, they handle noise more graciously. This approach should make the Frechet distance more useful for real-world applications. We implemented the new algorithm and various strategies to obtain reasonably fast practical performance. We performed extensive experiments on our new algorithm, and released publicly available (and easily installable and usable) Julia and Python packages. Our algorithms can be used to compute the almost-exact Frechet distance between polygonal curves. Implementations and numerous examples are available here: https://frechet.xyz. We emphasize, however, that the existing state-of-the-art algorithm/implementation in C++ is faster, by several orders of magnitude, than our current algorithm/implementation.
https://arxiv.org/abs/2407.03101
Academic Papers
svg
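The discrete Fréchet distance that the paper above builds its continuous variant on can be sketched with the standard dynamic program. This is a minimal textbook illustration, not the paper's algorithm; the function and variable names are my own:

```python
import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q (lists of points)."""
    n, m = len(P), len(Q)
    # ca[i][j] = discrete Frechet distance of prefixes P[:i+1], Q[:j+1]
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                # both "frogs" may advance on either curve or together
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), cost)
    return ca[n - 1][m - 1]
```

The quadratic table is what the paper's refinement-based variant avoids recomputing from scratch for the continuous case.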
0d5717c78f469a15ef63b585e8d77df99042a4b05fba535c539a17642add05de
2026-01-01T00:00:00-05:00
LTLBench: Towards Benchmarks for Evaluating Temporal Logic Reasoning in Large Language Models
arXiv:2407.05434v2 Announce Type: replace Abstract: Temporal Reasoning (TR) is a critical ability for LLMs to understand and reason over temporal information and relationships between events. To study the TR ability in LLMs, prior works provide different ways for evaluating various aspects of TR ability. In this work, we propose an alternative perspective for evaluating TR ability by leveraging Linear Temporal Logic (LTL), and develop a pipeline to automatically synthesize challenges for assessing the TR ability of LLMs. Based on this pipeline, we construct a dataset, namely LTLBench, consisting of $2000$ TR challenges, and benchmark 12 LLMs across 5 different methods. Furthermore, we conduct additional experiments to investigate the impact of increasing the number of formula operators and events on both LLM performance and the complexity of TR problems. We also perform qualitative analyses of their reasoning processes and the effects of varying the number of events and formula operators, which reveal 3 main issues in their temporal reasoning processes and the unexpected performance changes observed as problem complexity increases. We expect this work to provide valuable insights into the TR ability of LLMs.
https://arxiv.org/abs/2407.05434
Academic Papers
svg
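The kind of ground truth such a synthesis pipeline needs can be illustrated with a toy finite-trace LTL evaluator. The tuple encoding and operator set below are my own sketch for illustration, not the paper's generator:

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula on a finite trace (a list of sets of atoms),
    under finite-trace semantics. Formulas are nested tuples, e.g.
    ("U", ("atom", "a"), ("atom", "b")) for "a until b"."""
    op = formula[0]
    if op == "atom":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":  # next
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":  # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":  # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":  # until
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")
```

Growing the number of operators and events in such formulas is exactly the complexity knob the paper's additional experiments turn.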
e1d059837282863b065e9e7c99671086b331fbf43ca444269973258336a90788
2026-01-01T00:00:00-05:00
UnPaSt: unsupervised patient stratification by biclustering of omics data
arXiv:2408.00200v2 Announce Type: replace Abstract: Unsupervised patient stratification is essential for disease subtype discovery, yet, despite growing evidence of molecular heterogeneity of non-oncological diseases, popular methods are benchmarked primarily using cancers with mutually exclusive molecular subtypes well-differentiated by numerous biomarkers. Evaluating 22 unsupervised methods, including clustering and biclustering, using simulated and real transcriptomics data revealed their inefficiency in scenarios with non-mutually exclusive subtypes or subtypes discriminated only by few biomarkers. To address these limitations and advance precision medicine, we developed UnPaSt, a novel biclustering algorithm for unsupervised patient stratification based on differentially expressed biclusters. UnPaSt outperformed widely used patient stratification approaches in the de novo identification of known subtypes of breast cancer and asthma. In addition, it detected many biologically insightful patterns across bulk transcriptomics, proteomics, single-cell, spatial transcriptomics, and multi-omics datasets, enabling a more nuanced and interpretable view of high-throughput data heterogeneity than traditionally used methods.
https://arxiv.org/abs/2408.00200
Academic Papers
svg
0adc4e75a44502fa664cdebbf236924a7da655815cbcf1ff5cf1883d16c6779f
2026-01-01T00:00:00-05:00
Transfer learning of state-based potential games for process optimization in decentralized manufacturing systems
arXiv:2408.05992v3 Announce Type: replace Abstract: This paper presents a novel online transfer learning approach in state-based potential games (TL-SbPGs) for distributed self-optimization in manufacturing systems. The approach targets practical industrial scenarios where knowledge sharing among similar players enhances learning in large-scale and decentralized environments. TL-SbPGs enable players to reuse learned policies from others, which improves learning outcomes and accelerates convergence. To accomplish this goal, we develop transfer learning concepts and similarity criteria for players, which offer two distinct settings: (a) predefined similarities between players and (b) dynamically inferred similarities between players during training. The applicability of the SbPG framework to transfer learning is formally established. Furthermore, we present a method to optimize the timing and weighting of knowledge transfer. Experimental results from a laboratory-scale testbed show that TL-SbPGs improve production efficiency and reduce power consumption compared to vanilla SbPGs.
https://arxiv.org/abs/2408.05992
Academic Papers
svg
ea40e6f79278d0af28535f77d198a00cca4cfb161d10c9efd7bbe40b1c61522e
2026-01-01T00:00:00-05:00
[Draft] High-order estimation-based properties and high-order observers for labeled finite-state automata
arXiv:2408.06141v3 Announce Type: replace Abstract: In this paper, we consider labeled finite-state automata (LFSAs), and extend some state estimation-based properties from a single agent to a finite ordered set of agents. We also extend the notion of observer to \emph{high-order observer} using our \emph{concurrent composition}. As a result, a general framework for characterizing high-order estimation-based properties is built, in which each agent infers its preceding agent's estimation via all agents in front. The high-order observer plays the role of a basic tool to verify such properties. In more detail, in our general framework, the system's structure is publicly known to all agents $A_1,\dots,A_n$; each agent $A_i$ has its own observable event set $E_i$, and additionally knows all its preceding agents' observable events but can only observe its own observable events. The intuitive meaning of our high-order observer is to characterize what agent $A_n$ knows about what $A_{n-1}$ knows about \dots what $A_2$ knows about $A_1$'s state estimate of the system. This general framework can be regarded as an automata representation of dynamic epistemic logic. Compared with the classical representation of dynamic epistemic logic based on fragments of logic, our representation has advantages in property verification and flexibly changing agents to enforce properties. As case studies, this general framework applies to basic properties such as current-state opacity, strong current-state opacity, regular-language-based opacity, critical observability, high-order opacity, etc. Special cases for which verification can be done more efficiently are also discussed.
https://arxiv.org/abs/2408.06141
Academic Papers
svg
b170bce0fee7c92e7e200fec4fa589babdb7f810440521b9e139cd51df20c261
2026-01-01T00:00:00-05:00
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities
arXiv:2408.07666v5 Announce Type: replace Abstract: Model merging is an efficient empowerment technique in the machine learning community that does not require the collection of raw training data and does not require expensive computation. As model merging becomes increasingly prevalent across various fields, it is crucial to understand the available model merging techniques comprehensively. However, there is a significant gap in the literature regarding a systematic and thorough review of these techniques. This survey provides a comprehensive overview of model merging methods and theories, their applications in various domains and settings, and future research directions. Specifically, we first propose a new taxonomic approach that exhaustively discusses existing model merging methods. Secondly, we discuss the application of model merging techniques in large language models, multimodal large language models, and more than ten machine learning subfields, including continual learning, multi-task learning, few-shot learning, etc. Finally, we highlight the remaining challenges of model merging and discuss future research directions. A comprehensive list of papers about model merging is available at https://github.com/EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications.
https://arxiv.org/abs/2408.07666
Academic Papers
svg
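The simplest baseline such a survey covers, uniform or weighted parameter averaging, fits in a few lines. The sketch below operates on plain dicts of floats for clarity; real merging methods work on model tensors and are considerably richer:

```python
def merge_models(state_dicts, weights=None):
    """Weighted parameter averaging, the most basic model-merging baseline.
    `state_dicts` is a list of dicts mapping parameter names to values;
    all dicts are assumed to share the same keys (models of one architecture)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged
```

Note that this requires no training data and no gradient computation, which is the "efficient empowerment" the abstract refers to.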
3e4119d4255094c1da573b4fa11be7de4929eb7194cd6c3d86c1a75b88c59124
2026-01-01T00:00:00-05:00
Stock Price Responses to Firm-Level News in Supply Chain Networks
arXiv:2409.06255v4 Announce Type: replace Abstract: This study examines how positive and negative news about firms are associated with stock prices and whether these associations extend to suppliers and clients linked via supply chain relationships, using large samples of publicly listed firms worldwide and in Japan. News sentiment is measured using FinBERT, a natural language processing model fine-tuned for financial text, and supply chain links are identified from financial statements for global firms and from large-scale firm-level surveys for Japanese firms. We find that stock prices exhibit systematic associations with positive and negative news even before public disclosure. These associations are also observed for suppliers and clients before and after disclosure. In general, post-disclosure associations are larger than pre-disclosure associations, with the difference concentrated around the time of public news disclosure relative to the pre-disclosure period. However, for Japanese firms, the post-disclosure associations for suppliers and clients are smaller than the pre-disclosure associations, in contrast to the pattern observed for firms outside Japan.
https://arxiv.org/abs/2409.06255
Academic Papers
svg
3cb58777b1ac233990445918bc4fe739c2d41d19e5ce08815277aaa7b2397958
2026-01-01T00:00:00-05:00
Proactive Recommendation in Social Networks: Steering User Interest with Causal Inference
arXiv:2409.08934v2 Announce Type: replace Abstract: Recommending items that solely cater to users' historical interests narrows users' horizons. Recent works have considered steering target users beyond their historical interests by directly adjusting items exposed to them. However, the recommended items for direct steering might not align perfectly with the evolution of users' interests, detrimentally affecting the target users' experience. To avoid this issue, we propose a new task named Proactive Recommendation in Social Networks (PRSN) that indirectly steers users' interest by utilizing the influence of social neighbors, i.e., indirect steering by adjusting the exposure of a target item to target users' neighbors. The key to PRSN lies in answering an interventional question: what would a target user's feedback be on a target item if the item is exposed to the user's different neighbors? To answer this question, we resort to causal inference and formalize PRSN as: (1) estimating the potential feedback of a user on an item, under the network interference by the item's exposure to the user's neighbors; and (2) adjusting the exposure of a target item to target users' neighbors to trade off steering performance and the damage to the neighbors' experience. To this end, we propose a Neighbor Interference Recommendation (NIRec) framework with two modules: (1) an interference representation-based estimation module for modeling potential feedback; (2) a post-learning-based optimization module for adjusting a target item's exposure to trade off steering performance and the neighbors' experience through greedy search. We conduct extensive semi-simulation experiments on real-world datasets, validating the steering effectiveness of NIRec.
https://arxiv.org/abs/2409.08934
Academic Papers
svg
acb9fea492dbab0b344c085118ed3ee7c0572b2eac9ec3afb900556c187bc5bd
2026-01-01T00:00:00-05:00
Semantic Parsing with Candidate Expressions for Knowledge Base Question Answering
arXiv:2410.00414v4 Announce Type: replace Abstract: Semantic parsers convert natural language to logical forms, which can be evaluated on knowledge bases (KBs) to produce denotations. Recent semantic parsers have been developed with sequence-to-sequence (seq2seq) pre-trained language models (PLMs) or large language models, where the models treat logical forms as sequences of tokens. For syntactic and semantic validity, the semantic parsers use grammars that enable constrained decoding. However, the grammars lack the ability to utilize large information of KBs, although logical forms contain representations of KB elements, such as entities or relations. In this work, we propose a grammar augmented with candidate expressions for semantic parsing on a large KB with a seq2seq PLM. The grammar defines actions as production rules, and our semantic parser predicts actions during inference under the constraints by types and candidate expressions. We apply the grammar to knowledge base question answering, where the constraints by candidate expressions assist a semantic parser to generate valid KB elements. We also introduce two special rules, sub-type inference and union types, and a mask caching algorithm. In particular, sub-type inference and the mask caching algorithm greatly increase the decoding speed of our semantic parser. We experimented on two benchmarks, KQA Pro and Overnight, where the constraints by candidate expressions increased the accuracy of our semantic parser, whether it was trained with strong supervision or weak supervision. In addition, our semantic parser had a fast decoding speed in the experiments. Our source code is publicly available at https://github.com/daehwannam/candexpr-sp.git.
https://arxiv.org/abs/2410.00414
Academic Papers
svg
9933a124569bafd0d2cb9c21e4d1f99bf9178b16f64c3642e652ee00921f9664
2026-01-01T00:00:00-05:00
Beyond Firms and Industries: Shock Propagation through Establishment- and Product-Level Supply Chains
arXiv:2410.05595v4 Announce Type: replace Abstract: This paper investigates how the granularity of supply-chain data affects the propagation of economic shocks through production networks. Using newly constructed establishment-level supply chains with product-level information links for Japan, we simulate disruption dynamics under alternative definitions of network nodes and input classifications. We show that defining inputs at the product level generates substantially larger propagation effects than industry-based classifications, indicating that coarse industry measures overstate input substitutability and underestimate systemic vulnerability. While establishment-level networks generally amplify shock propagation relative to firm-level networks, this effect is quantitatively modest, reflecting opposing forces of increased network complexity and greater substitution possibilities. We further demonstrate that incorporating establishment-level geographic information is critical for assessing region-specific shocks, as firm-level networks tend to overstate the impact of shocks originating in major metropolitan areas. Overall, our results highlight the importance of granular information on products, establishments, and geography for accurately evaluating supply-chain resilience and systemic risk.
https://arxiv.org/abs/2410.05595
Academic Papers
svg
d32a23746ef3bcf4fb95888a539163d73a6c2ea85acbf451744cb4d63fa2bfeb
2026-01-01T00:00:00-05:00
Faster and Simpler Online Computation of String Net Frequency
arXiv:2410.06837v3 Announce Type: replace Abstract: An occurrence of a repeated substring $u$ in a string $S$ is called a net occurrence if extending the occurrence to the left or to the right decreases the number of occurrences to 1. The net frequency (NF) of a repeated substring $u$ in a string $S$ is the number of net occurrences of $u$ in $S$. Very recently, Guo et al. [SPIRE 2024] proposed an online $O(n \log \sigma)$-time algorithm that maintains a data structure of $O(n)$ space which answers Single-NF queries in $O(m\log \sigma + \sigma^2)$ time and reports all answers of the All-NF problem in $O(n\sigma^2)$ time. Here, $n$ is the length of the input string $S$, $m$ is the query pattern length, and $\sigma$ is the alphabet size. The $\sigma^2$ term is a major drawback of their method since computing string net frequencies is originally motivated for Chinese language processing where $\sigma$ can be thousands large. This paper presents an improved online $O(n \log \sigma)$-time algorithm, which answers Single-NF queries in $O(m \log \sigma)$ time and reports all answers to the All-NF problem in output-optimal $O(|\mathsf{NF}^+(S)|)$ time, where $\mathsf{NF}^+(S)$ is the set of substrings of $S$ paired with their positive NF values. We note that $|\mathsf{NF}^+(S)| = O(n)$ always holds. In contrast to Guo et al.'s algorithm that is based on Ukkonen's suffix tree construction, our algorithm is based on Weiner's suffix tree construction.
https://arxiv.org/abs/2410.06837
Academic Papers
svg
b1184135899d267a86d210ae77384d2053f608044f9fa111c75eab06cd061283
2026-01-01T00:00:00-05:00
SwitchFS: Asynchronous Metadata Updates for Distributed Filesystems with In-Network Coordination
arXiv:2410.08618v3 Announce Type: replace Abstract: Distributed filesystem metadata updates are typically synchronous. This creates inherent challenges for access efficiency, load balancing, and directory contention, especially under dynamic and skewed workloads. This paper argues that synchronous updates are overly conservative. We propose SwitchFS with asynchronous metadata updates that allow operations to return early and defer directory updates until reads, both hiding latency and amortizing overhead. The key challenge lies in efficiently maintaining the synchronous POSIX semantics of metadata updates. To address this, SwitchFS is co-designed with a programmable switch, leveraging the limited on-switch resources to track directory states with negligible overhead. This allows SwitchFS to aggregate and apply delayed updates efficiently, using batching and consolidation before directory reads. Evaluation shows that SwitchFS achieves up to 13.34$\times$ and 3.85$\times$ higher throughput, and 61.6% and 57.3% lower latency than two state-of-the-art distributed filesystems, Emulated-InfiniFS and Emulated-CFS, respectively, under skewed workloads. For real-world workloads, SwitchFS improves end-to-end throughput by 21.1$\times$, 1.1$\times$, and 0.3$\times$ over CephFS, Emulated-InfiniFS, and Emulated-CFS, respectively.
https://arxiv.org/abs/2410.08618
Academic Papers
svg
a151e84e11fdabb33b01a7a51b170b944e6f9216d84f607e783034cddda01073
2026-01-01T00:00:00-05:00
A Systematic Survey on Large Language Models for Algorithm Design
arXiv:2410.14716v4 Announce Type: replace Abstract: Algorithm design is crucial for effective problem-solving across various domains. The advent of Large Language Models (LLMs) has notably enhanced the automation and innovation within this field, offering new perspectives and promising solutions. In just a few years, this integration has yielded remarkable progress in areas ranging from combinatorial optimization to scientific discovery. Despite this rapid expansion, a holistic understanding of the field is hindered by the lack of a systematic review, as existing surveys either remain limited to narrow sub-fields or with different objectives. This paper seeks to provide a systematic review of algorithm design with LLMs. We introduce a taxonomy that categorises the roles of LLMs as optimizers, predictors, extractors and designers, analyzing the progress, advantages, and limitations within each category. We further synthesize literature across the three phases of the algorithm design pipeline and across diverse algorithmic applications that define the current landscape. Finally, we outline key open challenges and opportunities to guide future research.
https://arxiv.org/abs/2410.14716
Academic Papers
svg
fd17f74906a6344576e3bc199b189ac646967c5991c2b8e3de967b627a037864
2026-01-01T00:00:00-05:00
Automatic identification of diagnosis from hospital discharge letters via weakly-supervised Natural Language Processing
arXiv:2410.15051v2 Announce Type: replace Abstract: Identifying patient diagnoses from discharge letters is essential to enable large-scale cohort selection and epidemiological research, but traditional supervised approaches rely on extensive manual annotation, which is often impractical for large textual datasets. In this study, we present a novel weakly-supervised Natural Language Processing pipeline designed to classify Italian discharge letters without requiring manual labelling. After extracting diagnosis-related sentences, the method leverages a transformer-based model with an additional pre-training on Italian medical documents to generate semantic embeddings. A two-level clustering procedure is applied to these embeddings, and the resulting clusters are mapped to the diseases of interest to derive weak labels for a subset of data, eventually used to train a transformer-based classifier. We evaluate the approach on a real-world case study on bronchiolitis in a corpus of 33,176 Italian discharge letters of children admitted to 44 emergency rooms or hospitals in the Veneto Region between 2017 and 2020. The pipeline achieves an area under the curve (AUC) of 77.68% ($\pm 4.30\%)$ and an F1-score of 78.14% ($\pm 4.89\%$) against manual annotations. Its performance surpasses other unsupervised methods and approaches fully supervised models, maintaining robustness to cluster selection and promising generalizability across different disease types. It allows saving approximately 3 minutes of expert time per discharge letter, resulting in more than 1,500 hours for a dataset like ours. This study demonstrates the feasibility of a weakly-supervised strategy for identifying diagnoses from Italian discharge letters. The pipeline achieves strong performance, is adaptable to various diseases, and offers a scalable solution for clinical text classification, reducing the need for manual annotation while maintaining reliable accuracy.
https://arxiv.org/abs/2410.15051
Academic Papers
svg
5ccc53020acf504d4fc9624638adef947481094d3f8df8b1b7df1c9bfcebf63e
2026-01-01T00:00:00-05:00
EON: A practical energy-preserving rough diffuse BRDF
arXiv:2410.18026v3 Announce Type: replace Abstract: We introduce the "Energy-preserving Oren--Nayar" (EON) model for reflection from rough surfaces. Unlike the popular qualitative Oren--Nayar model (QON) and its variants, our model is energy-preserving via analytical energy compensation. We include self-contained GLSL source code for efficient evaluation of the new model and importance sampling based on a novel technique we term "Clipped Linearly Transformed Cosine" (CLTC) sampling.
https://arxiv.org/abs/2410.18026
Academic Papers
svg
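For context, the qualitative Oren-Nayar (QON) model that EON makes energy-preserving can be evaluated directly. The sketch below is the classical QON formula, not the paper's EON model or its GLSL code; for roughness $\sigma = 0$ it reduces to the Lambertian $\rho/\pi$:

```python
import math

def qon_brdf(theta_i, theta_r, phi_diff, sigma, rho=1.0):
    """Qualitative Oren-Nayar (QON) diffuse BRDF.
    theta_i, theta_r: incident/reflected polar angles (radians);
    phi_diff: azimuthal angle difference; sigma: roughness; rho: albedo."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (rho / math.pi) * (A + B * max(0.0, math.cos(phi_diff))
                              * math.sin(alpha) * math.tan(beta))
```

Because $A < 1$ for $\sigma > 0$ and the $B$ term is clamped at zero, QON can lose energy at grazing configurations, which is the shortcoming the EON paper's analytical compensation addresses.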
453c88a538039425a9fe66e9562d627e35ee20d6dac94cb5c279fa55aabc125a
2026-01-01T00:00:00-05:00
Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation
arXiv:2410.18565v2 Announce Type: replace Abstract: We introduce Bielik 7B v0.1, a 7-billion-parameter generative text model for Polish language processing. Trained on curated Polish corpora, this model addresses key challenges in language model development through innovative techniques. These include Weighted Instruction Cross-Entropy Loss, which balances the learning of different instruction types, and Adaptive Learning Rate, which dynamically adjusts the learning rate based on training progress. To evaluate performance, we created the Open PL LLM Leaderboard and Polish MT-Bench, novel frameworks assessing various NLP tasks and conversational abilities. Bielik 7B v0.1 demonstrates significant improvements, achieving a 9 percentage point increase in average score compared to Mistral-7B-v0.1 on the RAG Reader task. It also excels in the Polish MT-Bench, particularly in Reasoning (6.15/10) and Role-playing (7.83/10) categories. This model represents a substantial advancement in Polish language AI, offering a powerful tool for diverse linguistic applications and setting new benchmarks in the field.
https://arxiv.org/abs/2410.18565
Academic Papers
svg
d43048b24b9c32190e7c2c052a8fb027c0c0722bf4d62f3c4276a5379b6e98eb
2026-01-01T00:00:00-05:00
Computing the bridge length: the key ingredient in a continuous isometry classification of periodic point sets
arXiv:2410.23288v2 Announce Type: replace Abstract: The fundamental model of any periodic crystal is a periodic set of points at all atomic centres. Since crystal structures are determined in a rigid form, their strongest equivalence is rigid motion (composition of translations and rotations) or isometry (also including reflections). The recent classification of periodic point sets under rigid motion used a complete invariant isoset whose size essentially depends on the bridge length, defined as the minimum `jump' that suffices to connect any points in the given set. We propose a practical algorithm to compute the bridge length of any periodic point set given by a motif of points in a periodically translated unit cell. The algorithm has been tested on a large crystal dataset and is required for an efficient continuous classification of all periodic crystals. The exact computation of the bridge length is a key step to realising the inverse design of materials from new invariant values.
https://arxiv.org/abs/2410.23288
Academic Papers
svg
cb386362c2e31b2401990e411a3abd3706548c9ef52ca32c147ab8b9e7818b53
2026-01-01T00:00:00-05:00
Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
arXiv:2411.00759v3 Announce Type: replace Abstract: Discrete flow matching, a recent framework for modeling categorical data, has shown competitive performance with autoregressive models. However, unlike continuous flow matching, the rectification strategy cannot be applied due to the stochasticity of discrete paths, necessitating alternative methods to minimize state transitions. We propose a dynamic-optimal-transport-like minimization objective and derive its Kantorovich formulation for discrete flows with convex interpolants, where transport cost depends solely on inter-state similarity and can be optimized via minibatch strategies. In the case of bag-of-words (BoW) sourced flows, we show that such methods can reduce the number of transitions up to 8 times (1024 to 128) to reach the same generative perplexity without compromising diversity. Additionally, path nondeterminism in discrete flows precludes an instantaneous change-of-variables analogue, preventing precise probability estimation available to continuous flows. We therefore propose two upper bounds on perplexity, enabling principled training, evaluation and model comparison. Finally, we introduce Multimask Flows which outperform masked flows in generative perplexity, particularly when utilizing minibatch Optimal Transport, without sacrificing diversity.
https://arxiv.org/abs/2411.00759
Academic Papers
svg
43e4679417e62f21f06b5b2cd636d1e04b8f0112d8f41a360133f31858b165eb
2026-01-01T00:00:00-05:00
Two-Stage Robust Optimal Operation of Distribution Networks Considering Renewable Energy and Demand Asymmetric Uncertainties
arXiv:2411.10166v3 Announce Type: replace Abstract: This paper presents a confidence level-based distributionally information gap decision theory (CL-DIGDT) framework for the two-stage robust optimal operation of distribution networks, aiming at deriving an optimal operational scheme capable of addressing asymmetric uncertainties related to renewable energy and load demands. Building on conventional IGDT, the proposed framework utilizes the confidence level to capture the asymmetric characteristics of uncertainties and maximize the risk-averse capability of the solution in a probabilistic manner. To account for the probabilistic consideration, the imprecise Dirichlet model is employed to construct the ambiguity sets of uncertainties, reducing reliance on precise probability distributions. Consequently, a two-stage robust optimal operation model for distribution networks using CL-DIGDT is developed. An iterative method is proposed to solve the model and determine the upper and lower bounds of the objective function. A case study demonstrates that the proposed approach yields a more robust and statistically optimized solution with the required accuracy compared to the existing method, contributing to a reduction in first-stage cost by 0.84%, second-stage average cost by 6.7%, and significantly increasing the reliability of the solution by 8%.
https://arxiv.org/abs/2411.10166
Academic Papers
svg
7c46bf67c3dfbb9b62ffa80c2a5c604a271666ba771af9b790096665a28d852f
2026-01-01T00:00:00-05:00
The Generalization Error of Supervised Machine Learning Algorithms
arXiv:2411.12030v2 Announce Type: replace Abstract: In this paper, the method of gaps, a technique for deriving closed-form expressions in terms of information measures for the generalization error of supervised machine learning algorithms is introduced. The method relies on the notion of \emph{gaps}, which characterize the variation of the expected empirical risk (when either the model or dataset is kept fixed) with respect to changes in the probability measure on the varying parameter (either the dataset or the model, respectively). This distinction results in two classes of gaps: Algorithm-driven gaps (fixed dataset) and data-driven gaps (fixed model). In general, the method relies on two central observations: $(i)$~The generalization error is the expectation of an algorithm-driven gap or a data-driven gap. In the first case, the expectation is with respect to a measure on the datasets; and in the second case, with respect to a measure on the models. $(ii)$~Both, algorithm-driven gaps and data-driven gaps exhibit closed-form expressions in terms of relative entropies. In particular, algorithm-driven gaps involve a Gibbs probability measure on the set of models, which represents a supervised Gibbs algorithm. Alternatively, data-driven gaps involve a worst-case data-generating (WCDG) probability measure on the set of data points, which is also a Gibbs probability measure. Interestingly, such Gibbs measures, which are exogenous to the analysis of generalization, place both the supervised Gibbs algorithm and the WCDG probability measure as natural references for the analysis of supervised learning algorithms. All existing exact expressions for the generalization error of supervised machine learning algorithms can be obtained with the proposed method. Also, this method allows obtaining numerous new exact expressions, which allows establishing connections with other areas in statistics.
https://arxiv.org/abs/2411.12030
Academic Papers
svg
e4ac5737fb6b015e80ab879cea5df37e8701a9f38870b35757d61cbd5b4d2d97
2026-01-01T00:00:00-05:00
Strong Linearizability without Compare&Swap: The Case of Bags
arXiv:2411.19365v3 Announce Type: replace Abstract: Because strongly-linearizable objects provide stronger guarantees than linearizability, they serve as valuable building blocks for the design of concurrent data structures. Yet, many objects that have linearizable implementations from base objects weaker than compare&swap objects do not have strongly-linearizable implementations from the same base objects. We focus on one such object: the bag, a multiset from which processes can take unspecified elements. We present the first lock-free, strongly-linearizable implementation of a bag from interfering objects (specifically, registers, and test&set objects). This may be surprising, since there are provably no such implementations of stacks or queues. Since a bag can contain arbitrarily many elements, an unbounded amount of space must be used to implement it. Hence, it makes sense to also consider a bag with a bound on its capacity. However, like stacks and queues, a bag with capacity $b$ shared by more than $2b$ processes has no lock-free, strongly-linearizable implementation from interfering objects. If we further restrict a bounded bag so that only one process can insert into it, we are able to obtain a lock-free, strongly-linearizable implementation from $O(b + n)$ interfering objects, where $n$ is the number of processes. Our goal is to understand the circumstances under which strongly-linearizable implementations of bags exist and, more generally, to understand the power of interfering objects.
https://arxiv.org/abs/2411.19365
Academic Papers
svg
138b93ae1bdb2cc0b1a84ea66ac50a44e10dd03d7c646afd09bfcd84ccad9b8b
2026-01-01T00:00:00-05:00
Explaining Object Detectors via Collective Contribution of Pixels
arXiv:2412.00666v3 Announce Type: replace Abstract: Visual explanations for object detectors are crucial for enhancing their reliability. Object detectors identify and localize instances by assessing multiple visual features collectively. When generating explanations, overlooking these collective influences in detections may lead to missing compositional cues or capturing spurious correlations. However, existing methods typically focus solely on individual pixel contributions, neglecting the collective contribution of multiple pixels. To address this limitation, we propose a game-theoretic method based on Shapley values and interactions to explicitly capture both individual and collective pixel contributions. Our method provides explanations for both bounding box localization and class determination, highlighting regions crucial for detection. Extensive experiments demonstrate that the proposed method identifies important regions more accurately than state-of-the-art methods. The code is available at https://github.com/tttt-0814/VX-CODE
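As a concrete illustration (not the paper's implementation), the "collective contribution" idea behind Shapley values can be seen in a permutation-sampling Shapley estimate over a toy detector score in which two regions are only useful jointly; the score function, region count, and sample budget below are all our own assumptions:

```python
import random

def shapley_estimate(score, n_features, n_samples=200, seed=0):
    """Monte Carlo Shapley estimate for a set function `score`, which maps
    a frozenset of feature (e.g. pixel-region) indices to a scalar."""
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        perm = list(range(n_features))
        rng.shuffle(perm)
        included = set()
        prev = score(frozenset(included))
        for i in perm:                      # accumulate marginal gains
            included.add(i)
            cur = score(frozenset(included))
            phi[i] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

# Toy detector score: regions 0 and 1 are jointly required -- neither helps
# alone, so only a collective analysis attributes credit to both.
def toy_score(s):
    return 1.0 if {0, 1} <= s else 0.0

vals = shapley_estimate(toy_score, 4)
```

By the efficiency property, the estimates for regions 0 and 1 sum to the full-coalition score of 1.0, each receiving about half, while the irrelevant regions receive exactly zero.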
https://arxiv.org/abs/2412.00666
Academic Papers
svg
d4a2ff19f5cd7cf5e6e7d74bea0fa403809c7b028ed9a754c447079a7376bcfc
2026-01-01T00:00:00-05:00
Private Linear Regression with Differential Privacy and PAC Privacy
arXiv:2412.02578v2 Announce Type: replace Abstract: Linear regression is a fundamental tool for statistical analysis, which has motivated the development of linear regression methods that satisfy provable privacy guarantees so that the learned model reveals little about any one data point used to construct it. Most existing privacy-preserving linear regression methods rely on the well-established framework of differential privacy, while the newly proposed PAC Privacy has not yet been explored in this context. In this paper, we systematically compare linear regression models trained with differential privacy and PAC privacy across three real-world datasets, observing several key findings that impact the performance of privacy-preserving linear regression.
https://arxiv.org/abs/2412.02578
Academic Papers
svg
b9619b06315956f76c8779fdc072b06832c6e67762276b34b21d458f7920dc23
2026-01-01T00:00:00-05:00
SoundnessBench: A Soundness Benchmark for Neural Network Verifiers
arXiv:2412.03154v3 Announce Type: replace Abstract: Neural network (NN) verification aims to formally verify properties of NNs, which is crucial for ensuring the behavior of NN-based models in safety-critical applications. In recent years, the community has developed many NN verifiers and benchmarks to evaluate them. However, existing benchmarks typically lack ground-truth for hard instances where no current verifier can verify the property and no counterexample can be found. This makes it difficult to validate the soundness of a verifier, when it claims verification on such challenging instances that no other verifier can handle. In this work, we develop a new benchmark for NN verification, named SoundnessBench, specifically for testing the soundness of NN verifiers. SoundnessBench consists of instances with deliberately inserted counterexamples that are hidden from adversarial attacks commonly used to find counterexamples. Thereby, it can identify false verification claims when hidden counterexamples are known to exist. We design a training method to produce NNs with hidden counterexamples and systematically construct our SoundnessBench with instances across various model architectures, activation functions, and input data. We demonstrate that our training effectively produces hidden counterexamples and our SoundnessBench successfully identifies bugs in state-of-the-art NN verifiers. Our code is available at https://github.com/mvp-harry/SoundnessBench and our dataset is available at https://huggingface.co/datasets/SoundnessBench/SoundnessBench.
https://arxiv.org/abs/2412.03154
Academic Papers
svg
d26dde2a8b01266ac202e5a00040c447e0c53283bc3a2599ac3328b5a6b8c22b
2026-01-01T00:00:00-05:00
INST-IT: Boosting Instance Understanding via Explicit Visual Prompt Instruction Tuning
arXiv:2412.03565v2 Announce Type: replace Abstract: Large Multimodal Models (LMMs) have made significant breakthroughs with the advancement of instruction tuning. However, while existing models can understand images and videos at a holistic level, they still struggle with instance-level understanding that requires a more fine-grained comprehension and alignment. Instance-level understanding is crucial for LMMs, as it focuses on the specific elements that we are most interested in. Excitingly, existing works find that the SOTA LMMs exhibit strong instance understanding capabilities when provided with explicit visual cues. Motivated by this, we propose Inst-IT, a solution to enhance LMMs in Instance understanding via explicit visual prompt Instruction Tuning for instance guidance. Inst-IT consists of a benchmark to diagnose multimodal instance-level understanding, a large-scale instruction-tuning dataset, and a continuous instruction-tuning training paradigm to effectively enhance spatial-temporal instance understanding capabilities of existing LMMs. Experimental results show that, enhanced by Inst-IT, our models not only achieve outstanding performance on Inst-IT Bench and other instance understanding benchmarks, but also demonstrate significant improvements across various generic image and video understanding benchmarks. This highlights that our method not only boosts instance-level understanding but also strengthens the overall capabilities of generic image and video comprehension.
https://arxiv.org/abs/2412.03565
Academic Papers
svg
fd5c5c91841b32b385ae721659e070dba539b64700f0817294483c32532b8dee
2026-01-01T00:00:00-05:00
Addressing Hallucinations with RAG and NMISS in Italian Healthcare LLM Chatbots
arXiv:2412.04235v3 Announce Type: replace Abstract: I combine detection and mitigation techniques to address hallucinations in Large Language Models (LLMs). Mitigation is achieved in a question-answering Retrieval-Augmented Generation (RAG) framework while detection is obtained by introducing the Negative Missing Information Scoring System (NMISS), which accounts for contextual relevance in responses. While RAG mitigates hallucinations by grounding answers in external data, NMISS refines the evaluation by identifying cases where traditional metrics incorrectly flag contextually accurate responses as hallucinations. I use Italian health news articles as context to evaluate LLM performance. Results show that Gemma2 and GPT-4 outperform the other models, with GPT-4 producing answers closely aligned with reference responses. Mid-tier models, such as Llama2, Llama3, and Mistral, benefit significantly from NMISS, highlighting their ability to provide richer contextual information. This combined approach offers new insights into the reduction and more accurate assessment of hallucinations in LLMs, with applications in real-world healthcare tasks and other domains.
https://arxiv.org/abs/2412.04235
Academic Papers
svg
e761772c709f49153acad35364544ba9025245eb8f65dc05a26e0d9b07040f0c
2026-01-01T00:00:00-05:00
The Oracle Complexity of Simplex-based Matrix Games: Linear Separability and Nash Equilibria
arXiv:2412.06990v3 Announce Type: replace Abstract: We study the problem of solving matrix games of the form $\max_{\mathbf{w}\in\mathcal{W}}\min_{\mathbf{p}\in\Delta}\mathbf{p}^{\top}A\mathbf{w}$, where $A$ is some matrix and $\Delta$ is the probability simplex. This problem encapsulates canonical tasks such as finding a linear separator and computing Nash equilibria in zero-sum games. However, perhaps surprisingly, its inherent complexity (as formalized in the standard framework of oracle complexity [Nemirovski and Yudin, 1983]) is not well-understood. In this work, we first identify different oracle models which are implicitly used by prior algorithms, amounting to multiplying the matrix $A$ by a vector from either one or both sides. We then prove complexity lower bounds for algorithms under both access models, which in particular imply a separation between them. Specifically, we start by showing that algorithms for linear separability based on one-sided multiplications must require $\Omega(\gamma_A^{-2})$ iterations, where $\gamma_A$ is the margin, as matched by the Perceptron algorithm. We then prove that accelerated algorithms for this task, which utilize multiplications from both sides, must require $\tilde{\Omega}(\gamma_{A}^{-2/3})$ iterations, establishing the first oracle complexity barrier for such algorithms. Finally, by adapting our lower bound to $\ell_1$ geometry, we prove that computing an $\epsilon$-approximate Nash equilibrium requires $\tilde{\Omega}(\epsilon^{-2/3})$ iterations, which is an exponential improvement over the previously best-known lower bound due to Hadiji et al. [2024].
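The Perceptron, which matches the $\Omega(\gamma_A^{-2})$ lower bound under one-sided access, is sketched below; each update touches one row of the data matrix, i.e. a single one-sided matrix-vector product. The toy dataset and margin are illustrative, not from the paper:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classic Perceptron on labeled rows of X; the mistake bound scales
    as (R / margin)^2, consistent with the gamma^{-2} rate above."""
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:      # misclassified (or on the boundary)
                w += yi * xi
                mistakes += 1
                updated = True
        if not updated:                 # converged: a full clean pass
            return w, mistakes
    return w, mistakes

# Linearly separable toy data with a visible margin.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, mistakes = perceptron(X, y)
```

On this well-separated toy set the algorithm converges after very few updates; shrinking the margin between the two classes inflates the mistake count quadratically.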
https://arxiv.org/abs/2412.06990
Academic Papers
svg
c56f668805b853c452608a288d329d214f6a79b44cd79aa363c0e3d1cc529ebb
2026-01-01T00:00:00-05:00
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning
arXiv:2412.07454v3 Announce Type: replace Abstract: Federated learning enables decentralized model training without sharing raw data, preserving data privacy. However, its vulnerability to critical security threats, such as gradient inversion and model poisoning by malicious clients, remains unresolved. Existing solutions often address these issues separately, sacrificing either system robustness or model accuracy. This work introduces Tazza, a secure and efficient federated learning framework that simultaneously addresses both challenges. By leveraging the permutation equivariance and invariance properties of neural networks via weight shuffling and shuffled model validation, Tazza enhances resilience against diverse poisoning attacks, while ensuring data confidentiality and high model accuracy. Comprehensive evaluations on various datasets and embedded platforms show that Tazza achieves robust defense with up to 6.7x improved computational efficiency compared to alternative schemes, without compromising performance.
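The permutation invariance that weight shuffling relies on is easy to verify on a toy two-layer network: permuting the hidden units (rows of the first weight matrix and bias, columns of the second) leaves the outputs unchanged. A minimal numpy check, with all shapes and values our own rather than Tazza's:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Shuffle hidden units: permute rows of (W1, b1) and columns of W2 jointly.
perm = rng.permutation(8)
W1s, b1s, W2s = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=4)
y_orig = mlp(x, W1, b1, W2, b2)
y_shuf = mlp(x, W1s, b1s, W2s, b2)   # identical output, shuffled weights
```

Because the element-wise ReLU commutes with the permutation, `W2[:, perm] @ h[perm]` equals `W2 @ h`, so a server (or attacker) sees different weight tensors that compute the same function.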
https://arxiv.org/abs/2412.07454
Academic Papers
svg
2093d2da7d158e50c1e2075a2f3e5f21d7db05990a25ae758ed508c925e9cb92
2026-01-01T00:00:00-05:00
Hierarchical Context Alignment with Disentangled Geometric and Temporal Modeling for Semantic Occupancy Prediction
arXiv:2412.08243v2 Announce Type: replace Abstract: Camera-based 3D Semantic Occupancy Prediction (SOP) is crucial for understanding complex 3D scenes from limited 2D image observations. Existing SOP methods typically aggregate contextual features to assist the occupancy representation learning, alleviating issues like occlusion or ambiguity. However, these solutions often face misalignment issues wherein the corresponding features at the same position across different frames may have different semantic meanings during the aggregation process, which leads to unreliable contextual fusion results and an unstable representation learning process. To address this problem, we introduce a new Hierarchical context alignment paradigm for a more accurate SOP (Hi-SOP). Hi-SOP first disentangles the geometric and temporal context for separate alignment; the two branches are then composed to enhance the reliability of SOP. This parsing of the visual input into a local-global alignment hierarchy includes: (I) disentangled geometric and temporal alignment, which leverage depth confidence and camera pose, respectively, as priors for relevant feature matching; (II) global alignment and composition of the transformed geometric and temporal volumes based on semantics consistency. Our method outperforms SOTAs for semantic scene completion on the SemanticKITTI & NuScenes-Occupancy datasets and LiDAR semantic segmentation on the NuScenes dataset. The project website is available at https://arlo0o.github.io/hisop.github.io/.
https://arxiv.org/abs/2412.08243
Academic Papers
svg
6394aa18c66555d5aa1994ae4523bd3ce53d72612daabf07974b36584c7d05b7
2026-01-01T00:00:00-05:00
Lagrangian Index Policy for Restless Bandits with Average Reward
arXiv:2412.12641v3 Announce Type: replace Abstract: We study the Lagrangian Index Policy (LIP) for restless multi-armed bandits with long-run average reward. In particular, we compare the performance of LIP with that of the Whittle Index Policy (WIP), both heuristic policies known to be asymptotically optimal under certain natural conditions. Even though in most cases their performances are very similar, in the cases when WIP shows bad performance, LIP continues to perform very well. We then propose reinforcement learning algorithms, both tabular and NN-based, to obtain online learning schemes for LIP in the model-free setting. The proposed reinforcement learning schemes for LIP require significantly less memory than the analogous schemes for WIP. We calculate analytically the Lagrangian index for the restart model, which applies to the optimal web crawling and the minimization of the weighted age of information. We also give a new proof of asymptotic optimality in the case of homogeneous arms as the number of arms goes to infinity, based on exchangeability and de Finetti's theorem.
https://arxiv.org/abs/2412.12641
Academic Papers
svg
39cf67ed2e5c1e47ad3513ca3bbd1e38e28555e8343173b694ebec62e98ad74a
2026-01-01T00:00:00-05:00
OnlineVPO: Align Video Diffusion Model with Online Video-Centric Preference Optimization
arXiv:2412.15159v2 Announce Type: replace Abstract: Video diffusion models (VDMs) have demonstrated remarkable capabilities in text-to-video (T2V) generation. Despite their success, VDMs still suffer from degraded image quality and flickering artifacts. To address these issues, some approaches have introduced preference learning to exploit human feedback to enhance the video generation. However, these methods primarily adopt the routine in the image domain without an in-depth investigation into video-specific preference optimization. In this paper, we reexamine the design of the video preference learning from two key aspects: feedback source and feedback tuning methodology, and present OnlineVPO, a more efficient preference learning framework tailored specifically for VDMs. On the feedback source, we found that the image-level reward model commonly used in existing methods fails to provide a human-aligned video preference signal due to the modality gap. In contrast, video quality assessment (VQA) models show superior alignment with human perception of video quality. Building on this insight, we propose leveraging VQA models as a proxy of humans to provide more modality-aligned feedback for VDMs. Regarding the preference tuning methodology, we introduce an online DPO algorithm tailored for VDMs. It not only enjoys the benefits of superior scalability in optimizing videos with higher resolution and longer duration compared with existing methods, but also mitigates the insufficient optimization issue caused by off-policy learning via online preference generation and curriculum preference update designs. Extensive experiments on the open-source video-diffusion model demonstrate OnlineVPO as a simple yet effective and, more importantly, scalable preference learning algorithm for video diffusion models.
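For readers unfamiliar with the base objective, a per-pair sketch of the standard (offline) DPO loss is below; OnlineVPO's contribution lies in how preference pairs are generated online and scheduled, which this toy does not capture, and all log-probabilities here are made up:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss on one (chosen, rejected) pair:
    -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy already prefers the chosen sample more than the reference
# model does, the margin is positive and the loss drops below log(2).
better = dpo_loss(logp_w=-5.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-8.0)
neutral = dpo_loss(logp_w=-6.0, logp_l=-8.0, ref_logp_w=-6.0, ref_logp_l=-8.0)
```

At a zero margin the loss equals log 2; gradient descent on it pushes the policy's relative preference for the chosen sample above the reference model's.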
https://arxiv.org/abs/2412.15159
Academic Papers
svg
81f88e37a743a95eea586c91d368bfc247d63f7c3c66d13600cb78cb6222852b
2026-01-01T00:00:00-05:00
Quantifying Positional Biases in Text Embedding Models
arXiv:2412.15241v4 Announce Type: replace Abstract: Embedding models are crucial for tasks in Information Retrieval (IR) and semantic similarity measurement, yet their handling of longer texts and associated positional biases remains underexplored. In this study, we investigate the impact of content position and input size on text embeddings. Our experiments reveal that embedding models, irrespective of their positional encoding mechanisms, disproportionately prioritize the beginning of an input. Ablation studies demonstrate that insertion of irrelevant text or removal at the start of a document reduces cosine similarity between altered and original embeddings by up to 12.3% more than ablations at the end. Regression analysis further confirms this bias, with sentence importance declining as position moves further from the start, even with content-agnosticity. We hypothesize that this effect arises from pre-processing strategies and chosen positional encoding techniques. These findings quantify the sensitivity of retrieval systems and suggest a new lens towards embedding model robustness.
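The ablation protocol (compare a document's embedding before and after removing content at a given position) can be sketched with a toy embedder. A real replication would substitute an actual embedding model; the deterministic bag-of-words stand-in below has no positional bias of its own, so it only illustrates the measurement, not the finding:

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Toy stand-in for a sentence embedding model: mean of deterministic
    per-word random vectors (a real study would call a neural encoder)."""
    vecs = []
    for w in text.lower().split():
        seed = int.from_bytes(hashlib.sha256(w.encode()).digest()[:4], "big")
        vecs.append(np.random.default_rng(seed).normal(size=dim))
    return np.mean(vecs, axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc = "retrieval models encode long documents into a single dense vector"
words = doc.split()
half = len(words) // 2
# Cosine similarity between the original embedding and each ablation;
# the paper's finding is that start-ablation hurts more than end-ablation.
sim_after_start_ablation = cos(embed(doc), embed(" ".join(words[half:])))
sim_after_end_ablation = cos(embed(doc), embed(" ".join(words[:half])))
```

With a position-aware encoder in place of `embed`, the reported bias would appear as `sim_after_start_ablation` sitting systematically below `sim_after_end_ablation`.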
https://arxiv.org/abs/2412.15241
Academic Papers
svg
cf4b4ebad6f3ed90d8d8aa4db4cb48ed0dfdde415f570b5f2ed99a9c29ce804d
2026-01-01T00:00:00-05:00
Quantum $(r,\delta)$-locally recoverable codes
arXiv:2412.16590v3 Announce Type: replace Abstract: Classical $(r,\delta)$-locally recoverable codes are designed for avoiding loss of information in large scale distributed and cloud storage systems. We introduce the quantum counterpart of those codes by defining quantum $(r,\delta)$-locally recoverable codes which are quantum error-correcting codes capable of correcting $\delta -1$ qudit erasures from sets of at most $r+ \delta -1$ qudits. We give a necessary and sufficient condition for a quantum stabilizer code $Q(C)$ to be $(r,\delta)$-locally recoverable. Our condition depends only on the puncturing and shortening at suitable sets of both the symplectic self-orthogonal code $C$ used for constructing $Q(C)$ and its symplectic dual $C^{\perp_s}$. When $Q(C)$ comes from a Hermitian or Euclidean dual-containing code, and under an extra condition, we show that there is an equivalence between the classical and quantum concepts of $(r,\delta)$-local recoverability. A Singleton-like bound is stated in this case and examples attaining the bound are given.
https://arxiv.org/abs/2412.16590
Academic Papers
svg
845de599922816df0fe347e0242c620c0fda01818145f3ce029e302a93f03306
2026-01-01T00:00:00-05:00
A Gas-Kinetic Scheme for Maxwell Equations
arXiv:2412.16845v2 Announce Type: replace Abstract: The Gas-Kinetic Scheme (GKS), widely used in computational fluid dynamics for simulating hypersonic and other complicated flow phenomena, is extended in this work to electromagnetic problems by solving Maxwell's equations. In contrast to the classical GKS formulation, the proposed scheme employs a discrete rather than a continuous velocity space. By evaluating a time-accurate numerical flux at cell interfaces, the proposed scheme attains second-order accuracy within a single step. Its kinetic formulation provides an inherently multidimensional framework, while the finite-volume formulation ensures straightforward extension to unstructured meshes. Through the incorporation of a collision process, the scheme exhibits lower numerical dissipation than classical flux-vector splitting (FVS) methods. Furthermore, the kinetic decomposition enables direct implementation of non-reflecting boundary conditions. The proposed scheme is validated against several benchmark problems and compared with established methods, including the Finite-Difference Time-Domain (FDTD) method and FVS. A lattice Boltzmann method (LBM) implementation is also included for comparative analysis. Finally, the technique is applied to simulate electromagnetic wave propagation in a realistic aircraft configuration, demonstrating its ability to model complex geometries.
https://arxiv.org/abs/2412.16845
Academic Papers
svg
6d3e538a8be6e80d52a9f71499c0129719fc00a4bf9515e74393054426f08d04
2026-01-01T00:00:00-05:00
Distributed Graph Algorithms with Predictions
arXiv:2501.05267v2 Announce Type: replace Abstract: We initiate the study of deterministic distributed graph algorithms with predictions in synchronous message passing systems. The process at each node in the graph is given a prediction, which is some extra information about the problem instance that may be incorrect. The processes may use the predictions to help them solve the problem. The overall goal is to develop algorithms that both work faster when predictions are good and do not work much worse than algorithms without predictions when predictions are bad. Concepts from the more general area of algorithms with predictions, such as error measures, consistency, robustness, and smoothness, are adapted to distributed graph algorithms with predictions. We consider algorithms with predictions for distributed graph problems, where each node is given a prediction for its output. We present a framework for evaluating distributed graph algorithms with predictions and methods for transforming existing algorithms without predictions to effectively use predictions. Our approach is illustrated by developing algorithms with predictions for the Maximal Independent Set problem. We also include a discussion of error measures and demonstrate how fine-tuning an error measure towards a particular problem can yield stronger results about the performance of algorithms for that problem.
https://arxiv.org/abs/2501.05267
Academic Papers
svg
c1c1a718968c4d43bdedf38bccdb8a35b0cfaae9a143aba22c90595a3b296564
2026-01-01T00:00:00-05:00
EmotiCrafter: Text-to-Emotional-Image Generation based on Valence-Arousal Model
arXiv:2501.05710v3 Announce Type: replace Abstract: Recent research shows that emotions can enhance users' cognition and influence information communication. While research on visual emotion analysis is extensive, limited work has been done on helping users generate emotionally rich image content. Existing work on emotional image generation relies on discrete emotion categories, making it challenging to capture complex and subtle emotional nuances accurately. Additionally, these methods struggle to control the specific content of generated images based on text prompts. In this work, we introduce the new task of continuous emotional image content generation (C-EICG) and present EmotiCrafter, an emotional image generation model that generates images based on text prompts and Valence-Arousal values. Specifically, we propose a novel emotion-embedding mapping network that embeds Valence-Arousal values into textual features, enabling the capture of specific emotions in alignment with intended input prompts. Additionally, we introduce a loss function to enhance emotion expression. The experimental results show that our method effectively generates images representing specific emotions with the desired content and outperforms existing techniques.
https://arxiv.org/abs/2501.05710
Academic Papers
svg
aca8967c5753a0ed1608f2507ee74f53bd4c2acf25e5842d4748951f0a97d596
2026-01-01T00:00:00-05:00
Detection of AI Deepfake and Fraud in Online Payments Using GAN-Based Models
arXiv:2501.07033v3 Announce Type: replace Abstract: This study explores the use of Generative Adversarial Networks (GANs) to detect AI deepfakes and fraudulent activities in online payment systems. With the growing prevalence of deepfake technology, which can manipulate facial features in images and videos, the potential for fraud in online transactions has escalated. Traditional security systems struggle to identify these sophisticated forms of fraud. This research proposes a novel GAN-based model that enhances online payment security by identifying subtle manipulations in payment images. The model is trained on a dataset consisting of real-world online payment images and deepfake images generated using advanced GAN architectures, such as StyleGAN and DeepFake. The results demonstrate that the proposed model can accurately distinguish between legitimate transactions and deepfakes, achieving a high detection rate above 95%. This approach significantly improves the robustness of payment systems against AI-driven fraud. The paper contributes to the growing field of digital security, offering insights into the application of GANs for fraud detection in financial services. Keywords: Payment Security, Image Recognition, Generative Adversarial Networks, AI Deepfake, Fraudulent Activities
https://arxiv.org/abs/2501.07033
Academic Papers
svg
624e8e3ca6d02cd6199f97663be6c832145c49e8fc5c25feaedc48d608aefab6
2026-01-01T00:00:00-05:00
Towards autonomous photogrammetric forest inventory using a lightweight under-canopy robotic drone
arXiv:2501.12073v4 Announce Type: replace Abstract: Drones are increasingly used in forestry to capture high-resolution remote sensing data, supporting enhanced monitoring, assessment, and decision-making processes. While operations above the forest canopy are already highly automated, flying inside forests remains challenging, primarily relying on manual piloting. In dense forests, relying on the Global Navigation Satellite System (GNSS) for localization is not feasible. In addition, the drone must autonomously adjust its flight path to avoid collisions. Recently, advancements in robotics have enabled autonomous drone flights in GNSS-denied obstacle-rich areas. In this article, a step towards autonomous forest data collection is taken by building a prototype of a robotic under-canopy drone utilizing state-of-the-art open source methods and validating its performance for data collection inside forests. Specifically, the study focused on camera-based autonomous flight under the forest canopy and photogrammetric post-processing of the data collected with the low-cost onboard stereo camera. The autonomous flight capability of the prototype was evaluated through multiple test flights in boreal forests. The tree parameter estimation capability was studied by performing diameter at breast height (DBH) estimation. The prototype successfully carried out flights in selected challenging forest environments, and the experiments showed promising performance in forest 3D modeling with a miniaturized stereoscopic photogrammetric system. The DBH estimation achieved a root mean square error (RMSE) of 3.33 - 3.97 cm (10.69 - 12.98 %) across all trees. For trees with a DBH less than 30 cm, the RMSE was 1.16 - 2.56 cm (5.74 - 12.47 %). The results provide valuable insights into autonomous under-canopy forest mapping and highlight the critical next steps for advancing lightweight robotic drone systems for mapping complex forest environments.
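The RMSE and relative-RMSE figures quoted above follow the usual definitions; a minimal sketch with hypothetical DBH values (not the paper's data):

```python
import math

def rmse(pred, true):
    """Root mean square error between estimates and references."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def relative_rmse(pred, true):
    """RMSE as a percentage of the mean reference value, matching the
    abstract's '3.33 cm (10.69 %)' style of reporting."""
    return 100.0 * rmse(pred, true) / (sum(true) / len(true))

# Hypothetical DBH estimates vs. field-measured references, in cm.
est = [24.1, 31.5, 18.2, 40.3]
ref = [25.0, 30.0, 19.0, 42.0]
err_cm = rmse(est, ref)
err_pct = relative_rmse(est, ref)
```

Restricting both lists to trees with reference DBH under 30 cm before calling the functions reproduces the paper's stratified reporting.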
https://arxiv.org/abs/2501.12073
Academic Papers
svg
79156ea53e59d6926c31c45fee44544a70d3fa07e0baf8441ea3d6fb1fb2cb88
2026-01-01T00:00:00-05:00
Knowledge-Driven Federated Graph Learning on Model Heterogeneity
arXiv:2501.12624v4 Announce Type: replace Abstract: Federated graph learning (FGL) has emerged as a promising paradigm for collaborative graph representation learning, enabling multiple parties to jointly train models while preserving data privacy. However, most existing approaches assume homogeneous client models and largely overlook the challenge of model-centric heterogeneous FGL (MHtFGL), which frequently arises in practice when organizations employ graph neural networks (GNNs) of different scales and architectures. Such architectural diversity not only undermines smooth server-side aggregation, which presupposes a unified representation space shared across clients' updates, but also further complicates the transfer and integration of structural knowledge across clients. To address this issue, we propose the Federated Graph Knowledge Collaboration (FedGKC) framework. FedGKC introduces a lightweight Copilot Model on each client to facilitate knowledge exchange while local architectures are heterogeneous across clients, and employs two complementary mechanisms: Client-side Self-Mutual Knowledge Distillation, which transfers effective knowledge between local and copilot models through bidirectional distillation with multi-view perturbation; and Server-side Knowledge-Aware Model Aggregation, which dynamically assigns aggregation weights based on knowledge provided by clients. Extensive experiments on eight benchmark datasets demonstrate that FedGKC achieves an average accuracy gain of 3.88% over baselines in MHtFGL scenarios, while maintaining excellent performance in homogeneous settings.
https://arxiv.org/abs/2501.12624
Academic Papers
svg
31c3e00823e8c281a1d2e2cf6296e7ba08418374552bc9a000996795792e772b
2026-01-01T00:00:00-05:00
Illusions of Relevance: Arbitrary Content Injection Attacks Deceive Retrievers, Rerankers, and LLM Judges
arXiv:2501.18536v2 Announce Type: replace Abstract: This work considers a black-box threat model in which adversaries attempt to propagate arbitrary non-relevant content in search. We show that retrievers, rerankers, and LLM relevance judges are all highly vulnerable to attacks that enable arbitrary content to be promoted to the top of search results and to be assigned perfect relevance scores. We investigate how attackers may achieve this via content injection, injecting arbitrary sentences into relevant passages or query terms into arbitrary passages. Our study analyzes how factors such as model class and size, the balance between relevant and non-relevant content, injection location, toxicity and severity of injected content, and the role of LLM-generated content influence attack success, yielding novel, concerning, and often counterintuitive results. Our results reveal a weakness in embedding models, LLM-based scoring models, and generative LLMs, raising concerns about the general robustness, safety, and trustworthiness of language models regardless of the type of model or the role in which they are employed. We also emphasize the challenges of robust defenses against these attacks. Classifiers and more carefully prompted LLM judges often fail to recognize passages with content injection, especially when considering diverse text topics and styles. Our findings highlight the need for further research into arbitrary content injection attacks. We release our code for further study.
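The query-term injection attack is easy to reproduce against a toy lexical scorer (a stand-in for the neural retrievers and rerankers actually studied, which the paper shows are vulnerable in the same way); appending the query's terms to an arbitrary non-relevant passage raises its score. The texts below are invented:

```python
import math
from collections import Counter

def tf_cosine(a, b):
    """Cosine similarity between raw term-frequency vectors of two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

query = "symptoms of vitamin d deficiency"
irrelevant = "buy our amazing new product today at a discount"
# Attack: append the query's terms to the arbitrary non-relevant passage.
injected = irrelevant + " " + query

score_before = tf_cosine(query, irrelevant)
score_after = tf_cosine(query, injected)
```

Against neural models the effect is subtler than this lexical toy suggests, which is precisely what makes the attacks hard for classifiers and LLM judges to detect.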
https://arxiv.org/abs/2501.18536
Academic Papers
svg
0afc40fb5285a2b9b4b08a29f8303802b08f5f23b48753c228e392ef81893ebb
2026-01-01T00:00:00-05:00
MaxInfo: A Training-Free Key-Frame Selection Method Using Maximum Volume for Enhanced Video Understanding
arXiv:2502.03183v3 Announce Type: replace Abstract: Modern Video Large Language Models (VLLMs) often rely on uniform frame sampling for video understanding, but this approach frequently fails to capture critical information due to frame redundancy and variations in video content. We propose MaxInfo, the first training-free method based on the maximum volume principle, which is available in Fast and Slow versions and a Chunk-based version that selects and retains the most representative frames from a video. By maximizing the geometric volume formed by selected embeddings, MaxInfo ensures that the chosen frames cover the most informative regions of the embedding space, effectively reducing redundancy while preserving diversity. This method enhances the quality of input representations and improves long video comprehension performance across benchmarks. For instance, MaxInfo achieves a 3.28% improvement on LongVideoBench and a 6.4% improvement on EgoSchema for LLaVA-Video-7B. Moreover, MaxInfo boosts LongVideoBench performance by 3.47% on LLaVA-Video-72B and 3.44% on MiniCPM4.5. The approach is simple to implement and works with existing VLLMs without the need for additional training and very lower latency, making it a practical and effective alternative to traditional uniform sampling methods. Our code are available at https://github.com/FusionBrainLab/MaxInfo.git
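A greedy reading of the maximum volume principle (our own sketch; the paper's Fast, Slow, and Chunk-based variants differ in details) picks, at each step, the frame embedding that most enlarges the Gram determinant of the selected set, so near-duplicate frames are skipped in favor of diverse ones:

```python
import numpy as np

def greedy_max_volume(E, k):
    """Greedily select k row indices of E (n_frames x dim embeddings)
    maximizing the Gram determinant, i.e. the squared volume spanned
    by the selected embeddings."""
    selected = []
    for _ in range(k):
        best_i, best_vol = None, -1.0
        for i in range(len(E)):
            if i in selected:
                continue
            S = E[selected + [i]]
            vol = np.linalg.det(S @ S.T)   # squared volume of the span
            if vol > best_vol:
                best_i, best_vol = i, vol
        selected.append(best_i)
    return selected

# Three near-duplicate frame embeddings plus one orthogonal outlier:
# a volume criterion keeps one duplicate and the outlier.
E = np.array([[1.0, 0.0, 0.0],
              [0.99, 0.01, 0.0],
              [1.0, 0.01, 0.01],
              [0.0, 1.0, 0.0]])
idx = greedy_max_volume(E, 2)
```

Uniform sampling over these four frames would likely return two near-duplicates; the volume criterion instead pairs a duplicate with the orthogonal frame, which is the redundancy-reduction behavior the abstract describes.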
https://arxiv.org/abs/2502.03183
Academic Papers
svg
50e30f4e86a7caeb8e17322568066dbcc633efb7d1fc9fe4a52d26e6daa91865
2026-01-01T00:00:00-05:00
An Empirical Study of Methods for Small Object Detection from Satellite Imagery
arXiv:2502.03674v2 Announce Type: replace Abstract: This paper reviews object detection methods for finding small objects from remote sensing imagery and provides an empirical evaluation of four state-of-the-art methods to gain insights into method performance and technical challenges. In particular, we use car detection from urban satellite images and bee box detection from satellite images of agricultural lands as application scenarios. Drawing from the existing surveys and literature, we identify several top-performing methods for the empirical study. Public, high-resolution satellite image datasets are used in our experiments.
https://arxiv.org/abs/2502.03674
Academic Papers
svg
ad99b3c0887cabaa3d058d8d6bc01a23a0b341bfef7a93e2bb71c0ea1f2b4f93
2026-01-01T00:00:00-05:00
Large Multimodal Models for Low-Resource Languages: A Survey
arXiv:2502.05568v2 Announce Type: replace Abstract: In this survey, we systematically analyze techniques used to adapt large multimodal models (LMMs) for low-resource (LR) languages, examining approaches ranging from visual enhancement and data creation to cross-modal transfer and fusion strategies. Through a comprehensive analysis of 117 studies across 96 LR languages, we identify key patterns in how researchers tackle the challenges of limited data and computational resources. We categorize works into resource-oriented and method-oriented contributions, further dividing contributions into relevant sub-categories. We compare method-oriented contributions in terms of performance and efficiency, discussing benefits and limitations of representative studies. We find that visual information often serves as a crucial bridge for improving model performance in LR settings, though significant challenges remain in areas such as hallucination mitigation and computational efficiency. In summary, we provide researchers with a clear understanding of current approaches and remaining challenges in making LMMs more accessible to speakers of LR (understudied) languages. We complement our survey with an open-source repository available at: https://github.com/marianlupascu/LMM4LRL-Survey.
https://arxiv.org/abs/2502.05568
Academic Papers
svg
2f5c632eb1e43dc20dc6a1da0593a0c363f2f16c0dfc8b4b4a5cadb77d359103
2026-01-01T00:00:00-05:00
Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task, Multi-Dialogue Settings
arXiv:2502.11007v4 Announce Type: replace Abstract: Compared to traditional machine learning models, recent large language models (LLMs) can exhibit multi-task-solving capabilities through multiple dialogues and multi-modal data sources. These unique characteristics of LLMs, together with their large model size, make their deployment more challenging. Specifically, (i) deploying LLMs on local devices faces computational, memory, and energy resource issues, while (ii) deploying them in the cloud cannot guarantee real-time service and incurs communication/usage costs. In this paper, we design TMO, a local-cloud LLM inference system with Three-M Offloading: Multi-modal, Multi-task, and Multi-dialogue. TMO incorporates (i) a lightweight local LLM that can process simple tasks at high speed and (ii) a large-scale cloud LLM that can handle multi-modal data sources. We develop a resource-constrained reinforcement learning (RCRL) strategy for TMO that optimizes the inference location (i.e., local vs. cloud) and multi-modal data sources to use for each task/dialogue, aiming to maximize the long-term reward (response quality, latency, and usage cost) while adhering to resource constraints. We also contribute M4A1, a new dataset we curated that contains reward and cost metrics across multiple modality, task, dialogue, and LLM configurations, enabling evaluation of offloading decisions. We demonstrate the effectiveness of TMO compared to several exploration-decision and LLM-as-Agent baselines, showing significant improvements in latency, cost, and response quality.
https://arxiv.org/abs/2502.11007
Academic Papers
svg
31fea3e78d7a96d10cc3e31791dee92ec240e488236859063fcd7a4b864a8017
2026-01-01T00:00:00-05:00
Daily Land Surface Temperature Reconstruction in Landsat Cross-Track Areas Using Deep Ensemble Learning With Uncertainty Quantification
arXiv:2502.14433v2 Announce Type: replace Abstract: Many real-world applications rely on land surface temperature (LST) data at high spatiotemporal resolution. In complex urban areas, LST exhibits significant variations, fluctuating dramatically within and across city blocks. Landsat provides high spatial resolution data at 100 meters but is limited by long revisit time, with cloud cover further disrupting data collection. Here, we propose DELAG, a deep ensemble learning method that integrates annual temperature cycles and Gaussian processes, to reconstruct Landsat LST in complex urban areas. Leveraging the cross-track characteristics and dual-satellite operation of Landsat since 2021, we further enhance data availability to 4 scenes every 16 days. We select New York City, London and Hong Kong from three different continents as study areas. Experiments show that DELAG successfully reconstructed LST in the three cities under clear-sky (RMSE = 0.73-0.96 K) and heavily-cloudy (RMSE = 0.84-1.62 K) situations, superior to existing methods. Additionally, DELAG can quantify uncertainty that enhances LST reconstruction reliability. We further tested the reconstructed LST to estimate near-surface air temperature, achieving results (RMSE = 1.48-2.11 K) comparable to those derived from clear-sky LST (RMSE = 1.63-2.02 K). The results demonstrate the successful reconstruction through DELAG and highlight the broader applications of LST reconstruction for estimating accurate air temperature. Our study thus provides a novel and practical method for Landsat LST reconstruction, particularly suited for complex urban areas within Landsat cross-track areas, taking one step toward addressing complex climate events at high spatiotemporal resolution. Code and data are available at https://skrisliu.com/delag
https://arxiv.org/abs/2502.14433
Academic Papers
svg
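The annual temperature cycle (ATC) component that DELAG combines with Gaussian processes can be sketched as an ordinary least-squares fit of a first-order harmonic. This is generic ATC math under assumed variable names, not the paper's implementation.

```python
import numpy as np

def fit_atc(days, lst):
    """Least-squares fit of a first-order annual temperature cycle:
    LST(d) ~ mean + A*sin(2*pi*d/365.25) + B*cos(2*pi*d/365.25).
    Sketch of a common ATC baseline, not DELAG's code."""
    w = 2 * np.pi * np.asarray(days, dtype=float) / 365.25
    M = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(M, np.asarray(lst, dtype=float), rcond=None)
    return coef  # (annual mean, sine amplitude, cosine amplitude)

days = np.arange(0, 365, 16)                       # Landsat-like revisit sampling
lst = 290 + 10 * np.sin(2 * np.pi * days / 365.25)  # synthetic seasonal signal
coef = fit_atc(days, lst)                           # recovers (290, 10, 0)
```

Residuals of such a fit are what a Gaussian process can then model as spatially and temporally correlated deviations.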
f9a906429766f90c7e7e6686cace4a98835b2a1cc3fdef93b0cf716ff5b621a4
2026-01-01T00:00:00-05:00
ReVision: A Dataset and Baseline VLM for Privacy-Preserving Task-Oriented Visual Instruction Rewriting
arXiv:2502.14780v2 Announce Type: replace Abstract: Efficient and privacy-preserving multimodal interaction is essential as AR, VR, and modern smartphones with powerful cameras become primary interfaces for human-computer communication. Existing powerful large vision-language models (VLMs) enabling multimodal interaction often rely on cloud-based processing, raising significant concerns about (1) visual privacy by transmitting sensitive vision data to servers, and (2) their limited real-time, on-device usability. This paper explores Visual Instruction Rewriting, a novel approach that transforms multimodal instructions into text-only commands, allowing seamless integration of lightweight on-device instruction rewriter VLMs (250M parameters) with existing conversational AI systems, enhancing vision data privacy. To achieve this, we present a dataset of over 39,000 examples across 14 domains and develop a compact VLM, pretrained on image captioning datasets and fine-tuned for instruction rewriting. Experimental results, evaluated through NLG metrics such as BLEU, METEOR, and ROUGE, along with semantic parsing analysis, demonstrate that even a quantized version of the model (<500MB storage footprint) can achieve effective instruction rewriting, thus enabling privacy-focused, multimodal AI applications.
https://arxiv.org/abs/2502.14780
Academic Papers
svg
0adbbc69c694d323568db0fa8f0c96981475496d7f198fec55f12532c5a7682e
2026-01-01T00:00:00-05:00
CAML: Collaborative Auxiliary Modality Learning for Multi-Agent Systems
arXiv:2502.17821v3 Announce Type: replace Abstract: Multi-modal learning has emerged as a key technique for improving performance across domains such as autonomous driving, robotics, and reasoning. However, in certain scenarios, particularly in resource-constrained environments, some modalities available during training may be absent during inference. While existing frameworks effectively utilize multiple data sources during training and enable inference with reduced modalities, they are primarily designed for single-agent settings. This poses a critical limitation in dynamic environments such as connected autonomous vehicles (CAV), where incomplete data coverage can lead to decision-making blind spots. Conversely, some works explore multi-agent collaboration but without addressing missing modality at test time. To overcome these limitations, we propose Collaborative Auxiliary Modality Learning (CAML), a novel multi-modal multi-agent framework that enables agents to collaborate and share multi-modal data during training, while allowing inference with reduced modalities during testing. Experimental results in collaborative decision-making for CAV in accident-prone scenarios demonstrate that CAML achieves up to a 58.1% improvement in accident detection. Additionally, we validate CAML on real-world aerial-ground robot data for collaborative semantic segmentation, achieving up to a 10.6% improvement in mIoU.
https://arxiv.org/abs/2502.17821
Academic Papers
svg
60a74768c6ecd5696f34a11631d14a7d04f72badffb3c0910a28dd7380b22bab
2026-01-01T00:00:00-05:00
SciceVPR: Stable Cross-Image Correlation Enhanced Model for Visual Place Recognition
arXiv:2502.20676v2 Announce Type: replace Abstract: Visual Place Recognition (VPR) is a major challenge for robotics and autonomous systems, with the goal of predicting the location of an image based solely on its visual features. State-of-the-art (SOTA) models extract global descriptors using the powerful foundation model DINOv2 as backbone. These models either explore the cross-image correlation or propose a time-consuming two-stage re-ranking strategy to achieve better performance. However, existing works only utilize the final output of DINOv2, and the current cross-image correlation causes unstable retrieval results. To produce both discriminative and constant global descriptors, this paper proposes stable cross-image correlation enhanced model for VPR called SciceVPR. This model explores the full potential of DINOv2 in providing useful feature representations that implicitly encode valuable contextual knowledge. Specifically, SciceVPR first uses a multi-layer feature fusion module to capture increasingly detailed task-relevant channel and spatial information from the multi-layer output of DINOv2. Secondly, SciceVPR considers the invariant correlation between images within a batch as valuable knowledge to be distilled into the proposed self-enhanced encoder. In this way, SciceVPR can acquire fairly robust global features regardless of domain shifts (e.g., changes in illumination, weather and viewpoint between pictures taken in the same place). Experimental results demonstrate that the base variant, SciceVPR-B, outperforms SOTA one-stage methods with single input on multiple datasets with varying domain conditions. The large variant, SciceVPR-L, performs on par with SOTA two-stage models, scoring over 3% higher in Recall@1 compared to existing models on the challenging Tokyo24/7 dataset. Our code will be released at https://github.com/shuimushan/SciceVPR.
https://arxiv.org/abs/2502.20676
Academic Papers
svg
e5f43cf93b8072ca2645d72be6ae227f7a9b0a2c59c021b59ac65997d299f12b
2026-01-01T00:00:00-05:00
Recent Advances in Numerical Solutions for Hamilton-Jacobi PDEs
arXiv:2502.20833v2 Announce Type: replace Abstract: Hamilton-Jacobi partial differential equations (HJ PDEs) play a central role in many applications such as economics, physics, and engineering. These equations describe the evolution of a value function which encodes valuable information about the system, such as action, cost, or level sets of a dynamic process. Their importance lies in their ability to model diverse phenomena, ranging from the propagation of fronts in computational physics to optimal decision-making in control systems. This paper provides a review of some recent advances in numerical methods to address challenges such as high-dimensionality, nonlinearity, and computational efficiency. By examining these developments, this paper sheds light on important techniques and emerging directions in the numerical solution of HJ PDEs.
https://arxiv.org/abs/2502.20833
Academic Papers
svg
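As a concrete instance of the classical schemes such reviews cover, the 1-D eikonal equation |u'(x)| = 1 with u = 0 at source points can be solved by fast sweeping; the viscosity solution is the distance to the sources. This is textbook numerics, offered only as a minimal example.

```python
import numpy as np

def eikonal_1d(n, h, sources):
    """Fast-sweeping solution of the 1-D eikonal HJ PDE |u'(x)| = 1
    with u = 0 at the given source indices (minimal sketch)."""
    u = np.full(n, np.inf)
    u[list(sources)] = 0.0
    for _ in range(2):                   # alternating sweeps
        for i in range(1, n):            # left-to-right
            u[i] = min(u[i], u[i - 1] + h)
        for i in range(n - 2, -1, -1):   # right-to-left
            u[i] = min(u[i], u[i + 1] + h)
    return u

u = eikonal_1d(11, 0.1, sources=[0])
# u[k] is approximately 0.1*k, the distance from the source at x = 0
```

In higher dimensions the same causal-update idea survives, but each node update solves a small quadratic, and handling nonconvex Hamiltonians or high dimension is where the reviewed modern methods come in.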
4f1e853eb08ae5977868c6f3666f758520ff1e05ffb86685ba7ea8f1e4029316
2026-01-01T00:00:00-05:00
OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction
arXiv:2503.03734v4 Announce Type: replace Abstract: Vision-Language-Action (VLA) models aim to predict robotic actions based on visual observations and language instructions. Existing approaches require fine-tuning pre-trained vision-language models (VLMs), as visual and language features are fed independently into downstream policies, degrading the pre-trained semantic alignments. We propose OTTER, a novel VLA architecture that leverages these existing alignments through explicit, text-aware visual feature extraction. Instead of processing all visual features, OTTER selectively extracts and passes only task-relevant visual features that are semantically aligned with the language instruction to the policy transformer. This allows OTTER to keep the pre-trained vision-language encoders frozen. Thereby, OTTER preserves and utilizes the rich semantic understanding learned from large-scale pre-training, enabling strong zero-shot generalization capabilities. In simulation and real-world experiments, OTTER significantly outperforms existing VLA models, demonstrating strong zero-shot generalization to novel objects and environments. Video, code, checkpoints, and dataset: https://ottervla.github.io/.
https://arxiv.org/abs/2503.03734
Academic Papers
svg
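OTTER-style text-aware selection can be sketched as keeping only the visual tokens most aligned (by cosine similarity) with the instruction embedding, with both encoders frozen. This is illustrative, not the paper's implementation; the function name and toy tensors are hypothetical.

```python
import numpy as np

def select_text_aware_tokens(visual_tokens, text_embedding, k):
    """Keep the k visual tokens most cosine-aligned with the
    instruction embedding (sketch of text-aware feature selection)."""
    V = np.asarray(visual_tokens, dtype=float)
    t = np.asarray(text_embedding, dtype=float)
    sims = (V @ t) / (np.linalg.norm(V, axis=1) * np.linalg.norm(t) + 1e-8)
    keep = np.sort(np.argsort(-sims)[:k])   # top-k, original order preserved
    return keep, V[keep]

V = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
idx, _ = select_text_aware_tokens(V, np.array([1.0, 0.0]), k=2)
# idx -> [0, 2]: the two tokens pointing along the instruction direction
```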
3bcbe76b1f9705b4eb7c4b7881fd2d3eadf3bf4b23bc653216d92ac66fe58d37
2026-01-01T00:00:00-05:00
Simple Self Organizing Map with Visual Transformer
arXiv:2503.04121v2 Announce Type: replace Abstract: Vision Transformers (ViTs) have demonstrated exceptional performance in various vision tasks. However, they tend to underperform on smaller datasets due to their inherent lack of inductive biases. Current approaches address this limitation implicitly, often by pairing ViTs with pretext tasks or by distilling knowledge from convolutional neural networks (CNNs) to strengthen the prior. In contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised framework, are inherently structured to preserve topology and spatial organization, making them a promising candidate to directly address the limitations of ViTs in limited or small training datasets. Despite this potential, equipping SOMs with modern deep learning architectures remains largely unexplored. In this study, we conduct a novel exploration on how Vision Transformers (ViTs) and Self-Organizing Maps (SOMs) can empower each other, aiming to bridge this critical research gap. Our findings demonstrate that these architectures can synergistically enhance each other, leading to significantly improved performance in both unsupervised and supervised tasks. Code is publicly available on GitHub.
https://arxiv.org/abs/2503.04121
Academic Papers
svg
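The SOM half of this pairing is the classic Kohonen update: find the best-matching unit, then pull every node toward the input with a Gaussian neighborhood factor that preserves grid topology. The sketch below uses a 1-D grid and assumed names; it is generic SOM math, not the paper's code.

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One online Self-Organizing Map update (classic Kohonen rule,
    1-D grid sketch). weights: (nodes, d); x: (d,)."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))   # neighborhood kernel
    return weights + lr * h[:, None] * (x - weights), bmu

w = np.zeros((5, 2))
w[2] = [1.0, 1.0]
w2, bmu = som_step(w, np.array([1.0, 1.0]), lr=0.5, sigma=1.0)
# bmu -> 2; neighbors of node 2 drift toward x, distant nodes barely move
```

It is this neighborhood-weighted pull, rather than any learned attention, that gives SOMs the topological inductive bias the abstract contrasts with ViTs.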
04bd8f45f4981107dca21e52f34950387ff3b84163d13aa3e05782a370483d38
2026-01-01T00:00:00-05:00
Illuminating Darkness: Learning to Enhance Low-light Images In-the-Wild
arXiv:2503.06898v3 Announce Type: replace Abstract: Single-shot low-light image enhancement (SLLIE) remains challenging due to the limited availability of diverse, real-world paired datasets. To bridge this gap, we introduce the Low-Light Smartphone Dataset (LSD), a large-scale, high-resolution (4K+) dataset collected in the wild across a wide range of challenging lighting conditions (0.1 to 200 lux). LSD contains 6,425 precisely aligned low and normal-light image pairs, selected from over 8,000 dynamic indoor and outdoor scenes through multi-frame acquisition and expert evaluation. To evaluate generalization and aesthetic quality, we collect 2,117 unpaired low-light images from previously unseen devices. To fully exploit LSD, we propose TFFormer, a hybrid model that encodes luminance and chrominance (LC) separately to reduce color-structure entanglement. We further propose a cross-attention-driven joint decoder for context-aware fusion of LC representations, along with LC refinement and LC-guided supervision to significantly enhance perceptual fidelity and structural consistency. TFFormer achieves state-of-the-art results on LSD (+2.45 dB PSNR) and substantially improves downstream vision tasks, such as low-light object detection (+6.80 mAP on ExDark).
https://arxiv.org/abs/2503.06898
Academic Papers
svg
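The luminance/chrominance (LC) separation that TFFormer encodes in distinct branches can be illustrated with a standard BT.601 YCbCr decomposition. This is generic color-space math; the paper's learned LC encoding is more elaborate.

```python
import numpy as np

def split_luma_chroma(rgb):
    """Separate an image into luminance and chrominance (BT.601 YCbCr,
    values in [0, 1]) -- generic decomposition, not TFFormer's encoder."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance
    cb = 0.5 + (b - y) * 0.564                 # blue-difference chroma
    cr = 0.5 + (r - y) * 0.713                 # red-difference chroma
    return y, np.stack([cb, cr], axis=-1)

gray = np.full((2, 2, 3), 0.5)                 # neutral gray has no chroma
y, c = split_luma_chroma(gray)                 # y == 0.5, chroma == 0.5 (neutral)
```

Processing y and c in separate branches is what reduces the color-structure entanglement the abstract refers to: low-light noise mostly corrupts luminance, while chroma carries color fidelity.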
fcff1f62a7a128271714ffaff1390bb522dbbe373815accd09a14999f458ee12
2026-01-01T00:00:00-05:00
Effective and Efficient Jailbreaks of Black-Box LLMs with Cross-Behavior Attacks
arXiv:2503.08990v2 Announce Type: replace Abstract: Despite recent advancements in Large Language Models (LLMs) and their alignment, they can still be jailbroken, i.e., harmful and toxic content can be elicited from them. While existing red-teaming methods have shown promise in uncovering such vulnerabilities, these methods struggle with limited success and high computational and monetary costs. To address this, we propose a black-box Jailbreak method with Cross-Behavior attacks (JCB), which can automatically and efficiently find successful jailbreak prompts. JCB leverages successes from past behaviors to help jailbreak new behaviors, thereby significantly improving the attack efficiency. Moreover, JCB does not rely on time- and/or cost-intensive calls to auxiliary LLMs to discover/optimize the jailbreak prompts, making it highly efficient and scalable. Comprehensive experimental evaluations show that JCB significantly outperforms related baselines, requiring up to 94% fewer queries while still achieving 12.9% higher average attack success. JCB also achieves a notably high 37% attack success rate on Llama-2-7B, one of the most resilient LLMs, and shows promising zero-shot transferability across different LLMs.
https://arxiv.org/abs/2503.08990
Academic Papers
svg
7ce605256ee1e48699ee125b667c8d6c19fda2d7aaf71bf059eafa9c17b1646a
2026-01-01T00:00:00-05:00
Revisiting Agnostic Boosting
arXiv:2503.09384v3 Announce Type: replace Abstract: Boosting is a key method in statistical learning, allowing for converting weak learners into strong ones. While well studied in the realizable case, the statistical properties of weak-to-strong learning remain less understood in the agnostic setting, where there are no assumptions on the distribution of the labels. In this work, we propose a new agnostic boosting algorithm with substantially improved sample complexity compared to prior works under very general assumptions. Our approach is based on a reduction to the realizable case, followed by a margin-based filtering of high-quality hypotheses. Furthermore, we show a nearly-matching lower bound, settling the sample complexity of agnostic boosting up to logarithmic factors.
https://arxiv.org/abs/2503.09384
Academic Papers
svg
870dc69c5fa2b51e0ed164b1350fc8eb40be0ff95bd1689b5bdf0feb2988cdd1
2026-01-01T00:00:00-05:00
Adjusted Count Quantification Learning on Graphs
arXiv:2503.09395v2 Announce Type: replace Abstract: Quantification learning is the task of predicting the label distribution of a set of instances. We study this problem in the context of graph-structured data, where the instances are vertices. Previously, this problem has only been addressed via node clustering methods. In this paper, we extend the popular Adjusted Classify & Count (ACC) method to graphs. We show that the prior probability shift assumption upon which ACC relies is often not applicable to graph quantification problems. To address this issue, we propose structural importance sampling (SIS), the first graph quantification method that is applicable under (structural) covariate shift. Additionally, we propose Neighborhood-aware ACC, which improves quantification in the presence of non-homophilic edges. We show the effectiveness of our techniques on multiple graph quantification tasks.
https://arxiv.org/abs/2503.09395
Academic Papers
svg
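The classical Adjusted Classify & Count estimator this paper extends to graphs has a closed form: correct the raw positive-prediction rate using the classifier's true- and false-positive rates, then clip to a valid prevalence. Standard ACC math; the graph-specific SIS correction is not shown.

```python
def adjusted_classify_count(preds, tpr, fpr):
    """Adjusted Classify & Count (ACC): the classic quantification
    estimator the paper generalizes to graph vertices.
    preds: iterable of 0/1 predictions; tpr/fpr from validation data."""
    cc = sum(preds) / len(preds)            # raw positive-prediction rate
    acc = (cc - fpr) / (tpr - fpr)          # invert the misclassification mix
    return min(1.0, max(0.0, acc))          # clip to a valid prevalence

# a classifier with tpr=0.9, fpr=0.2 predicts 55% positives
p = adjusted_classify_count([1] * 55 + [0] * 45, tpr=0.9, fpr=0.2)
print(round(p, 3))  # 0.5
```

The correction assumes prior probability shift (tpr/fpr carry over to the target set), which is exactly the assumption the abstract argues often fails on graphs.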
b4c81ace253c61bd8ed11d58042aaae37dc27c0144fec221290629a88dfe9c8c
2026-01-01T00:00:00-05:00
Redefining non-IID Data in Federated Learning for Computer Vision Tasks: Migrating from Labels to Embeddings for Task-Specific Data Distributions
arXiv:2503.14553v4 Announce Type: replace Abstract: Federated Learning (FL) has emerged as one of the prominent paradigms for distributed machine learning (ML). However, it is well-established that its performance can degrade significantly under non-IID (non-independent and identically distributed) data distributions across clients. To study this effect, the existing works predominantly emulate data heterogeneity by imposing label distribution skew across clients. In this paper, we show that label distribution skew fails to fully capture the data heterogeneity in computer vision tasks beyond classification, exposing an overlooked gap in the literature. Motivated by this, by utilizing pre-trained deep neural networks to extract task-specific data embeddings, we define task-specific data heterogeneity through the lens of each vision task and introduce a new level of data heterogeneity called embedding-based data heterogeneity. Our methodology involves clustering data points based on embeddings and distributing them among clients using the Dirichlet distribution. Through extensive experiments, we evaluate the performance of different FL methods under our revamped notion of data heterogeneity, introducing new benchmark performance measures to the literature. For instance, across seven representative computer vision tasks, our embedding-based heterogeneity formulation leads to up to around 60% increase in the observed loss under FedAvg, indicating that it more accurately exposes the performance degradation caused by data heterogeneity. We further unveil a series of open research directions that can be pursued.
https://arxiv.org/abs/2503.14553
Academic Papers
svg
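The cluster-then-Dirichlet partitioning the abstract describes can be sketched as follows: each cluster's samples are split across clients in Dirichlet(alpha) proportions, with small alpha producing strong skew. This is a sketch under assumed names; in the paper the cluster labels come from clustering pre-trained embeddings.

```python
import numpy as np

def dirichlet_partition(cluster_ids, n_clients, alpha, seed=0):
    """Assign each sample to a client, splitting every cluster across
    clients with Dirichlet(alpha) proportions (embedding-based
    heterogeneity sketch; clusters would come from k-means on
    task-specific embeddings)."""
    rng = np.random.default_rng(seed)
    cluster_ids = np.asarray(cluster_ids)
    assignment = np.empty(len(cluster_ids), dtype=int)
    for c in np.unique(cluster_ids):
        idx = np.flatnonzero(cluster_ids == c)
        probs = rng.dirichlet([alpha] * n_clients)   # skewed for small alpha
        assignment[idx] = rng.choice(n_clients, size=len(idx), p=probs)
    return assignment

# small alpha -> most of each cluster concentrates on a few clients
a = dirichlet_partition([0] * 500 + [1] * 500, n_clients=5, alpha=0.1)
```

Because the clusters are defined in embedding space rather than by labels, the induced skew applies to any vision task, including the non-classification ones the paper targets.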
a36bc8fb8b8a4439f3844e02945843aa6169ed6e115d449881ec5ec75b583260
2026-01-01T00:00:00-05:00
A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond
arXiv:2503.21614v2 Announce Type: replace Abstract: Recent Large Reasoning Models (LRMs), such as DeepSeek-R1 and OpenAI o1, have demonstrated strong performance gains by scaling up the length of Chain-of-Thought (CoT) reasoning during inference. However, a growing concern lies in their tendency to produce excessively long reasoning traces, which are often filled with redundant content (e.g., repeated definitions), over-analysis of simple problems, and superficial exploration of multiple reasoning paths for harder tasks. This inefficiency introduces significant challenges for training, inference, and real-world deployment (e.g., in agent-based systems), where token economy is critical. In this survey, we provide a comprehensive overview of recent efforts aimed at improving reasoning efficiency in LRMs, with a particular focus on the unique challenges that arise in this new paradigm. We identify common patterns of inefficiency, examine methods proposed across the LRM lifecycle, i.e., from pretraining to inference, and discuss promising future directions for research. To support ongoing development, we also maintain a real-time GitHub repository tracking recent progress in the field. We hope this survey serves as a foundation for further exploration and inspires innovation in this rapidly evolving area.
https://arxiv.org/abs/2503.21614
Academic Papers
svg
44d99b3b44b4b96f52f66d36c661b5315a8344c81e3f57e52a7ccbf226ec58e9
2026-01-01T00:00:00-05:00
AINav: Large Language Model-Based Adaptive Interactive Navigation
arXiv:2503.22942v2 Announce Type: replace Abstract: Robotic navigation in complex environments remains a critical research challenge. Traditional navigation methods focus on optimal trajectory generation within a fixed free workspace and therefore struggle in environments lacking viable paths to the goal, such as disaster zones or cluttered warehouses. To address this problem, we propose AINav, an adaptive interactive navigation approach that proactively interacts with environments to create feasible paths to achieve originally unreachable goals. Specifically, we present a primitive skill tree for task planning with large language models (LLMs), facilitating effective reasoning to determine interaction objects and sequences. To ensure robust subtask execution, we adopt reinforcement learning to pre-train a comprehensive skill library containing versatile locomotion and interaction behaviors for motion planning. Furthermore, we introduce an adaptive replanning approach featuring two LLM-based modules: an advisor serving as a flexible replanning trigger and an arborist for autonomous plan adjustment. Integrated with the tree structure, the replanning mechanism allows for convenient node addition and pruning, enabling rapid plan adaptation in a priori unknown environments. Comprehensive simulations and experiments have demonstrated AINav's effectiveness and adaptivity in diverse scenarios. The supplementary video is available at: https://youtu.be/CjXm5KFx9AI.
https://arxiv.org/abs/2503.22942
Academic Papers
svg
100891f251769823402b1e39d7b0299380e5829c50c8ec4a797fabbb70536f71
2026-01-01T00:00:00-05:00
Lattice: Learning to Efficiently Compress the Memory
arXiv:2504.05646v2 Announce Type: replace Abstract: Attention mechanisms have revolutionized sequence learning but suffer from quadratic computational complexity. This paper introduces Lattice, a novel recurrent neural network (RNN) mechanism that leverages the inherent low-rank structure of K-V matrices to efficiently compress the cache into a fixed number of memory slots, achieving sub-quadratic complexity. We formulate this compression as an online optimization problem and derive a dynamic memory update rule based on a single gradient descent step. The resulting recurrence features a state- and input-dependent gating mechanism, offering an interpretable memory update process. The core innovation is the orthogonal update: each memory slot is updated exclusively with information orthogonal to its current state, hence incorporating only novel, non-redundant data to minimize interference with previously stored information. We derive an efficient computation for this orthogonal update rule and further approximate it with chunk-wise parallelization to ensure training scalability. Empirically, Lattice outperforms strong baselines on language modeling and associative recall tasks across diverse context lengths and model sizes, achieving superior memory efficiency with significantly reduced memory sizes.
https://arxiv.org/abs/2504.05646
Academic Papers
svg
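The orthogonal update described above can be sketched in a few lines: each slot absorbs only the component of the incoming vector orthogonal to its current state, so stored information is complemented rather than overwritten. This is a sketch in the spirit of the rule, with assumed shapes and a placeholder gate, not the paper's derived recurrence.

```python
import numpy as np

def orthogonal_update(memory, v, gate):
    """Orthogonal memory-slot update (sketch): add to each slot only the
    part of v orthogonal to that slot's current state.
    memory: (slots, d); v: (d,); gate: (slots,) in [0, 1]."""
    M = np.asarray(memory, dtype=float)
    g = np.asarray(gate, dtype=float)[:, None]
    denom = np.sum(M * M, axis=1, keepdims=True) + 1e-8
    proj = (M @ v)[:, None] / denom * M        # component of v along each slot
    return M + g * (v - proj)                  # keep only the orthogonal part

m = np.array([[1.0, 0.0]])
m2 = orthogonal_update(m, np.array([3.0, 4.0]), np.array([1.0]))
# the slot keeps its [1, 0] content and gains only the orthogonal [0, 4] part
```

An input already parallel to a slot leaves that slot essentially unchanged, which is the non-redundancy property the abstract highlights.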