
Daily Papers

by AK and the research community

Apr 7

Limitations of Quantum Hardware for Molecular Energy Estimation Using VQE

Variational quantum eigensolvers (VQEs) are among the most promising quantum algorithms for solving electronic structure problems in quantum chemistry, particularly during the Noisy Intermediate-Scale Quantum (NISQ) era. In this study, we investigate the capabilities and limitations of VQE algorithms implemented on current quantum hardware for determining molecular ground-state energies, focusing on the adaptive derivative-assembled pseudo-Trotter ansatz VQE (ADAPT-VQE). To address the significant computational challenges posed by molecular Hamiltonians, we explore various strategies to simplify the Hamiltonian, optimize the ansatz, and improve classical parameter optimization through modifications of the COBYLA optimizer. These enhancements are integrated into a tailored quantum computing implementation designed to minimize the circuit depth and computational cost. Using benzene as a benchmark system, we demonstrate the application of these optimizations on an IBM quantum computer. Despite these improvements, our results highlight the limitations imposed by current quantum hardware, particularly the impact of quantum noise on state preparation and energy measurement. The noise levels in today's devices prevent meaningful evaluations of molecular Hamiltonians with sufficient accuracy to produce reliable quantum chemical insights. Finally, we extrapolate the requirements for future quantum hardware to enable practical and scalable quantum chemistry calculations using VQE algorithms. This work provides a roadmap for advancing quantum algorithms and hardware toward achieving quantum advantage in molecular modeling.

  • 3 authors
·
Jun 4, 2025
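The abstract above pairs a variational quantum ansatz with the classical COBYLA optimizer. As a rough illustration of that hybrid loop (not the paper's ADAPT-VQE implementation, and nowhere near a benzene Hamiltonian), the sketch below minimizes the variational energy of a hypothetical single-qubit toy Hamiltonian with SciPy's derivative-free COBYLA, where the exact ground-state energy is known in closed form:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-qubit Hamiltonian H = Z + 0.5*X (illustrative only).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta, phi):
    """Toy ansatz: Ry(theta)|0> followed by a relative phase phi."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def energy(params):
    """Variational energy <psi|H|psi>, the cost function COBYLA minimizes."""
    psi = ansatz(*params)
    return np.real(psi.conj() @ H @ psi)

result = minimize(energy, x0=[0.5, 2.0], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]  # exact ground state, -sqrt(1.25)
print(f"VQE estimate: {result.fun:.6f}, exact: {exact:.6f}")
```

In this noiseless simulation the classical loop reaches the exact ground-state energy; on real hardware each energy evaluation would be estimated from noisy shot samples, which is exactly the limitation the paper quantifies.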

Recurrent Quantum Neural Networks

Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning, such as machine translation and speech synthesis. In contrast, applied quantum computing is in its infancy. Nevertheless, there already exist quantum machine learning models, such as variational quantum eigensolvers, which have been used successfully, e.g., in the context of energy minimization tasks. In this work we construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks such as sequence learning and integer digit classification. The QRNN cell is built from parametrized quantum neurons, which, in conjunction with amplitude amplification, create a nonlinear activation of polynomials of its inputs and cell state, and allow the extraction of a probability distribution over predicted classes at each step. To study the model's performance, we provide an implementation in PyTorch, which allows the relatively efficient optimization of parametrized quantum circuits with thousands of parameters. We establish a QRNN training setup by benchmarking optimization hyperparameters, and analyse suitable network topologies for simple memorisation and sequence prediction tasks from Elman's seminal paper (1990) on temporal structure learning. We then proceed to evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel-by-pixel and by utilising modern data augmentation as a preprocessing step. Finally, we analyse to what extent the unitary nature of the network counteracts the vanishing gradient problem that plagues many existing quantum classifiers and classical RNNs.

  • 1 author
·
Jun 25, 2020
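The abstract above relies on gradient-based optimization of parametrized quantum circuits (in the paper, via PyTorch autograd). A standard technique for this in the broader literature, sketched here on a hypothetical one-qubit example rather than the paper's QRNN cell, is the parameter-shift rule: the exact gradient of a rotation-gate expectation value is obtained from two extra circuit evaluations, with no finite-difference error.

```python
import numpy as np

def expval_z(theta):
    """<Z> after Ry(theta)|0>, simulated exactly for one qubit."""
    return np.cos(theta)

def prob_one(theta):
    """Probability of measuring |1>, i.e. (1 - <Z>) / 2."""
    return (1.0 - expval_z(theta)) / 2.0

def grad_expval_z(theta):
    """Parameter-shift rule: d<Z>/dtheta from two shifted evaluations."""
    return (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2)) / 2.0

# Train the rotation angle so that P(|1>) matches a target of 0.9.
target, theta, lr = 0.9, 0.5, 0.5
for _ in range(300):
    # Chain rule: loss = (P1 - target)^2, dP1/dtheta = -0.5 * d<Z>/dtheta.
    loss_grad = 2.0 * (prob_one(theta) - target) * (-0.5) * grad_expval_z(theta)
    theta -= lr * loss_grad

print(f"P(|1>) = {prob_one(theta):.4f} (target {target})")
```

Because the shifted evaluations are themselves circuit runs, the same rule works on hardware, whereas the paper's PyTorch simulation can instead backpropagate through the state vector directly.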

Learning Eigenstructures of Unstructured Data Manifolds

We introduce a novel framework that directly learns a spectral basis for shape and manifold analysis from unstructured data, eliminating the need for traditional operator selection, discretization, and eigensolvers. Grounded in optimal-approximation theory, we train a network to decompose an implicit approximation operator by minimizing the reconstruction error in the learned basis over a chosen distribution of probe functions. For suitable distributions, the learned operator and its decomposition can be seen as an approximation of the Laplacian operator and its eigendecomposition, which are fundamental in geometry processing. Furthermore, our method recovers in a unified manner not only the spectral basis, but also the implicit metric's sampling density and the eigenvalues of the underlying operator. Notably, our unsupervised method makes no assumptions about the data manifold, such as meshing or manifold dimensionality, allowing it to scale to arbitrary datasets of any dimension. On point clouds lying on surfaces in 3D and on high-dimensional image manifolds, our approach yields meaningful spectral bases that can resemble those of the Laplacian, without explicit construction of an operator. By replacing the traditional operator selection, construction, and eigendecomposition with a learning-based approach, our framework offers a principled, data-driven alternative to conventional pipelines. This opens new possibilities in geometry processing for unstructured data, particularly in high-dimensional spaces.
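The conventional pipeline this paper replaces, and its reconstruction-error criterion, can be sketched classically: build a graph Laplacian on a point cloud, eigendecompose it, and measure how well a probe function is reconstructed in a truncated eigenbasis. The toy below (a hypothetical point cloud on a circle, not the paper's learned method) shows that a low-frequency probe is captured by the first few Laplacian eigenvectors:

```python
import numpy as np

# Toy unstructured data: n points sampled uniformly on a circle.
n = 40
angles = 2 * np.pi * np.arange(n) / n
points = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Conventional pipeline: Gaussian-kernel graph Laplacian + eigensolver.
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5 ** 2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order

def recon_error(f, k):
    """Relative error of reconstructing probe f from the first k eigenvectors."""
    B = eigvecs[:, :k]
    return np.linalg.norm(f - B @ (B.T @ f)) / np.linalg.norm(f)

f = np.cos(2 * angles)  # probe function on the manifold
err1, err5 = recon_error(f, 1), recon_error(f, 5)
print(f"k=1: {err1:.2e}, k=5: {err5:.2e}")
```

The reconstruction error drops sharply once the basis includes the probe's frequency; the paper's framework minimizes this same kind of error directly over a distribution of probe functions, without ever constructing L or calling an eigensolver.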