arXiv:2601.22296

ParalESN: Enabling parallel information processing in Reservoir Computing

Published on Jan 29 · Submitted by Matteo Pinna on Feb 3

Abstract

AI-generated summary

Parallel Echo State Network (ParalESN) addresses the scalability limitations of reservoir computing by enabling parallel temporal processing through a diagonal linear recurrence, preserving the paradigm's theoretical guarantees while delivering substantial gains in computational efficiency.

Reservoir Computing (RC) has established itself as an efficient paradigm for temporal processing. However, its scalability remains severely constrained by (i) the necessity of processing temporal data sequentially and (ii) the prohibitive memory footprint of high-dimensional reservoirs. In this work, we revisit RC through the lens of structured operators and state space modeling to address these limitations, introducing the Parallel Echo State Network (ParalESN). ParalESN constructs high-dimensional, efficient reservoirs based on diagonal linear recurrence in the complex domain, allowing temporal data to be processed in parallel. We provide a theoretical analysis demonstrating that ParalESN preserves the Echo State Property and the universality guarantees of traditional Echo State Networks, while admitting an equivalent representation of arbitrary linear reservoirs in complex diagonal form. Empirically, ParalESN matches the predictive accuracy of traditional RC on time series benchmarks while delivering substantial computational savings. On 1-D pixel-level classification tasks, it achieves accuracy competitive with fully trainable neural networks while reducing computational cost and energy consumption by orders of magnitude. Overall, ParalESN offers a promising, scalable, and principled pathway for integrating RC within the deep learning landscape.
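
The abstract's key mechanism, a diagonal linear recurrence in complex space that can be evaluated in parallel over time, is easy to sketch. Below is a minimal NumPy sketch, not the authors' implementation: the sizes, the eigenvalue sampling, and the use of a Hillis-Steele prefix scan are all illustrative assumptions. It checks that a log-depth scan reproduces the sequential recurrence exactly, and keeping every |lam| strictly below 1 is what gives the fading memory behind the Echo State Property.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 256, 128, 3   # timesteps, reservoir size, input dim (illustrative sizes)

# Complex diagonal "reservoir": every eigenvalue is sampled strictly inside
# the unit disk (|lam| < 1), which yields the fading memory required for
# the Echo State Property.
radius = rng.uniform(0.2, 0.95, N)
phase = rng.uniform(0.0, 2 * np.pi, N)
lam = radius * np.exp(1j * phase)                       # (N,) diagonal recurrence

W_in = (rng.standard_normal((N, D))
        + 1j * rng.standard_normal((N, D))) / np.sqrt(D)
u = rng.standard_normal((T, D))                         # input sequence
b = u @ W_in.T                                          # (T, N) driving terms W_in @ u_t

def sequential_states(lam, b):
    """Reference O(T) loop: x_t = lam * x_{t-1} + b_t."""
    x = np.zeros_like(b)
    prev = np.zeros(b.shape[1], dtype=b.dtype)
    for t in range(b.shape[0]):
        prev = lam * prev + b[t]
        x[t] = prev
    return x

def parallel_states(lam, b):
    """Same recurrence as a Hillis-Steele prefix scan: about log2(T) vectorized
    passes instead of T sequential steps. Each position carries an affine map
    x -> a*x + c; composing two such maps is associative, so a scan applies."""
    a = np.broadcast_to(lam, b.shape).astype(b.dtype)
    c = b.copy()
    d = 1
    while d < c.shape[0]:
        # NumPy evaluates the right-hand sides before assigning, so both
        # updates below use the pre-step values of a and c.
        c[d:] = a[d:] * c[:-d] + c[d:]
        a[d:] = a[d:] * a[:-d]
        d *= 2
    return c

assert np.allclose(sequential_states(lam, b), parallel_states(lam, b))
```

On parallel hardware each scan step is one elementwise pass over the whole sequence, so the T-step loop collapses to roughly log2(T) passes; this is the same prefix-scan formulation used by recent linear state space models. The diagonal form also replaces the O(N^2) dense matrix-vector product of a classical ESN update with an O(N) elementwise multiply, which is what makes very high-dimensional reservoirs affordable.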

Community

Matteo Pinna (paper author, paper submitter)

TL;DR: We revisit the Reservoir Computing paradigm through the lens of structured operators and state space modelling, introducing Parallel Echo State Networks (ParalESN). ParalESN enables the construction of high-dimensional, efficient, and parallelizable randomized Recurrent Neural Networks.
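
The abstract also claims that any linear reservoir admits an equivalent representation in complex diagonal form. For a diagonalizable recurrent matrix (true with probability one for a random dense matrix) this follows directly from the eigendecomposition W = P diag(lam) P^{-1}. The NumPy check below is a sanity test of that equivalence under this assumption, with illustrative names and sizes, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 16   # illustrative sizes

# Dense linear reservoir x_t = W @ x_{t-1} + b_t, rescaled to spectral radius 0.9.
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
b = rng.standard_normal((T, N))

# Change of basis: W = P @ diag(lam) @ P^{-1}, so the dense recurrence is
# equivalent to the complex diagonal one y_t = lam * y_{t-1} + P^{-1} @ b_t,
# with the original state recovered as x_t = P @ y_t.
lam, P = np.linalg.eig(W)
P_inv = np.linalg.inv(P)

x = np.zeros(N)
y = np.zeros(N, dtype=complex)
for t in range(T):
    x = W @ x + b[t]
    y = lam * y + P_inv @ b[t]
    assert np.allclose(x, (P @ y).real, atol=1e-6)
```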


Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 1