arxiv:2111.12877
A Letter on Convergence of In-Parameter-Linear Nonlinear Neural Architectures with Gradient Learnings
Published on Nov 25, 2021
Abstract
Weight convergence stability is proven for a class of in-parameter-linear nonlinear neural architectures under incremental gradient learning algorithms via bounded-input bounded-state (BIBS) stability analysis.
AI-generated summary
This letter summarizes and proves bounded-input bounded-state (BIBS) stability of weight convergence for a broad family of in-parameter-linear nonlinear neural architectures under a broad family of incremental gradient learning algorithms. The derived proofs yield a practical BIBS convergence condition that can be evaluated for every individual learning point or batch, making the result suitable for real-time applications.
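To make the setting concrete, below is a minimal sketch of an in-parameter-linear nonlinear model trained by incremental (sample-by-sample) gradient learning. The feature map, step size, and the per-sample stability check `mu * ||phi||^2 < 2` (the classical bounded-step condition for models linear in their parameters) are illustrative assumptions, not the paper's own derived BIBS condition.

```python
import numpy as np

def features(x):
    # Nonlinear feature map: the model is nonlinear in the input x but
    # linear in the parameters w (a quadratic polynomial map, chosen
    # here purely for illustration).
    return np.array([1.0, x, x**2])

def incremental_gd(samples, targets, mu=0.2, n_epochs=50):
    """Incremental gradient learning of w in y = w @ features(x)."""
    w = np.zeros(3)
    for _ in range(n_epochs):
        for x, d in zip(samples, targets):
            phi = features(x)
            e = d - w @ phi  # instantaneous output error
            # Hedged stability check: for in-parameter-linear models a
            # classical per-sample bounded-step condition is
            # mu * ||phi||^2 < 2; the paper derives its own BIBS
            # condition per learning point or batch.
            assert mu * (phi @ phi) < 2, "step size violates stability bound"
            w = w + mu * e * phi  # incremental gradient update
    return w

# Targets generated by a model inside the hypothesis class (noiseless).
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 20)
w_true = np.array([0.5, -1.0, 2.0])
ds = np.array([w_true @ features(x) for x in xs])

w_hat = incremental_gd(xs, ds)
```

With a step size satisfying the check on every sample, the weights remain bounded and, in this noiseless realizable case, approach `w_true`.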