Theory of Temporal Pattern Learning in Echo State Networks

Abstract

Echo state networks are well known for their ability to learn temporal patterns through simple feedback to a large recurrent network with random connections. However, the learning process itself remains poorly understood. We develop a quantitative theory that explains learning in a regime where the network dynamics is stable and the feedback is weak. We show that the dynamics is governed by a finite number of master modes whose nonlinear interactions can be described by a normal form. This formulation provides a simple picture of learning as a Fourier decomposition of the target pattern, with amplitudes determined by nonlinear interactions that, remarkably, become independent of the network randomness in the limit of large network size. We further show that this description extends to moderate feedback strengths and to recurrent networks with multiple unstable modes.
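To make the setup concrete, below is a minimal sketch (not the authors' code) of an echo state network with weak output feedback: a random recurrent reservoir in the stable regime (spectral radius below one), a linear readout fitted by the standard teacher-forcing plus ridge-regression recipe, and an autonomous run in which the learned output is fed back. All parameter values (reservoir size, spectral radius, feedback strength, regularization, target signal) are illustrative assumptions, and the training recipe may differ from the authors' setup; the periodic target with two Fourier components mirrors the abstract's picture of learning as a Fourier decomposition of the target pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300                       # reservoir size (assumed)
rho = 0.9                     # spectral radius < 1: stable reservoir dynamics
fb = 0.1                      # weak output-feedback strength (assumed)

# Random recurrent weights, rescaled to the chosen spectral radius.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
w_fb = fb * rng.uniform(-1.0, 1.0, N)   # feedback weights

# Periodic target pattern with two Fourier components.
T = 3000
t = np.arange(T)
target = np.sin(2 * np.pi * t / 50.0) + 0.5 * np.sin(4 * np.pi * t / 50.0)

# Teacher forcing: feed the target back while collecting reservoir states.
x = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    u = target[k - 1] if k > 0 else 0.0
    x = np.tanh(W @ x + w_fb * u)
    states[k] = x

# Linear readout fitted by ridge regression on the post-washout states.
washout, lam = 200, 1e-6
S, y = states[washout:], target[washout:]
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

# Autonomous generation: the learned output replaces the teacher signal.
x = states[-1].copy()
y_prev = target[-1]
outputs = []
for _ in range(300):
    x = np.tanh(W @ x + w_fb * y_prev)
    y_prev = float(x @ w_out)
    outputs.append(y_prev)

print("autonomous output, first 5 steps:", np.round(outputs[:5], 3))
```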
