Universal Forward Training and Structure-free Learning

Abstract

Research on high-dimensional nonlinear systems has broad theoretical and engineering prospects. As the structural complexity and depth of AI neural networks grow, forward training methods are urgently needed to address gradient vanishing, large storage requirements, and constraints on structure and depth. Building on the hypotheses and treatments of local linearization (LL) and isomorphism comparability (IC), we present a systematic theory, LL-IC, and a universal forward training method, LIFT, applicable to any stable and smooth parametric system, even one with a black-box structure. Experiments on DNN, SaNN, SaKAN, RAN, MLP, and IIR filter models demonstrate the method's feasibility, effectiveness, and applicability. LIFT is a structure-free, complete forward training method characterized by universality, simplicity, equality, and freedom. It has significant engineering value for AI networks, enabling diverse designs such as security isolation, energy-saving structures, distributed architectures, and parallel computation, and it is also attractive in mathematics and other engineering fields.
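The abstract does not specify LIFT's algorithm, so as a hedged illustration of the general idea it names, the sketch below uses the well-known forward-gradient (weight-perturbation) estimator: local linearization means the loss change under a small random parameter perturbation is approximately linear in that perturbation, so two forward passes yield a directional derivative and a gradient estimate with no backward pass or activation storage. The tiny MLP, probe distribution, and hyperparameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Forward-only training of a tiny MLP via local linearization:
# L(w + eps*v) - L(w) ~= eps * <grad L(w), v>, so jvp * v is a
# forward estimate of the gradient (two forward passes, no backprop).

rng = np.random.default_rng(0)

# Toy regression data (illustrative only)
X = rng.normal(size=(128, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

def forward(w, X):
    # Unpack a flat parameter vector into a 4-16-1 MLP
    W1 = w[:64].reshape(4, 16)
    b1 = w[64:80]
    W2 = w[80:96].reshape(16, 1)
    b2 = w[96:97]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

n_params = 97
w = 0.1 * rng.normal(size=n_params)
eps, lr = 1e-4, 0.01

for step in range(2001):
    v = rng.normal(size=n_params)              # random probe direction
    jvp = (loss(w + eps * v) - loss(w)) / eps  # forward directional derivative
    w -= lr * jvp * v                          # forward-gradient update
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss(w):.4f}")
```

Because the update touches the model only through forward evaluations of the loss, the same loop applies unchanged to a black-box parametric system (e.g., an IIR filter), which is the structure-free property the abstract emphasizes.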