Local Back-Propagation: Layer-wise Unsupervised Learning in Forward-Forward Algorithms

Abstract

Recent deep learning models such as GPT-4 are trained with the back-propagation algorithm (BP) and have achieved impressive performance. However, BP operates quite differently from how the human brain learns. The Forward-Forward algorithm (FF) was introduced in response: it trains deep learning models using only forward passes. Although FF cannot fully replace BP because it requires specialized inputs and loss functions, it remains promising in settings where BP is difficult to apply, such as federated learning. To address these limitations and demonstrate the practical value of FF, we propose a Local Back-Propagation method that incorporates unsupervised FF. By using an unsupervised learning model, our approach trains with standard inputs and common loss functions, avoiding FF's special requirements. This leads to more stable learning and enables a wider range of applications than FF alone. Furthermore, because our method allows each layer to be physically separated, we tested its effectiveness in scenarios such as federated learning, where individual models are trained separately and then combined. Our results confirm that this approach expands the usability and scope of FF-based training methods.
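
The abstract does not spell out implementation details, but the core idea of layer-wise local learning can be sketched. The snippet below is a minimal illustration, assuming a PyTorch-style setup in which each block optimizes its own standard unsupervised reconstruction loss and activations are detached between blocks, so gradients never cross a layer boundary. The block structure, dimensions, and loss are hypothetical choices for illustration, not the authors' exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of layer-wise local training: each block is trained
# by its own standard reconstruction loss, and activations are detached
# between blocks, so back-propagation stays local to each layer.

class LocalBlock(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.decoder = nn.Linear(out_dim, in_dim)  # local reconstruction head
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_step(self, x):
        h = self.encoder(x)
        loss = F.mse_loss(self.decoder(h), x)   # common, unsupervised loss
        self.opt.zero_grad()
        loss.backward()                          # gradients stay in this block
        self.opt.step()
        return h.detach()                        # cut the graph between layers

# Each block depends only on its local input and loss, so blocks could in
# principle live on separate machines and exchange only forward activations.
blocks = [LocalBlock(784, 256), LocalBlock(256, 128), LocalBlock(128, 64)]

x = torch.randn(32, 784)                         # e.g., flattened image batch
for block in blocks:
    x = block.train_step(x)
```

Because no gradient flows between blocks, each layer can be trained independently and later combined, which is what makes the federated-learning scenario described in the abstract plausible.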
