Synaptic plasticity-based regularizer for artificial neural networks

Abstract

Regularization is an important tool for improving the generalization of artificial neural network (ANN) models. However, because it imposes no explicit constraints at inference time, conventional regularization cannot guarantee that a model will keep working in a real environment where the input data distribution changes. Inspired by neuroplasticity, this paper introduces a bounded regularization method that can be applied safely during the deployment phase. First, the reliability of neuron outputs is improved by extending our recent neuronal masking method to generate new supporting neurons. The model is then regularized by incorporating a synaptic connection module containing the connections of the generated neurons to their preceding layer. These connections are optimized online through a synaptic rewiring process triggered by information about the input distribution. The process is formulated as a bilevel mixed-integer nonlinear program (MINLP) whose objective is to minimize the outer risk of the model output by identifying the connections that minimize the inner risk of each neuron's output. To solve this optimization problem, a single-wave scheme is introduced that decomposes it into smaller, parallel sub-problems, each minimizing the inner cost function while ensuring that the aggregated solution minimizes the outer one. In addition, a storage/recovery memory module is proposed to memorize these connections and their corresponding risks, enabling the model to retrieve previous knowledge when it encounters similar situations. Experimental results on classification and regression tasks show around an 8% improvement in accuracy over state-of-the-art techniques. The proposed regularization method thus enhances the adaptability and robustness of ANN models in a variable environment.
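
As a hedged sketch of the bilevel formulation described in the abstract (all symbols are assumed notation, not taken from the paper: w_i is the binary connection vector of supporting neuron i to the preceding layer, R_out the outer risk of the model output, and R_in^(i) the inner risk of neuron i's output), the rewiring problem might read:

\min_{w_1,\dots,w_N} \; \mathcal{R}_{\mathrm{out}}\big(f(x;\theta,w_1,\dots,w_N)\big)
\quad \text{s.t.} \quad
w_i \in \arg\min_{w \in \{0,1\}^d} \mathcal{R}^{(i)}_{\mathrm{in}}(w), \qquad i = 1,\dots,N.

Under this reading, the single-wave scheme would solve the N inner problems in parallel and keep the aggregated solution only insofar as it also lowers the outer risk.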
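The storage/recovery memory module could likewise be sketched as a small cache keyed by a signature of the input distribution. The class below is a minimal illustration under assumed details (quantized batch mean/std as the distribution signature, lowest-risk-wins replacement); it is not the authors' implementation.

import numpy as np

class RewiringMemory:
    """Hypothetical storage/recovery module: caches connection patterns
    and their measured risks, keyed by a coarse signature of the input
    distribution (quantized batch mean/std -- an assumption)."""

    def __init__(self, bin_width=0.1):
        self.bin_width = bin_width
        self.store = {}  # signature -> (connections, risk)

    def _signature(self, batch):
        # Quantize simple batch statistics into a hashable key.
        stats = np.concatenate([batch.mean(axis=0), batch.std(axis=0)])
        return tuple(np.round(stats / self.bin_width).astype(int))

    def save(self, batch, connections, risk):
        # Keep only the lowest-risk pattern seen for this signature.
        key = self._signature(batch)
        if key not in self.store or risk < self.store[key][1]:
            self.store[key] = (connections.copy(), risk)

    def recover(self, batch):
        # Return a stored (connections, risk) pair for a similar input
        # distribution, or None if no matching signature exists.
        return self.store.get(self._signature(batch))

Keying on quantized statistics makes recovery a constant-time lookup for recurring distributions, at the cost of treating batches near a quantization boundary as distinct situations.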
