Matignon-Based Stability and Weight Synchronization of a Fractional Time Delay Neural Network Model

Abstract

Artificial neural networks (ANNs) are powerful models inspired by the structure and function of the human brain and are widely used for tasks such as classification, prediction, and pattern recognition. This study examines the stability, dynamic behavior, and synchronization of fractional-order neural networks with time delays. Synchronization and stability are two important aspects of the dynamic behavior of delayed neural network models. For a calculated fractional order, the state variables w_i(t) are synchronized with one another. Weight synchronization of w_i (i = 1, 2, ..., 6) provides coherent updates during training, helping neural networks learn stable models. Incommensurate fractional orders describe a system in which each dynamic component evolves with a different order, i.e., q_i ≠ q_j for i ≠ j. These fractional orders are determined from the system's eigenvalues and singular points within the stability region defined by the Matignon stability criterion. As the time delay decreases, more activation functions are induced, and the state variable w_3(t) requires a longer relaxation time to stabilize than the state variable w_4(t). The Grünwald-Letnikov method is used to solve the fractional neural network system numerically and to handle the fractional derivatives effectively. This approach allows the memory effect in neural networks to be simulated more accurately.
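
The Matignon criterion referenced above requires every eigenvalue of the system matrix to lie outside a cone of half-angle qπ/2 around the positive real axis. Below is a minimal sketch of that eigenvalue check for a commensurate-order linear system; the 6x6 matrix A is a hypothetical stand-in, not the linearized connection matrix from the paper.

```python
import numpy as np

def matignon_stable(A, q):
    """Matignon criterion for the commensurate linear fractional
    system D^q w(t) = A w(t), 0 < q <= 1: asymptotically stable
    iff |arg(lambda)| > q*pi/2 for every eigenvalue lambda of A."""
    eigenvalues = np.linalg.eigvals(A)
    return all(abs(np.angle(lam)) > q * np.pi / 2 for lam in eigenvalues)

# Hypothetical 6x6 matrix standing in for the model linearized
# around an equilibrium; the paper's actual weights would go here.
rng = np.random.default_rng(0)
A = -np.eye(6) + 0.1 * rng.standard_normal((6, 6))
print(matignon_stable(A, q=0.9))
```

For incommensurate rational orders q_i = k_i/m (with common denominator m), the same angular test is commonly applied with cone half-angle π/(2m) to the roots of the generalized characteristic equation det(diag(λ^(m q_1), ..., λ^(m q_n)) - A) = 0.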
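The Grünwald-Letnikov discretization mentioned in the abstract replaces the fractional derivative with a weighted sum over the entire solution history, which is what encodes the memory effect. The sketch below implements an explicit GL stepper under simplifying assumptions (no delay terms, initial-condition correction ignored); the right-hand side f and all parameters are toy placeholders, not the paper's model.

```python
import numpy as np

def gl_solve(f, w0, q, h, n_steps):
    """Explicit Grünwald-Letnikov scheme for D^q w(t) = f(w(t)).

    The coefficients c_k = (-1)^k * C(q, k) are built with the
    recurrence c_0 = 1, c_k = (1 - (1 + q)/k) * c_{k-1}; the sum
    over all past states is what gives the scheme its memory.
    """
    w = np.zeros((n_steps + 1, len(w0)))
    w[0] = w0
    c = np.zeros(n_steps + 1)
    c[0] = 1.0
    for k in range(1, n_steps + 1):
        c[k] = (1.0 - (1.0 + q) / k) * c[k - 1]
    for n in range(1, n_steps + 1):
        history = sum(c[k] * w[n - k] for k in range(1, n + 1))
        w[n] = h**q * f(w[n - 1]) - history
    return w

# Toy scalar example: D^q w = -w with q = 0.9 decays toward zero.
traj = gl_solve(lambda w: -w, np.array([1.0]), q=0.9, h=0.01, n_steps=500)
print(traj[-1])
```

Because each step sums over the full history, the cost grows quadratically with the number of steps; in practice a truncated "short-memory" window is often used to keep long simulations tractable.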
