Architecture and Scaling of the TSKI Model: A Phase–Temporal Neural Network Without a Loss Function
Abstract
This paper presents a biologically inspired neural network model based on the theoretical framework of TSKI 4.2 by A. Atorin [1], and examines its potential for architectural scaling. The model is positioned as an alternative class of neural networks in which computation is not based on minimizing a loss function, but on the formation and stabilization of temporal information trajectories through phase–temporal synchronization of neurons and homeostatic regulation of their parameters. The paper is structured in two parts. The first part describes the components of TSKI theory that have already been implemented in software. The main mathematical objects of the model and their implementation are discussed, including neurons and synaptic connections with a mirrored representation, in which the connection vector is accessible to both the presynaptic and the postsynaptic neuron. The computational core of the model is described, implementing a single operational step based on a phase synchronization mechanism between presynaptic and postsynaptic neurons (k5 = 0, k5 = 1). This mechanism constitutes the necessary and sufficient condition for information computation and synaptic parameter updates, and represents a digital analogue of biological STDP (spike-timing-dependent plasticity). This fundamental computational algorithm of TSKI reduces computational cost by introducing a binary synchronization condition within each synapse: when synchronization is present, computations and parameter updates are performed; when synchronization is absent, computational activity is blocked. The simulation also implements a computational algorithm analogous to biological homeostatic regulation, responsible for stabilizing neuronal parameters and influencing synchronization conditions across neural connections.
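The gating logic described above can be illustrated with a minimal sketch. The paper does not specify the underlying equations, so everything here is an assumption for illustration only: the `Synapse` structure, the phase-coincidence tolerance used to derive the binary flag `k5`, and the sign rule of the STDP-like update are all hypothetical stand-ins, not the authors' actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    # Hypothetical minimal synapse state: a weight plus the phase
    # values of the pre- and postsynaptic neurons it connects.
    weight: float
    pre_phase: float
    post_phase: float

def sync_flag(syn: Synapse, tol: float = 0.1) -> int:
    """Binary synchronization condition (k5): 1 when the pre- and
    postsynaptic phases coincide within an assumed tolerance, else 0."""
    return 1 if abs(syn.pre_phase - syn.post_phase) <= tol else 0

def step(syn: Synapse, lr: float = 0.05):
    """One operational step: compute and update parameters only when
    the synapse is synchronized (k5 = 1); otherwise block activity."""
    if sync_flag(syn) == 0:
        return None  # no synchronization: computation is blocked
    # Assumed STDP-like sign rule: potentiate when the presynaptic
    # phase leads the postsynaptic one, depress otherwise.
    dt = syn.post_phase - syn.pre_phase
    syn.weight += lr * (1.0 if dt >= 0 else -1.0)
    return syn.weight
```

The point of the sketch is the binary gate itself: the weight update and any downstream computation run only inside the `k5 = 1` branch, which is what lets the model skip work on unsynchronized synapses.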
Simulation results [2] obtained from four small-scale TSKI neural networks [3] are presented; on their basis, the hypothesis is advanced that the TSKI model may be less susceptible to catastrophic forgetting after training. The main focus of the paper is its second part, which explores the scalability of TSKI and describes an architecture designed to enable such scaling. The architectural organization of the TSKI model is oriented toward reproducing principles of information processing characteristic of the biological nervous system: receptor zone → thalamic nuclei → cerebral cortex. Within this analogy, four functional zones are identified in the model, each performing a distinct role; their operating principles and architectural structures are described. The key objective of the TSKI architectural solution is to ensure alignment between the phase–temporal states of neurons and the temporal dynamics of a physical (detectable) stimulus parameter. Such alignment is necessary for the TSKI model to form, associate, and reproduce adaptive responses to stimuli over time. The analysis of the proposed architectural solutions reveals local analogues, within the TSKI architecture, of the backpropagation mechanism (error backpropagation in ANNs) and of the optimization algorithms used in classical neural network models. The nature of the objective function in TSKI and methods for improving the efficiency of its realization through the model's internal dynamics are also examined. The paper concludes by demonstrating a high degree of similarity between the information-processing principles of the TSKI architecture and those of the biological nervous system. It is shown that the TSKI algorithms developed and implemented in simulations of small-scale neural networks are fundamentally ready for architectural scaling, and that the architectural principles embedded in the functional zones are falsifiable.
This creates the possibility for experimental verification of the theoretical assumptions formulated in this work.