Deep Learning 2.0.1: Mind and Cosmos - Towards Cosmos-Inspired Interpretable Neural Networks

Abstract

The standard dot product, foundational to deep learning, conflates magnitude and direction, limiting geometric expressiveness and often necessitating additional architectural components such as activation and normalization layers. We introduce the ⵟ-product (Yat-product), a novel neural operator inspired by physical inverse-square laws, which intrinsically unifies vector alignment and spatial proximity within a single, non-linear, and self-regulating computation. This operator forms the basis of Neural-Matter Networks (NMNs), a new class of architectures that embed non-linearity and normalization directly into the core interaction mechanism, obviating the need for separate activation or normalization layers. We demonstrate that NMNs, and their convolutional and attention-based extensions, achieve competitive or superior performance on benchmark tasks in image classification and language modeling, while yielding more interpretable and geometrically faithful representations. Theoretical analysis establishes the ⵟ-product as a positive semi-definite Mercer kernel with universal approximation and stable gradient properties. Our results suggest a new design paradigm for deep learning: by grounding neural computation in geometric and physical principles, we can build models that are not only efficient and robust, but also inherently interpretable.
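The abstract does not give the operator's closed form, so the following is only a rough sketch of how a single computation might couple vector alignment with inverse-square-style spatial proximity. The function name `yat_product_sketch`, the squared-dot-product numerator, the squared-distance denominator, and the `eps` stabilizer are assumptions made for illustration, not details taken from the article.

```python
import numpy as np

def yat_product_sketch(w: np.ndarray, x: np.ndarray, eps: float = 1e-6) -> float:
    """Illustrative inverse-square-style interaction (not the paper's definition).

    Couples alignment (via the dot product) with spatial proximity
    (via the squared Euclidean distance) in one non-linear expression,
    so the response is large only when w and x are both well aligned
    and close to each other.
    """
    alignment = float(np.dot(w, x)) ** 2          # rewards directional agreement
    proximity = float(np.sum((w - x) ** 2)) + eps  # penalizes spatial separation
    return alignment / proximity

# Usage: nearby, well-aligned vectors produce a large response,
# while distant or near-orthogonal vectors produce a small one.
w = np.array([1.0, 0.5])
x_close = np.array([0.9, 0.6])
x_far = np.array([-0.5, 4.0])
print(yat_product_sketch(w, x_close))  # relatively large
print(yat_product_sketch(w, x_far))    # relatively small
```

Under this assumed form, the non-linearity and the distance-based damping are built into the interaction itself, which is the design property the abstract attributes to the Yat-product.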
