Local Responsibility Allocation in Multi-Layer Perceptrons
Abstract
Traditional multi-layer perceptrons rely on global backpropagation, in which all connections are updated according to the same rule. In such networks, learning occurs implicitly through the adjustment of connection weights, with the parameter set collectively encoding the mapping function. This paper describes an optimization method that introduces a local responsibility allocation mechanism operating at the connection level. For each connection, a responsibility factor is computed from the distance between its input and a learnable center; this factor modulates both the forward signal and the backward gradient. A key aspect of the approach is the normalization of responsibility factors across all connections feeding into the same neuron, which converts raw distance-based responses into an allocation that sums to one. The neuron's output thus becomes a weighted sum whose coefficients sum to one, constraining the output range and helping to stabilize gradient flow. This mechanism shifts learning from treating weights in isolation toward capturing structured relationships among connections, offering a distinct perspective on credit assignment in neural networks.
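A minimal sketch of one plausible reading of the forward pass described above. The abstract does not specify the distance-based response, so a Gaussian (RBF-style) kernel with a sharpness parameter `beta` is assumed here; the function name `forward` and the array layout are likewise illustrative, not the authors' implementation.

```python
import numpy as np

def forward(x, W, C, beta=1.0):
    """One layer with per-connection responsibility allocation (sketch).

    x : (n_in,)          input vector
    W : (n_out, n_in)    connection weights
    C : (n_out, n_in)    learnable per-connection centers (assumed parameterization)
    """
    # Squared distance between each connection's input and its center.
    d2 = (x[None, :] - C) ** 2
    # Raw distance-based response (Gaussian kernel is an assumption).
    raw = np.exp(-beta * d2)
    # Normalize across all connections feeding into the same neuron,
    # so each neuron's responsibilities sum to one.
    alpha = raw / raw.sum(axis=1, keepdims=True)
    # Output is a responsibility-weighted sum of the usual w * x terms.
    y = (alpha * W * x[None, :]).sum(axis=1)
    return y, alpha

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
C = rng.normal(size=(3, 4))
y, alpha = forward(x, W, C)
print(np.allclose(alpha.sum(axis=1), 1.0))  # responsibilities sum to one per neuron
```

Because the coefficients `alpha` form a convex combination over each neuron's incoming connections, the output is bounded by the extreme `w * x` terms, which is one way to read the abstract's claim about a constrained output range.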