ECA110-Pooling: A Comparative Analysis of Pooling Strategies in Convolutional Neural Networks

Abstract

In this paper, we introduce and validate ECA110-Pooling, a novel rule-based pooling operator for Convolutional Neural Networks inspired by elementary cellular automata. A systematic comparative study is conducted, benchmarking ECA110-Pooling against conventional pooling methods (MaxPooling, AveragePooling, MedianPooling, MinPooling, KernelPooling) as well as state-of-the-art (SOTA) architectures. Experiments on three benchmark datasets (ImageNet subset, CIFAR-10, and Fashion-MNIST), across training horizons ranging from 20 to 50,000 epochs, demonstrate that ECA110-Pooling consistently achieves higher Top-1 accuracy, lower error rates, and stronger F1-scores than traditional pooling operators, while maintaining computational efficiency comparable to MaxPooling. Furthermore, in direct comparison with SOTA models, ECA110-Pooling delivers competitive accuracy with substantially fewer parameters and reduced training time. These results establish ECA110-Pooling as a validated and principled approach for image classification, bridging the gap between fixed pooling schemes and complex deep architectures. Its interpretable, rule-based design underscores both theoretical relevance and practical applicability in scenarios requiring a balance of accuracy, efficiency, and scalability.
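To make the idea concrete, the sketch below shows how a pooling operator could be driven by elementary cellular automaton rule 110. The abstract does not specify the exact operator, so the binarization threshold, the single rule-110 update step, and the mean/max aggregation here are all illustrative assumptions, not the paper's method; only the rule-110 update itself follows the standard Wolfram coding.

```python
# Illustrative sketch only: the binarization and aggregation choices below
# are assumptions, not the operator published in the paper.
import numpy as np

RULE = 110  # elementary CA rule number (Wolfram coding)

def rule110_step(cells: np.ndarray) -> np.ndarray:
    """One synchronous update of ECA rule 110 with wraparound boundaries."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right  # 3-bit neighborhood code
    return (RULE >> idx) & 1                  # look up the rule-110 table bit

def eca110_pool(x: np.ndarray, k: int = 2) -> np.ndarray:
    """Hypothetical rule-based pooling: binarize each k*k window against its
    mean, run one rule-110 step over the flattened window, then average the
    inputs the automaton keeps 'alive' (falling back to the window max)."""
    h, w = x.shape
    out = np.empty((h // k, w // k), dtype=x.dtype)
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            win = x[i:i + k, j:j + k].ravel()
            cells = (win > win.mean()).astype(np.int64)
            alive = rule110_step(cells).astype(bool)
            out[i // k, j // k] = win[alive].mean() if alive.any() else win.max()
    return out
```

Like MaxPooling, this per-window computation is parameter-free and O(1) per output element for fixed k, which is consistent with the abstract's claim of comparable computational efficiency.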
