A dynamic spatiotemporal normalization model for continuous vision
Abstract
Perception and neural activity are profoundly shaped by the spatial and temporal context of sensory input, which has been modeled by divisive normalization over space or time. However, theoretical work has largely treated normalization separately within these dimensions and has not explained how future stimuli can suppress past ones. Here we introduce a computational model with a unified spatiotemporal receptive field structure that implements normalization across both space and time, and we asked whether this model captures the bidirectional effects of temporal context on neural responses and behavior. We found that biphasic temporal receptive fields emerged from this normalization computation, consistent with empirical observations. The model also reproduced several neural response properties, including nonlinear response dynamics, subadditivity, response adaptation, backward masking, and bidirectional contrast-dependent suppression. Thus, the model captured a wide range of neural and behavioral effects, suggesting that a unified spatiotemporal normalization computation could underlie dynamic stimulus processing and perception.
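To make the core computation concrete, the following is a minimal sketch of divisive normalization with a pool that extends over both space and time. This is an illustrative toy, not the authors' implementation: the function name, the simple rectangular temporal pooling window, and the parameter values (`sigma`, `n`, `tau`) are assumptions chosen for clarity.

```python
import numpy as np

def spatiotemporal_normalization(stim, sigma=0.1, n=2.0, tau=3):
    """Toy divisive normalization pooled over space and time.

    stim : (T, N) array of stimulus drive (T time steps, N units).
    sigma, n, tau : illustrative semi-saturation constant, exponent,
    and temporal pooling window (in samples); not values from the paper.
    """
    T, N = stim.shape
    drive = np.abs(stim) ** n              # pointwise nonlinearity
    resp = np.zeros_like(drive)
    for t in range(T):
        # Normalization pool: mean drive over all units (space)
        # and over the last `tau` time steps (time).
        t0 = max(0, t - tau + 1)
        pool = drive[t0:t + 1].mean()
        resp[t] = drive[t] / (sigma ** n + pool)
    return resp
```

Because each unit's response is divided by the pooled drive of its spatiotemporal neighborhood, doubling the stimulus contrast less than doubles the output, giving the kind of subadditive, contrast-dependent suppression the abstract describes.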