Specifying the orthographic prediction error for a better understanding of efficient visual word recognition in humans and machines

Abstract

Recent evidence suggests that readers optimize low-level visual information following the principles of predictive coding. Based on a transparent neurocognitive model, we postulated that readers optimize their percept by removing redundant visual signals, which allows them to focus on the informative aspects of the sensory input, i.e., the orthographic prediction error (oPE). Here, we test alternative oPE implementations by assuming all-or-nothing signaling units based on multiple thresholds and compare them to the original oPE implementation. For model evaluation, we implemented the comparison based on behavioral and electrophysiological data (EEG components at 230 and 430 ms). We found the highest model fit for the oPE with a 50% threshold integrating multiple prediction units for behavior and the late EEG component. The early EEG component was still explained best by the original hypothesis. In the final evaluation, we used image representations of both oPE implementations as input to a deep neural network (DNN) model. We compared the lexical decision performance of the DNN in two tasks (words vs. consonant strings; words vs. pseudowords) to the performance after training with unaltered word images and found better DNN performance when trained with the 50% oPE representations in both tasks. Thus, the new formulation is adequate for late but not early neuronal signals and for lexical decision behavior in humans and machines. The change from early to late neuronal processing likely reflects a transformation in the representational structure over time that relates to accessing the meaning of words.
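To make the two oPE formulations concrete, the following is a minimal sketch (not the authors' code) of how a graded prediction error and an all-or-nothing thresholded variant could be derived from word images. The pixel-wise mean image as the "prediction", the toy image sizes, and the 50% cutoff relative to the maximum error are illustrative assumptions, not details taken from the article.

```python
import numpy as np

# Hedged sketch: two oPE variants described in the abstract, under
# assumed simplifications (toy binary "word images", mean-image prediction).
rng = np.random.default_rng(0)
word_images = (rng.random((100, 10, 10)) > 0.5).astype(float)

# Knowledge-based prediction: pixel-wise mean over the known word images.
prediction = word_images.mean(axis=0)

def ope_original(image, prediction):
    """Graded oPE: sensory input minus the prediction (original formulation)."""
    return image - prediction

def ope_thresholded(image, prediction, threshold=0.5):
    """All-or-nothing oPE: units signal 1 only where the absolute error
    exceeds the given fraction (e.g., 50%) of the maximum error."""
    error = np.abs(image - prediction)
    return (error > threshold * error.max()).astype(float)

img = word_images[0]
graded = ope_original(img, prediction)        # graded values, shape (10, 10)
binary = ope_thresholded(img, prediction)     # 0/1 units, shape (10, 10)
```

Image representations like `binary` could then serve as training input to a DNN classifier, analogous to the lexical decision evaluation reported in the abstract.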