Modeling location-invariant orthographic processing in the Finnish language with an inflectional morphology

Abstract

Previous research indicates that readers may construct a location-invariant letter string representation before mapping it onto a lexical representation. In neural network modeling, this has been achieved with a two-stack model in which a hidden layer first learns to map location-specific letter representations onto word-centered positions; a second stack is then trained to map these word-centered representations onto lexical representations. However, the model has so far been validated in only one language (French). In this study, we replicated the model with a Finnish corpus containing substantially more orthographically similar words (e.g., KISSAA, KISSAN, KISSAT) and with modern, accessible programming tools. To gain insight into the model's coding principles, we also trained the model on random letter strings, analyzed its hidden layer activation patterns, and conducted an error analysis. The results indicated that training on random letter strings yields superior letter identity and position coding, whereas the natural corpus, with its many orthographic neighbors, yields a more fine-grained mapping from word-centered letter string representations to lexical representations. Finally, bigram coding may emerge in the weights connecting hidden to output nodes. We conclude that the one- or two-stack model is a promising candidate for developing a universal and comprehensive connectionist model of reading.
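The two-stack architecture described above can be illustrated with a minimal sketch: a first network maps a location-specific input (a word placed at a random horizontal offset) onto a word-centered letter representation, and a second network maps that representation onto lexical units. This is only a toy reconstruction under stated assumptions — the alphabet, layer sizes, learning rate, and sigmoid networks trained by backpropagation are illustrative choices, not the parameters of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = "AIKNST"                       # toy alphabet (assumption)
WORDS = ["KISSAA", "KISSAN", "KISSAT"]    # Finnish inflections from the abstract
L, WLEN, SLOTS = len(ALPHABET), 6, 10     # letters, word length, input slots

def onehot(idx, n):
    v = np.zeros(n); v[idx] = 1.0; return v

def encode_input(word, offset):
    """Location-specific coding: the word starts at a given horizontal slot."""
    x = np.zeros(SLOTS * L)
    for i, ch in enumerate(word):
        x[(offset + i) * L + ALPHABET.index(ch)] = 1.0
    return x

def encode_centered(word):
    """Word-centered target: each letter in a fixed within-word position."""
    return np.concatenate([onehot(ALPHABET.index(c), L) for c in word])

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class Stack:
    """One stack: input -> sigmoid hidden -> sigmoid output, trained by backprop."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.1, (n_hid, n_in))
        self.W2 = rng.normal(0, 0.1, (n_out, n_hid))
    def forward(self, x):
        self.h = sigmoid(self.W1 @ x)
        return sigmoid(self.W2 @ self.h)
    def train_step(self, x, t, lr=0.5):
        y = self.forward(x)
        dy = (y - t) * y * (1 - y)                    # squared-error output delta
        dh = (self.W2.T @ dy) * self.h * (1 - self.h)
        self.W2 -= lr * np.outer(dy, self.h)
        self.W1 -= lr * np.outer(dh, x)

stack1 = Stack(SLOTS * L, 40, WLEN * L)    # location-specific -> word-centered
stack2 = Stack(WLEN * L, 20, len(WORDS))   # word-centered -> lexical units

# Train stack 1 on words at random offsets, then stack 2 on clean targets.
for _ in range(5000):
    w = WORDS[rng.integers(len(WORDS))]
    off = int(rng.integers(SLOTS - WLEN + 1))
    stack1.train_step(encode_input(w, off), encode_centered(w))
for _ in range(5000):
    w = WORDS[rng.integers(len(WORDS))]
    stack2.train_step(encode_centered(w), onehot(WORDS.index(w), len(WORDS)))

# Test: present each word at every offset; read out the lexical decision.
correct = total = 0
for w in WORDS:
    for off in range(SLOTS - WLEN + 1):
        centered = stack1.forward(encode_input(w, off))
        pred = int(np.argmax(stack2.forward(centered)))
        correct += (pred == WORDS.index(w)); total += 1
print(f"identified {correct}/{total} presentations")
```

The design point the sketch makes concrete is the division of labor: location invariance is solved entirely within the first stack, so the second stack never sees offset information and can devote its capacity to discriminating near-neighbor words that differ in a single letter.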