Modeling memory effects in a head-final language with category locality

Abstract

Memory limitations have been assumed to be a major factor shaping human sentence comprehension, but characterizing memory-demanding structures in a broad-coverage and cross-linguistic manner has remained a challenge. A recent study suggested that such a general characterization can be obtained by assuming efficient compression of the syntactic information from the input, guided by a logic-based grammar. Specifically, it was shown that the memory cost of syntactic integration measured in terms of Combinatory Categorial Grammar (CCG) predicts variance in reading times for naturalistic texts in English, unlike the more traditional dependency-based integration cost. The crucial test for the new theory is yet to be done, however, since the predictions of CCG- and dependency-based integration cost diverge most drastically in head-final languages. This study conducts that crucial test using a naturalistic reading time dataset in Japanese. Our results favor CCG over dependency grammar: CCG-based integration cost predicted reading time variance on held-out data, while the dependency-based version failed to show such predictive power. We also show that CCG predicts the cost of information storage better than dependency grammar, though in this realm the two formalisms make similar predictions. Overall, our results, combined with previous ones, suggest that the logic-based grammar captures some cross-linguistic, potentially universal aspect of human sentence comprehension.