Word Segmentation using Self-Supervised Hierarchical Transformers for Scriptio Continua in Greek and Latin
Abstract
The digitization of historical documents written in Scriptio Continua, a style of writing without spaces between words, requires not only accurate Optical Character Recognition (OCR) but also a costly second step of manual segmentation to convert character sequences into meaningful words. This article presents a new self-supervised hierarchical transformer model that automates word segmentation for Scriptio Continua in Greek and Latin texts. Our method uses readily available printed editions to train, first, a character-level transformer that labels word boundaries and, second, a word-level transformer that reclassifies segmentation candidates based on a perplexity score. Evaluated on a diverse corpus of Greek and Latin codices (4th-6th centuries) and Byzantine seals (6th-12th centuries), our approach significantly outperforms state-of-the-art Bayesian (NHPYLM) and flat neural-network models, achieving boundary-detection F1 scores of 96%, 94%, and 42%, respectively, on the three datasets considered. While substantially improving on current approaches for highly abbreviated texts, this study highlights the challenges that abbreviations on smaller epigraphic objects, such as seals and coins, continue to pose. We additionally release our code and datasets to ensure full reproducibility and to foster future research.
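The two-stage pipeline described above can be sketched in miniature. In this hypothetical example, the character-level transformer is replaced by precomputed boundary probabilities and the word-level transformer by a toy unigram scorer; the names `candidate_segmentations`, `perplexity`, `segment`, and the tiny Latin lexicon are illustrative assumptions, not the authors' implementation. The sketch shows only the control flow: uncertain boundary positions from stage one are enumerated as candidate segmentations, and stage two picks the candidate with the lowest perplexity.

```python
from itertools import product

# Toy unigram "language model" standing in for the word-level transformer
# (log-probabilities are made up for illustration).
WORD_LOGPROB = {"in": -1.0, "principio": -2.0, "erat": -1.5, "verbum": -2.0}
UNK_LOGPROB = -10.0  # penalty for out-of-lexicon words


def candidate_segmentations(text, boundary_probs, threshold=0.5, flip_margin=0.2):
    """Stage 1 stand-in: boundary_probs[i] is the (hypothetical) probability,
    from a character-level tagger, of a word boundary after text[i].
    Confident positions are fixed; positions near the threshold are treated
    as uncertain and enumerated in both states."""
    fixed = [p >= threshold for p in boundary_probs]
    uncertain = [i for i, p in enumerate(boundary_probs)
                 if abs(p - threshold) < flip_margin]
    for flips in product([False, True], repeat=len(uncertain)):
        labels = list(fixed)
        for i, flip in zip(uncertain, flips):
            labels[i] = flip
        # Turn boundary labels into a word list.
        words, start = [], 0
        for i, is_boundary in enumerate(labels):
            if is_boundary:
                words.append(text[start:i + 1])
                start = i + 1
        if start < len(text):
            words.append(text[start:])
        yield words


def perplexity(words):
    """Stage 2 stand-in: average negative log-probability per word."""
    nll = -sum(WORD_LOGPROB.get(w, UNK_LOGPROB) for w in words)
    return nll / len(words)


def segment(text, boundary_probs):
    """Pick the candidate segmentation with the lowest perplexity."""
    return min(candidate_segmentations(text, boundary_probs), key=perplexity)
```

Run on "inprincipioeratverbum" with confident boundaries after "in" and "principio" but an uncertain one after "erat", the rescoring step selects the four-word reading because the merged "eratverbum" falls outside the lexicon and is heavily penalized. The real system replaces both stand-ins with trained transformers, but the candidate-enumeration-plus-rescoring structure is the point of the sketch.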