Solène Tarride, Tristan Faine, Mélodie Boillet, Harold Mouchère, Christopher Kermorvant

Abstract
In this paper, we explore different ways of training a model for handwritten text recognition when multiple imperfect or noisy transcriptions are available. We consider various training configurations, such as selecting a single transcription, retaining all transcriptions, or computing an aggregated transcription from all available annotations. In addition, we evaluate the impact of quality-based data selection, where samples with low annotator agreement are removed from the training set. Our experiments are carried out on municipal registers of the city of Belfort (France) written between 1790 and 1946. The results show that computing a consensus transcription and training on multiple transcriptions are both effective alternatives to relying on a single transcription. However, selecting training samples based on the degree of agreement between annotators introduces a bias in the training data and does not improve the results. Our dataset is publicly available on Zenodo: https://zenodo.org/record/8041668.
Benchmarks
| Benchmark | Methodology | CER (%) | WER (%) |
|---|---|---|---|
| handwritten-text-recognition-on-belfort | PyLaia (human transcriptions + random split) | 10.54 | 28.11 |
| handwritten-text-recognition-on-belfort | PyLaia (all transcriptions + agreement-based split) | 4.34 | 15.14 |
| handwritten-text-recognition-on-belfort | PyLaia (human transcriptions + agreement-based split) | 5.57 | 19.12 |
| handwritten-text-recognition-on-belfort | PyLaia (ROVER consensus + agreement-based split) | 4.95 | 17.08 |