A limited-size ensemble of homogeneous CNN/LSTMs for high-performance word classification

Mahya Ameryan*, Lambert Schomaker

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

The strength of long short-term memory neural networks (LSTMs), as commonly applied, lies more in handling sequences of variable length than in handling the geometric variability of image patterns. In this paper, an end-to-end convolutional LSTM neural network is used to handle both geometric variation and sequence variability. The best results for LSTMs are often obtained by large-scale training of an ensemble of network instances. We show that high performance can be reached on a common benchmark set with just five such networks, using proper data augmentation, a proper coding scheme, and a proper voting scheme. The networks have similar architectures (convolutional neural network (CNN): five layers; bidirectional LSTM (BiLSTM): three layers, followed by a connectionist temporal classification (CTC) processing step). The approach assumes differently scaled input images and different feature-map sizes. Three datasets are used: the standard benchmark RIMES dataset (French), a historical handwritten dataset KdK (Dutch), and the standard benchmark George Washington (GW) dataset (English). The final performance obtained on the RIMES word-recognition test was 96.6%, a clear improvement over other state-of-the-art approaches that did not use a pre-trained network. On the KdK and GW datasets, our approach also shows good results. The proposed approach is deployed in the Monk search engine for historical-handwriting collections.
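
The abstract describes the core pipeline: a five-layer CNN feeds a three-layer bidirectional LSTM whose per-frame outputs are decoded with CTC, and five such trained instances are combined by voting. Below is a minimal sketch of that pipeline, assuming PyTorch; the layer widths, kernel sizes, pooling scheme, and alphabet size are illustrative assumptions, not the configuration reported in the paper.

    # Minimal CNN-BiLSTM-CTC word-recognizer sketch (illustrative, not the
    # authors' exact configuration): five conv blocks, a three-layer BiLSTM,
    # and a linear layer producing per-frame scores for CTC training/decoding.
    import torch
    import torch.nn as nn

    class CNNBiLSTMCTC(nn.Module):
        def __init__(self, num_classes: int, img_height: int = 64):
            super().__init__()
            # Five convolutional blocks; pooling reduces height more than width
            # so the horizontal axis remains a (sub-sampled) time axis.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
                nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            )
            feat_height = img_height // 16          # height after the four pooling steps
            # Three-layer bidirectional LSTM over the horizontal (time) dimension.
            self.blstm = nn.LSTM(256 * feat_height, 256, num_layers=3,
                                 bidirectional=True, batch_first=True)
            # Projection to per-frame class scores (alphabet plus the CTC blank).
            self.fc = nn.Linear(2 * 256, num_classes)

        def forward(self, x):                        # x: (batch, 1, H, W)
            f = self.cnn(x)                          # (batch, C, H', W')
            f = f.permute(0, 3, 1, 2).flatten(2)     # (batch, W', C * H'): one feature vector per frame
            h, _ = self.blstm(f)                     # (batch, W', 512)
            # Log-probabilities per frame; transpose to (T, batch, C) before nn.CTCLoss.
            return self.fc(h).log_softmax(-1)

At test time, each of the five trained instances would decode a word hypothesis, and a voting scheme over these hypotheses (for example, a plurality vote over the decoded strings) yields the final label; the exact coding and voting schemes are those detailed in the paper.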
Original language: English
Journal: Neural Computing and Applications
Early online date: 1-Feb-2021
DOIs
Publication status: E-pub ahead of print - 1-Feb-2021

Keywords

  • cs.CL
  • cs.LG
