Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition

  • Hui Li University of Adelaide
  • Peng Wang Northwestern Polytechnical University
  • Chunhua Shen University of Adelaide
  • Guyu Zhang Northwestern Polytechnical University

Abstract

Recognizing irregular text in natural scene images is challenging due to the large variance in text appearance, such as curvature, orientation and distortion. Most existing approaches rely heavily on sophisticated model designs and/or extra fine-grained annotations, which, to some extent, increase the difficulty of algorithm implementation and data collection. In this work, we propose an easy-to-implement strong baseline for irregular scene text recognition, using off-the-shelf neural network components and only word-level annotations. It is composed of a 31-layer ResNet, an LSTM-based encoder-decoder framework and a 2-dimensional attention module. Despite its simplicity, the proposed method is robust. It achieves state-of-the-art performance on irregular text recognition benchmarks and comparable results on regular text datasets. The code will be released.
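The abstract describes a 2-dimensional attention module that, at each decoding step, weights every spatial position of the CNN feature map before the LSTM decoder emits a character. Below is a minimal numpy sketch of such a 2D additive-attention step, not the authors' implementation: the function name `attend_2d`, the projection matrices `W_f` and `W_s`, and all dimensions are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

def attend_2d(features, state, W_f, W_s, w):
    """One 2D additive-attention step (hypothetical sketch).

    features: (H, W, C) CNN feature map
    state:    (S,)      decoder hidden state
    W_f:      (C, D)    projection of feature vectors
    W_s:      (S, D)    projection of the decoder state
    w:        (D,)      scoring vector
    Returns the attended glimpse (C,) and attention map (H, W).
    """
    # Additive score for every spatial position (i, j).
    scores = np.tanh(features @ W_f + state @ W_s) @ w      # (H, W)
    # Softmax over all H*W positions (stabilised by max-subtraction).
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # Glimpse: attention-weighted sum of feature vectors.
    glimpse = (alpha[..., None] * features).sum(axis=(0, 1))  # (C,)
    return glimpse, alpha

# Toy usage with an 8x25 feature map of 512 channels.
rng = np.random.default_rng(0)
H, W, C, S, D = 8, 25, 512, 256, 128
glimpse, alpha = attend_2d(
    rng.standard_normal((H, W, C)),
    rng.standard_normal(S),
    rng.standard_normal((C, D)) * 0.01,
    rng.standard_normal((S, D)) * 0.01,
    rng.standard_normal(D),
)
```

Because the softmax runs over both spatial axes rather than a 1D feature sequence, the decoder can focus on characters wherever they sit in a curved or rotated word, which is the key property the abstract attributes to this module.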

Published
2019-07-17
Section
AAAI Technical Track: Vision