Abstract
We present an attention-based model for end-to-end handwriting recognition. Our system does not require any segmentation of the input paragraph. The model is inspired by the differentiable attention models recently proposed for speech recognition, image captioning, and machine translation. The main difference is the combination of covert and overt attention, implemented with a multi-dimensional LSTM network. Our principal contribution to handwriting recognition is automatic transcription without a prior segmentation into lines, which was crucial in previous approaches. To the best of our knowledge, this is the first successful attempt at end-to-end multi-line handwriting recognition. We carried out experiments on the well-known IAM database. The results are encouraging and suggest that full paragraph transcription will be feasible in the near future.
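The sketch below illustrates the general idea of decoding a transcription by repeatedly attending over a 2D feature map of the whole paragraph image, as the abstract describes. It is a minimal illustration, not the authors' implementation: the encoder producing the feature map (e.g. an MD-LSTM), the module names, and the tensor sizes are all assumptions for the example.

```python
# Minimal sketch of attention-based decoding over a 2D feature map
# (illustrative only; not the paper's actual architecture or code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attend2DDecoder(nn.Module):
    def __init__(self, feat_dim, state_dim, vocab_size):
        super().__init__()
        self.score = nn.Linear(feat_dim + state_dim, 1)  # attention scorer
        self.rnn = nn.LSTMCell(feat_dim, state_dim)      # decoder state tracker
        self.out = nn.Linear(state_dim, vocab_size)      # character logits

    def forward(self, features, num_steps):
        # features: (B, H, W, feat_dim) -- feature map of the full paragraph image
        B, H, W, D = features.shape
        flat = features.view(B, H * W, D)                # flatten spatial positions
        h = flat.new_zeros(B, self.rnn.hidden_size)
        c = flat.new_zeros(B, self.rnn.hidden_size)
        logits = []
        for _ in range(num_steps):
            # Score every position against the current decoder state,
            # then normalise into an attention map over the image.
            expanded = h.unsqueeze(1).expand(B, H * W, h.size(-1))
            scores = self.score(torch.cat([flat, expanded], dim=-1)).squeeze(-1)
            alpha = F.softmax(scores, dim=-1)
            # Weighted sum of features = context for the next character.
            context = (alpha.unsqueeze(-1) * flat).sum(dim=1)
            h, c = self.rnn(context, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                # (B, num_steps, vocab_size)

# Example usage with dummy data (sizes are arbitrary):
if __name__ == "__main__":
    decoder = Attend2DDecoder(feat_dim=64, state_dim=128, vocab_size=80)
    fake_features = torch.randn(2, 16, 40, 64)           # stand-in for encoder output
    out = decoder(fake_features, num_steps=50)
    print(out.shape)                                      # torch.Size([2, 50, 80])
```

In the paper the attention is described as combining covert and overt behaviour and being computed with a multi-dimensional LSTM; the simple linear scorer above merely stands in for that mechanism to keep the example self-contained.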
URL
https://arxiv.org/abs/1604.03286