Design of a linguistic postprocessor using variable memory length Markov models.

I. Guyon and F. Pereira.
In International Conference on Document Analysis and Recognition, pages 454--457, Montreal, Canada, 1995. IEEE Computer Society Press.



We describe a linguistic postprocessor for character recognizers. The central module of our system is a trainable variable memory length Markov model (VLMM) that predicts the next character given a variable length window of past characters. The overall system is composed of several finite state automata, including the main VLMM and a proper noun VLMM. The best model reported in the literature (Brown et al., 1992) achieves 1.75 bits per character on the Brown corpus. On that same corpus, our model, trained on 10 times less data, reaches 2.19 bits per character and uses 200 times fewer parameters. The model was designed for handwriting recognition applications but could also be used for other OCR problems and for speech recognition.
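To illustrate the idea of a VLMM next-character predictor and the bits-per-character metric quoted above, here is a minimal Python sketch. It assumes a simple count-based context tree with longest-suffix fallback and add-one smoothing; the class name, pruning threshold, and smoothing choice are illustrative assumptions, not the training procedure used in the paper.

    # Minimal sketch of a variable memory length Markov model (VLMM)
    # for next-character prediction. Illustrative only: the pruning
    # rule and smoothing are assumptions, not the paper's method.
    from collections import defaultdict
    import math


    class VLMM:
        def __init__(self, max_order=5, min_count=2):
            self.max_order = max_order    # longest context considered
            self.min_count = min_count    # contexts seen fewer times are pruned
            self.counts = defaultdict(lambda: defaultdict(int))  # context -> next char -> count
            self.alphabet = set()

        def train(self, text):
            self.alphabet.update(text)
            for i, ch in enumerate(text):
                # record the next character under every context length up to max_order
                for k in range(0, self.max_order + 1):
                    if i - k < 0:
                        break
                    context = text[i - k:i]
                    self.counts[context][ch] += 1
            # prune rarely seen contexts; always keep the empty context as a fallback
            self.counts = {c: d for c, d in self.counts.items()
                           if c == "" or sum(d.values()) >= self.min_count}

        def prob(self, context, ch):
            # fall back to the longest stored suffix of the context (variable memory length)
            suffix = context[-self.max_order:]
            while suffix not in self.counts:
                suffix = suffix[1:]
            dist = self.counts[suffix]
            total = sum(dist.values())
            # add-one smoothing so unseen characters keep nonzero probability
            return (dist.get(ch, 0) + 1) / (total + len(self.alphabet))

        def bits_per_char(self, text):
            # average negative log2 probability: the metric quoted in the abstract
            bits = 0.0
            for i, ch in enumerate(text):
                bits -= math.log2(self.prob(text[:i], ch))
            return bits / len(text)


    if __name__ == "__main__":
        model = VLMM(max_order=4)
        model.train("the quick brown fox jumps over the lazy dog " * 50)
        print(model.bits_per_char("the lazy fox jumps"))

In a postprocessing setting, such a model would rescore the character hypotheses produced by the recognizer rather than generate text on its own.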


