Uncovering the Hidden Cost of Model Compression
Diganta Misra, Muawiz Chaudhary, et al.
CVPRW 2024
For large-vocabulary handwriting-recognition applications, such as note-taking, word-level language modeling is of key importance: it constrains the recognizer's search and contributes to the scoring of hypothesized texts. We discuss the creation of a word-unigram language model, which associates probabilities with individual words. Typically, such models are derived from a large, diverse text corpus. We describe a three-stage algorithm for deriving a word-unigram model from such a corpus. First is tokenization, the segmenting of the corpus into words. Second, we select for the model a subset of the distinct words found during tokenization. The complexities of these stages are discussed. Finally, we create recognizer-specific data structures for the word set and unigram. Applying our method to a 600-million-word corpus, we generate a 50,000-word model that eliminates 45% of the word-recognition errors made by a baseline system employing only a character-level language model. © 2001 IEEE.
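The three stages described above map naturally onto a small pipeline: segment the corpus into words, keep the most frequent distinct words, and normalize their counts into probabilities. The sketch below is a minimal illustration under assumed simplifications (a regex tokenizer, frequency-ranked vocabulary selection, and a plain dictionary standing in for recognizer-specific data structures); the function names are hypothetical, and only the 50,000-word cutoff comes from the abstract. It is not the paper's implementation.

```python
import re
from collections import Counter

# Hypothetical sketch of the three stages; the paper's actual tokenizer,
# selection criteria, and data structures may differ.

def tokenize(text):
    """Stage 1: segment raw corpus text into words.
    A simple regex stands in for the paper's tokenizer."""
    return re.findall(r"[A-Za-z']+", text.lower())

def select_vocabulary(tokens, max_words=50_000):
    """Stage 2: keep the most frequent distinct words for the model."""
    counts = Counter(tokens)
    return dict(counts.most_common(max_words))

def build_unigram(selected_counts):
    """Stage 3: convert counts to probabilities, P(w) = c(w) / N.
    A real recognizer would pack these into search-friendly structures
    (e.g., a trie over the word set) rather than a Python dict."""
    total = sum(selected_counts.values())
    return {w: c / total for w, c in selected_counts.items()}

# Toy usage on a tiny corpus:
corpus = "the cat sat on the mat and the dog sat too"
model = build_unigram(select_vocabulary(tokenize(corpus)))
print(model["the"])  # relative frequency of "the" in the toy corpus
```

Frequency-ranked selection is only one plausible criterion for the second stage; a production system might also filter by character set, spelling validity, or coverage targets before fixing the word list.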