Semantic structured language models
Hakan Erdogan, Ruhi Sarikaya, et al.
ICSLP 2002
Language modeling for an inflected language such as Arabic poses new challenges for speech recognition and machine translation due to its rich morphology. Rich morphology results in large increases in out-of-vocabulary (OOV) rate and poor language model parameter estimation in the absence of large quantities of data. In this study, we present a joint morphological-lexical language model (JMLLM) that takes advantage of Arabic morphology. JMLLM combines morphological segments with the underlying lexical items, and additional available information sources with regard to morphological segments and lexical items, in a single joint model. Joint representation and modeling of morphological and lexical items reduces the OOV rate and provides smooth probability estimates while keeping the predictive power of whole words. Speech recognition and machine translation experiments on dialectal Arabic show improvements over word-based and morpheme-based trigram language models. We also show that as the tightness of integration between different information sources increases, both speech recognition and machine translation performance improve. © 2008 IEEE.
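The OOV reduction the abstract attributes to morpheme-level modeling can be illustrated with a toy sketch. The affix lists and greedy segmenter below are hypothetical stand-ins, not the paper's actual Arabic morphological segmentation:

```python
# Toy illustration of how a morpheme-level vocabulary lowers OOV rate.
# PREFIXES/SUFFIXES and the greedy segmenter are assumptions for this
# sketch, not the paper's actual morphological analyzer.

PREFIXES = ["al", "wa", "bi"]   # assumed prefix inventory
SUFFIXES = ["ha", "at", "in"]   # assumed suffix inventory

def segment(word):
    """Greedily split off at most one known prefix and one known suffix."""
    parts = []
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            parts.append(p + "+")
            word = word[len(p):]
            break
    tail = None
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            tail = "+" + s
            word = word[:-len(s)]
            break
    parts.append(word)
    if tail:
        parts.append(tail)
    return parts

def oov_rate(train_units, test_units):
    """Fraction of test units never seen in training."""
    vocab = set(train_units)
    return sum(u not in vocab for u in test_units) / len(test_units)

train_words = ["alkitab", "kitabha"]   # toy training surface forms
test_words = ["bikitab", "kitabin"]    # unseen surface forms

# Word-level: every test word is new -> OOV rate 1.0
word_oov = oov_rate(train_words, test_words)

# Morpheme-level: the shared stem "kitab" is covered -> OOV rate 0.5
morph_oov = oov_rate(
    [m for w in train_words for m in segment(w)],
    [m for w in test_words for m in segment(w)],
)
```

Segmenting shrinks the effective vocabulary to recurring stems and affixes, which is why morpheme units yield coverage and smoother estimates; the JMLLM's contribution is keeping the whole-word context alongside these segments in one joint model.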