Natural concept generation is critical to the performance of statistical interlingua-based speech-to-speech translation. To improve maximum-entropy-based concept generation, we propose a forward-backward modeling approach that generates concept sequences in the target language by selecting the hypothesis with the highest combined conditional probability under both the forward and backward generation models. Statistical language models are further applied to exploit word-level context. On our limited-domain speech translation corpus, the concept generation error rate is reduced by over 20%, and corresponding improvements are observed in our speech translation experiments.
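The abstract does not give the exact combination formula, so the following is a minimal sketch of the selection rule it describes, assuming a log-linear combination c* = argmax_c [ log P_fwd(c | w) + log P_bwd(c | w) + lambda * log P_LM(c) ]. All function names, the interpolation weight, and the toy data are illustrative assumptions, not taken from the paper:

    import math

    def combined_score(concepts, p_forward, p_backward, p_lm, lm_weight=0.5):
        # Log-linear combination of the forward generation model, the
        # backward generation model, and a word-level language model.
        # The interpolation weight is a placeholder; the abstract does
        # not specify one.
        return (math.log(p_forward(concepts))
                + math.log(p_backward(concepts))
                + lm_weight * math.log(p_lm(concepts)))

    def select_hypothesis(hypotheses, p_forward, p_backward, p_lm):
        # Pick the target-language concept sequence with the highest
        # combined conditional probability, as in the forward-backward
        # approach described above.
        return max(hypotheses,
                   key=lambda c: combined_score(c, p_forward,
                                                p_backward, p_lm))

    # Toy usage: uniform stub models over two candidate concept sequences.
    hyps = [("GREETING", "REQUEST-INFO"), ("GREETING", "CONFIRM")]
    uniform = lambda c: 1.0 / len(hyps)
    best = select_hypothesis(hyps, uniform, uniform, uniform)

In practice the three scoring functions would be the trained maximum-entropy forward model, its backward counterpart, and an n-gram language model; the stub models here only demonstrate the argmax selection over the hypothesis list.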