Hua Yang, Ligang Lu
ICASSP 2004
Visual information in a speaker's face is known to improve the robustness of automatic speech recognizers. However, most studies in audio-visual ASR have focused on "visually clean" data to benefit ASR in noise. This paper is a follow-up to a previous study that investigated audio-visual ASR in visually challenging environments. It focuses on visual speech front-end processing, and it proposes an improved, appearance-based face and facial feature detection algorithm that utilizes Gaussian mixture model classifiers. This method is shown to improve the accuracy of face and feature detection, and thus visual speech recognition, over our previously used baseline system. In turn, this translates to improved audio-visual ASR, resulting in a 10% relative reduction of the word error rate in noisy speech.
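The abstract's appearance-based detector scores candidate image regions with Gaussian mixture model classifiers. A minimal sketch of that idea, not the authors' actual system: each candidate patch is reduced to a feature vector, and two GMMs (face vs. non-face) compare log-likelihoods. All parameters and the `classify_patch` helper below are hypothetical toy values for illustration; in practice the mixtures would be trained on labeled patches (e.g. via EM).

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM."""
    comps = []
    for w, mu, var in zip(weights, means, variances):
        log_norm = -0.5 * np.sum(np.log(2 * np.pi * var))
        log_exp = -0.5 * np.sum((x - mu) ** 2 / var)
        comps.append(np.log(w) + log_norm + log_exp)
    comps = np.array(comps)
    m = comps.max()
    return m + np.log(np.exp(comps - m).sum())  # log-sum-exp over components

def classify_patch(x, face_gmm, nonface_gmm, threshold=0.0):
    """Accept the patch as a face if the log-likelihood ratio exceeds threshold."""
    llr = gmm_log_likelihood(x, *face_gmm) - gmm_log_likelihood(x, *nonface_gmm)
    return llr > threshold

# Toy 2-D parameters (weights, means, diagonal variances) -- illustrative only.
face_gmm = ([0.6, 0.4],
            [np.array([1.0, 1.0]), np.array([2.0, 0.5])],
            [np.array([0.5, 0.5]), np.array([0.3, 0.3])])
nonface_gmm = ([1.0],
               [np.array([-1.0, -1.0])],
               [np.array([1.0, 1.0])])

print(classify_patch(np.array([1.1, 0.9]), face_gmm, nonface_gmm))    # True
print(classify_patch(np.array([-1.2, -0.8]), face_gmm, nonface_gmm))  # False
```

The likelihood-ratio threshold trades off missed faces against false detections, which matters in the visually degraded conditions the paper targets.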