Clustering of bootstrapped acoustic model with full covariance
Xin Chen, Xiaodong Cui, et al.
ICASSP 2011
This paper proposes an acoustic modeling approach based on bootstrap and restructuring to deal with data sparsity for low-resource languages. The goal of the approach is to improve the statistical reliability of acoustic modeling for automatic speech recognition (ASR) under the speed, memory, and response-latency requirements of real-world applications. In this approach, randomized hidden Markov models (HMMs) estimated from the bootstrapped training data are aggregated for reliable sequence prediction. The aggregation leads to an HMM with superior prediction capability at the cost of a substantially larger size. For practical use, the aggregated HMM is restructured by Gaussian clustering followed by model refinement. The restructuring aims to reduce the aggregated HMM to a desirable model size while keeping its performance close to that of the original aggregated HMM. To that end, various Gaussian clustering criteria and model refinement algorithms are investigated in the full covariance model space before the conversion to the diagonal covariance model space in the last stage of the restructuring. Large vocabulary continuous speech recognition (LVCSR) experiments on Pashto and Dari show that acoustic models obtained by the proposed approach can yield superior performance over the conventional training procedure with almost the same run-time memory consumption and decoding speed. © 2012 IEEE.
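The clustering stage of the restructuring can be illustrated with a minimal sketch: greedily merge the closest pair of full-covariance Gaussians (here under a symmetric KL criterion, one plausible choice among the several criteria the paper investigates) using moment matching, until the target model size is reached. All function names and the specific criterion below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def kl_gauss(m1, S1, m2, S2):
    """KL divergence KL(N(m1,S1) || N(m2,S2)) between full-covariance Gaussians."""
    d = len(m1)
    S2inv = np.linalg.inv(S2)
    diff = m2 - m1
    return 0.5 * (np.trace(S2inv @ S1) + diff @ S2inv @ diff
                  - d + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

def merge_pair(w1, m1, S1, w2, m2, S2):
    """Moment-matched merge: preserves the mean and covariance of the weighted pair."""
    w = w1 + w2
    a, b = w1 / w, w2 / w
    m = a * m1 + b * m2
    S = (a * (S1 + np.outer(m1 - m, m1 - m))
         + b * (S2 + np.outer(m2 - m, m2 - m)))
    return w, m, S

def cluster_gaussians(weights, means, covs, target):
    """Greedily merge the closest pair (symmetric KL) until `target` components remain."""
    comps = list(zip(weights, means, covs))
    while len(comps) > target:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                _, mi, Si = comps[i]
                _, mj, Sj = comps[j]
                d = kl_gauss(mi, Si, mj, Sj) + kl_gauss(mj, Sj, mi, Si)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps
```

In an aggregated bootstrapped HMM, such a reduction would be applied per state to shrink the mixture to a practical size before the final conversion to diagonal covariances.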