A new technique for constructing Markov models for the acoustic representation of words is described. Word models are constructed from models of sub-word units called fenones. Fenones represent very short speech events, and are obtained automatically through the use of a vector quantizer. The fenonic baseform for a word—i.e., the sequence of fenones used to represent the word—is derived automatically from one or more utterances of that word. Since the word models are all composed from a small inventory of sub-word models, training for large-vocabulary speech recognition systems can be accomplished with a small training script. A method for combining phonetic and fenonic models is presented. Results of experiments with speaker-dependent and speaker-independent models on several isolated-word recognition tasks are reported. Comparative results with phonetics-based Markov models and template-based DP matching are also given. © 1993 IEEE
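The abstract describes deriving a word's fenonic baseform by vector-quantizing the acoustic frames of an utterance of that word. As a rough illustration only, and not the authors' implementation, the Python sketch below assigns each frame of a hypothetical feature sequence to its nearest codeword and treats the resulting label sequence as the baseform; frame extraction, codebook training, the handling of multiple utterances, and the Markov-model training step are all assumed or omitted.

```python
# Minimal sketch, assuming a pre-trained vector-quantizer codebook whose
# entries stand in for fenones. This is an illustration of the idea, not
# the method from the paper.
import numpy as np

def fenonic_baseform(frames: np.ndarray, codebook: np.ndarray) -> list[int]:
    """Label each short acoustic frame with its nearest codeword (fenone)
    and return the per-frame label sequence as the word's baseform."""
    # Euclidean distance from every frame to every codeword.
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)  # one fenone label per frame
    return [int(label) for label in labels]

# Usage with stand-in random data: 120 frames of 13-dim features and a
# 200-entry codebook (i.e., an inventory of 200 fenones).
rng = np.random.default_rng(0)
frames = rng.normal(size=(120, 13))
codebook = rng.normal(size=(200, 13))
print(fenonic_baseform(frames, codebook)[:10])
```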