Leda Sari, Samuel Thomas, et al.
INTERSPEECH 2019
Evolutionary stochastic gradient descent (ESGD) was proposed as a population-based approach that combines the merits of gradient-aware and gradient-free optimization algorithms for superior overall optimization performance. In this paper we investigate a variant of ESGD for the optimization of acoustic models for automatic speech recognition (ASR). In this variant, we assume the existence of a well-trained acoustic model and use it as an anchor in the parent population, whose good "genes" propagate through the evolution to the offspring. We propose an ESGD algorithm leveraging the anchor models such that the best fitness of the population is guaranteed never to degrade below that of the anchor model. Experiments on 50-hour Broadcast News (BN50) and 300-hour Switchboard (SWB300) show that ESGD with anchors can further improve the loss and ASR performance over existing well-trained acoustic models.
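The abstract describes the anchor mechanism only at a high level; the sketch below is a minimal, hypothetical illustration of the idea, not the paper's implementation. It uses a toy quadratic `loss` as a stand-in for an acoustic-model loss, and the helpers `sgd_step` and `mutate` are assumed names for the gradient-aware and gradient-free phases. The anchor is kept in the parent pool (elitism-style), so the reported best fitness can never fall below the anchor's.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    """Stand-in fitness: a simple quadratic; the paper uses an acoustic-model loss."""
    return float(np.sum((w - 3.0) ** 2))

def sgd_step(w, lr=0.1):
    """Gradient-aware update (here the exact gradient of the toy quadratic)."""
    return w - lr * 2.0 * (w - 3.0)

def mutate(w, sigma=0.5):
    """Gradient-free perturbation playing the role of the evolutionary step."""
    return w + sigma * rng.standard_normal(w.shape)

def esgd_with_anchor(anchor, pop_size=8, generations=20):
    # Seed the population with perturbed copies of the well-trained anchor.
    population = [mutate(anchor) for _ in range(pop_size - 1)]
    for _ in range(generations):
        # SGD phase: every individual takes a gradient step.
        population = [sgd_step(w) for w in population]
        # Evolution phase: the anchor stays in the parent pool (elitism),
        # so the best parent is never worse than the anchor.
        parents = sorted(population + [anchor], key=loss)
        survivors = parents[: pop_size // 2]
        # Offspring inherit "genes" from the fittest parents via mutation.
        offspring = [mutate(survivors[i % len(survivors)])
                     for i in range(pop_size - len(survivors))]
        population = survivors + offspring
    # Including the anchor in the final selection guarantees no degradation.
    return min(population + [anchor], key=loss)

anchor = np.full(4, 2.5)           # pretend this is a well-trained model
best = esgd_with_anchor(anchor)
print(loss(anchor), loss(best))    # best fitness never exceeds the anchor's loss
```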
Michael Picheny, Zoltan Tuske, et al.
INTERSPEECH 2019
Rui Zhang, Conrad Albrecht, et al.
KDD 2020
Hakan Erdogan, Ruhi Sarikaya, et al.
ICSLP 2002