Murali Karthick Baskar, Shinji Watanabe, et al.
ICASSP 2021
Self-supervised ASR-TTS models suffer under out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) the ASR→TTS direction is equipped with a language model reward to penalize the ASR hypotheses before forwarding them to TTS; 2) in the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before sending it to ASR, to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by an absolute 2.6% and 2.7% on LibriSpeech and BABEL respectively.
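A minimal PyTorch sketch of how the two EAT ingredients described in this abstract might be wired up; the function names, the interpolation used as the language-model reward, and the default weights (lm_reward_weight, context_scale) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the two EAT features above (assumed names and defaults,
# not the paper's code): an LM reward that penalizes ASR hypotheses in the
# ASR->TTS direction, and a scaled attention context in the TTS->ASR direction.
import torch


def asr_to_tts_loss(hyp_log_probs, lm_log_probs, tts_losses, lm_reward_weight=0.5):
    """Weight each n-best hypothesis' TTS reconstruction loss by a reward
    that interpolates ASR confidence with a language-model score, so
    LM-implausible hypotheses are penalized before reaching TTS."""
    reward = (1.0 - lm_reward_weight) * hyp_log_probs + lm_reward_weight * lm_log_probs
    weights = torch.softmax(reward, dim=-1)  # normalize over the n-best list
    return (weights * tts_losses).sum()      # expected TTS loss under the reward


def scale_tts_context(attention_context, context_scale=0.8):
    """Scale the attention context computed from synthesized speech by a
    hyper-parameter before the ASR decoder consumes it, damping the
    synthetic/real mismatch on out-of-domain data."""
    return context_scale * attention_context


# Toy usage: a 4-best hypothesis list and an 8-dimensional context vector.
hyp_lp = torch.tensor([-1.2, -2.0, -2.5, -3.1])   # ASR hypothesis log-probs
lm_lp = torch.tensor([-0.9, -2.8, -1.7, -3.5])    # LM log-probs of the same texts
tts_l = torch.tensor([10.4, 12.1, 11.3, 13.0])    # per-hypothesis TTS losses
print(asr_to_tts_loss(hyp_lp, lm_lp, tts_l))
print(scale_tts_context(torch.randn(8)).shape)
```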
Xiaodong Cui, Songtao Lu, et al.
ICASSP 2021
Djallel Bouneffouf, Raphael Feraud, et al.
ICASSP 2021
Oldřich Plchot, Lukáš Burget, et al.
ICASSP 2016