Ensembles of multi-scale VGG acoustic models
Michael Heck, Masayuki Suzuki, et al.
INTERSPEECH 2017
In recent years, deep learning has achieved great success in speech enhancement. However, existing work has two major limitations. First, many deep-learning-based algorithms do not adopt the Bayesian framework; in particular, the prior distribution over speech has been shown to be useful, regularizing the output to lie in the speech space and thereby improving performance. Second, most existing methods operate in the frequency domain of the noisy speech, e.g., on spectrograms and their variants, and reconstruct the clean speech with the overlap-add procedure, which is limited by an inherent performance upper bound. This paper presents a Bayesian speech enhancement framework, called BaWN (Bayesian WaveNet), which operates directly on raw audio samples. It adopts the recently proposed WaveNet, which has been shown to be effective at modeling conditional distributions of speech samples while generating natural speech. Experiments show that BaWN is able to recover clean and natural speech.
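A minimal sketch of the Bayesian formulation the abstract alludes to, assuming the standard posterior decomposition with an autoregressive speech prior (the exact objective and conditioning used by BaWN may differ):

p(x \mid y) \propto p(y \mid x)\, p(x), \qquad p(x) = \prod_{t} p(x_t \mid x_{<t})

Here y denotes the noisy waveform, x the clean waveform, p(y | x) a noise/likelihood model, and p(x) the speech prior, modeled autoregressively over raw samples (e.g., by a WaveNet); the prior term is what regularizes the estimate to lie in the space of natural speech.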
Guanhua Zhang, Bing Bai, et al.
ACL 2019
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
Bairu Hou, Yujian Liu, et al.
ICML 2024