MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain
2023-06-19
Author(s): Guo, HM (Guo, Huimin); Jian, HF (Jian, Haifang); Wang, YQ (Wang, Yequan); Wang, HC (Wang, Hongchang); Zhao, XF (Zhao, Xiaofan); Zhu, WQ (Zhu, Wenqi); Cheng, QH (Cheng, Qinghua)
Source: APPLIED ACOUSTICS Volume: 209 Article Number: 109385 DOI: 10.1016/j.apacoust.2023.109385 Early Access Date: MAY 2023 Published: JUN 30 2023
Abstract: In speech enhancement (SE), the mismatch between the objective function used to train the SE model and the evaluation metric leads to low-quality generated speech. Although existing studies have attempted to use a metric discriminator to learn a surrogate of the evaluation metric from data and thereby guide generator updates, the discriminator's simple structure cannot closely approximate the metric, which limits SE performance. This paper proposes a multiscale attention metric generative adversarial network (MAMGAN) to resolve this problem. In the metric discriminator, an attention mechanism is introduced to emphasize meaningful features along the spatial and channel directions, avoiding the feature loss caused by direct average pooling, so that the discriminator better approximates the evaluation metric and further improves SE performance. In addition, motivated by the effectiveness of self-attention in capturing long-term dependencies, we construct a multiscale attention module (MSAM) that considers multiple representations of the signal and better models the features of long sequences. Ablation experiments verify the effectiveness of the attention metric discriminator and the MSAM. Quantitative analysis on the Voice Bank + DEMAND dataset shows that MAMGAN outperforms various time-domain SE methods, achieving a perceptual evaluation of speech quality (PESQ) score of 3.30.
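The abstract describes a metric discriminator that regresses the evaluation metric and uses channel/spatial attention instead of plain average pooling before its score head. The sketch below illustrates that general idea in PyTorch; it is a minimal, assumption-based illustration of MetricGAN-style training with attention pooling, not the authors' architecture (module names, layer sizes, and loss weights are hypothetical).

```python
# Hypothetical sketch: attention-pooled metric discriminator trained MetricGAN-style.
# All names, shapes, and hyperparameters are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ChannelSpatialAttentionPool(nn.Module):
    """Re-weight features along the channel and temporal axes before pooling,
    so salient regions are not washed out by direct averaging."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze over time, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial (temporal) attention: one weight per time step.
        self.spatial_conv = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        ch_w = self.channel_mlp(x.mean(dim=-1)).unsqueeze(-1)  # (B, C, 1)
        sp_w = self.spatial_conv(x)                            # (B, 1, T)
        return (x * ch_w * sp_w).mean(dim=-1)                  # (B, C)

class MetricDiscriminator(nn.Module):
    """Maps (enhanced, clean) waveform pairs to a predicted, normalized
    quality score in [0, 1] (e.g. a rescaled PESQ value)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=16, stride=4), nn.PReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=16, stride=4), nn.PReLU(),
        )
        self.pool = ChannelSpatialAttentionPool(hidden)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        # enhanced, clean: (batch, samples)
        feats = self.encoder(torch.stack([enhanced, clean], dim=1))
        return self.head(self.pool(feats))

# MetricGAN-style objectives (illustrative): the discriminator regresses the
# true metric score of the enhanced speech; the generator is pushed toward
# the maximum normalized score of 1.0.
def discriminator_loss(d, enhanced, clean, true_score):
    return ((d(enhanced.detach(), clean) - true_score) ** 2).mean() + \
           ((d(clean, clean) - 1.0) ** 2).mean()

def generator_loss(d, enhanced, clean):
    return ((d(enhanced, clean) - 1.0) ** 2).mean()
```

The attention pooling is the point of contrast with a plain metric discriminator: rather than averaging encoder features directly, the channel and temporal weights let the score head focus on the regions that most affect the evaluation metric.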
Accession Number: WOS:000997548200001
ISSN: 0003-682X
eISSN: 1872-910X