Singapore Institute of Technology


Self-attention generative adversarial network for speech enhancement

conference contribution
posted on 2024-04-03, 04:14 authored by Huy Phan, Huy Le Nguyen, Oliver Y. Chén, Philipp Koch, Ngoc Q. K. Duong, Ian McLoughlin, Alfred Mertins

Existing generative adversarial networks (GANs) for speech enhancement rely solely on the convolution operation, which may obscure temporal dependencies across the sequence input. To remedy this issue, we propose a self-attention layer adapted from non-local attention, coupled with the convolutional and deconvolutional layers of a speech enhancement GAN (SEGAN) operating on raw-signal input. Further, we empirically study the effect of placing the self-attention layer at (de)convolutional layers of varying indices, as well as at all of them when memory allows. Our experiments show that introducing self-attention to SEGAN leads to consistent improvement across the objective evaluation metrics of enhancement performance. Furthermore, placing the layer at different (de)convolutional layers does not significantly alter performance, suggesting that it can conveniently be applied at the highest-level (de)convolutional layer, where the memory overhead is smallest.
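The record does not include code, but the abstract describes a non-local self-attention layer attached to 1-D (de)convolutional feature maps. Below is a minimal PyTorch sketch of such a layer in the SAGAN style (query/key/value 1x1 convolutions, a pairwise softmax attention map, and a learned residual weight). The class name SelfAttention1d, the channel-reduction factor of 8, and the zero-initialised gamma parameter are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention1d(nn.Module):
        """Non-local self-attention over a 1-D feature map of shape
        (batch, channels, length); a sketch, not the paper's exact code."""

        def __init__(self, channels, reduction=8):
            super().__init__()
            # 1x1 convolutions produce query, key, and value projections.
            self.query = nn.Conv1d(channels, channels // reduction, kernel_size=1)
            self.key   = nn.Conv1d(channels, channels // reduction, kernel_size=1)
            self.value = nn.Conv1d(channels, channels, kernel_size=1)
            # Learned residual weight; zero init means the layer starts
            # as an identity mapping around the convolutional path.
            self.gamma = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            B, C, L = x.shape
            q = self.query(x).permute(0, 2, 1)        # (B, L, C//r)
            k = self.key(x)                           # (B, C//r, L)
            attn = F.softmax(torch.bmm(q, k), dim=-1) # (B, L, L) weights over positions
            v = self.value(x)                         # (B, C, L)
            out = torch.bmm(v, attn.permute(0, 2, 1)) # attention-weighted sum, (B, C, L)
            return self.gamma * out + x               # residual connection

    # Hypothetical usage at one encoder stage of a SEGAN-like model:
    layer = SelfAttention1d(channels=256)
    feat = torch.randn(4, 256, 1024)  # (batch, channels, time steps)
    out = layer(feat)                 # same shape, attention-refined

Because gamma is initialised to zero, the layer behaves as an identity residual at the start of training, which is one plausible way such a module could be dropped at any (de)convolutional layer index, as the abstract's placement study describes, without disturbing the convolutional path.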

History

Journal/Conference/Book title

46th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2021), 6-11 June 2021, Toronto, Ontario, Canada.

Publication date

2021-06-06
