
Time–Frequency Feature Fusion for Noise Robust Audio Event Classification

journal contribution
posted on 2024-04-03, 07:28, authored by Ian McLoughlin, Zhipeng Xie, Yan Song, Huy Phan, Ramaswamy Palaniappan

This paper explores the use of three different two-dimensional time–frequency features for audio event classification with deep neural network back-end classifiers. The evaluations use spectrogram, cochleogram and constant-Q transform-based images for classification of 50 classes of audio events in varying levels of acoustic background noise, revealing interesting performance patterns with respect to noise level, feature image type and classifier. Evidence is obtained that two well-performing features, the spectrogram and cochleogram, make use of information that is potentially complementary in the input features. Feature fusion is thus explored for each pair of features, as well as for all tested features. Results indicate that a fusion of spectrogram and cochleogram information is particularly beneficial, yielding an impressive 50-class accuracy of over 96% in 0 dB SNR and exceeding 99% accuracy in 10 dB SNR and above. Meanwhile, the cochleogram image feature is found to perform well in extreme noise cases of −5 dB and −10 dB SNR.
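The sketch below is a minimal, illustrative example of the general idea of fusing two 2-D time–frequency images into a single classifier input, assuming librosa is available; it uses the spectrogram and constant-Q transform, since a cochleogram would additionally require a gammatone filterbank (e.g. a separate gammatone package). The feature settings, fusion strategy and classifier here are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative early-fusion sketch: stack two time-frequency feature images
# (log-magnitude spectrogram and constant-Q transform) into one 2-D input.
# All parameter choices below are assumptions for demonstration only.
import numpy as np
import librosa

sr = 16000                                   # assumed sample rate
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)  # toy 1 s signal

hop = 256

# Log-magnitude spectrogram (STFT image).
spec = librosa.amplitude_to_db(
    np.abs(librosa.stft(y, n_fft=512, hop_length=hop)), ref=np.max)

# Log-magnitude constant-Q transform image.
cqt = librosa.amplitude_to_db(
    np.abs(librosa.cqt(y, sr=sr, hop_length=hop, n_bins=84)), ref=np.max)

# Trim both images to a common number of frames, then fuse by stacking
# along the frequency axis so a single "image" carries both feature types.
frames = min(spec.shape[1], cqt.shape[1])
fused = np.vstack([spec[:, :frames], cqt[:, :frames]])

print(spec.shape, cqt.shape, fused.shape)    # e.g. (257, 63) (84, 63) (341, 63)
```

Stacking along the frequency axis is only one possible early-fusion choice; resizing the images to a common shape and treating each feature as a separate input channel of a CNN would serve the same purpose.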

History

Journal/Conference/Book title

Circuits, Systems, and Signal Processing

Publication date

2020-03-20

Version

  • Published
