An effective deep embedding learning method based on dense-residual networks for speaker verification

Conference contribution
Posted on 2024-04-03 by Ying Liu, Yan Song, Ian McLoughlin, Lin Liu, Li-Rong Dai

In this paper, we present an effective end-to-end deep embedding learning method for speaker verification (SV) based on Dense-Residual networks, which combine the advantages of the densely connected convolutional network (DenseNet) and the residual network (ResNet). Unlike a model-ensemble strategy that merges the outputs of multiple systems, the proposed Dense-Residual networks perform feature fusion within every basic DenseR building block. Specifically, two types of DenseR blocks are designed. A sequential-DenseR block is constructed by densely connecting the stacked basic units inside a residual block of ResNet. A parallel-DenseR block applies split and concatenation operations to the residual and dense components via corresponding skip connections. These building blocks are stacked into deep networks to exploit complementary information across different receptive field sizes and growth rates. Extensive experiments were conducted on the VoxCeleb1 dataset to evaluate the proposed methods. The proposed Dense-Residual networks are shown to outperform the corresponding ResNet, DenseNet, and fusions of the two with similar model complexity by a significant margin.
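To make the sequential-DenseR idea concrete, the sketch below shows one possible reading of such a block: a few stacked convolutional units are densely connected (each unit sees the concatenation of all earlier feature maps), and an identity skip connection is added around the whole group. This is a minimal PyTorch-style illustration, not the authors' implementation; the unit count, growth rate, and 1x1 projection are assumptions chosen only for demonstration.

    # Illustrative sketch only, not the paper's code.
    import torch
    import torch.nn as nn

    class SequentialDenseRBlock(nn.Module):
        def __init__(self, channels, growth_rate=16, num_units=3):
            super().__init__()
            self.units = nn.ModuleList()
            in_ch = channels
            for _ in range(num_units):
                # Each basic unit receives the concatenation of the block
                # input and all preceding unit outputs (dense connectivity).
                self.units.append(nn.Sequential(
                    nn.BatchNorm2d(in_ch),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(in_ch, growth_rate, kernel_size=3,
                              padding=1, bias=False),
                ))
                in_ch += growth_rate
            # 1x1 conv maps the dense feature map back to the input width
            # so the residual addition is well defined.
            self.project = nn.Conv2d(in_ch, channels, kernel_size=1, bias=False)

        def forward(self, x):
            features = [x]
            for unit in self.units:
                features.append(unit(torch.cat(features, dim=1)))
            # Residual (identity) connection around the densely connected units.
            return x + self.project(torch.cat(features, dim=1))

Under these assumptions, such a block is a drop-in replacement for a standard ResNet block, e.g. SequentialDenseRBlock(64)(torch.randn(8, 64, 40, 100)) for a batch of 40x100 spectral feature maps; the parallel-DenseR variant described in the abstract would instead split the input into residual and dense paths processed side by side and then concatenated.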

History

Journal/Conference/Book title

ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 06-11 June 2021, Toronto, Ontario, Canada.

Publication date

2021-05-13
