An effective deep embedding learning method based on dense-residual networks for speaker verification
In this paper, we present an effective end-to-end deep embedding learning method for speaker verification (SV) based on Dense-Residual networks, which combine the advantages of the densely connected convolutional network (DenseNet) and the residual network (ResNet). Unlike a model ensemble strategy, which merges the outputs of multiple systems, the proposed Dense-Residual networks perform feature fusion within every basic DenseR building block. Specifically, two types of DenseR blocks are designed. The sequential-DenseR block is constructed by densely connecting the stacked basic units inside a residual block of ResNet. The parallel-DenseR block applies split and concatenation operations to residual and dense components via their corresponding skip connections. These building blocks are stacked into deep networks to exploit complementary information from different receptive field sizes and growth rates. Extensive experiments are conducted on the VoxCeleb1 dataset to evaluate the proposed methods. With similar model complexity, the proposed Dense-Residual networks outperform the corresponding ResNet, DenseNet, and their fusions by a significant margin in SV performance.
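To make the two block structures concrete, the following is a minimal PyTorch sketch of a sequential-DenseR block (dense connections among the stacked units inside a residual skip) and a parallel-DenseR block (channel split into residual and dense branches, then concatenation). The channel width, growth rate, number of basic units, and the 1x1 fusion convolutions are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch only: layer widths and counts are assumed, not taken from the paper.
import torch
import torch.nn as nn


class SequentialDenseR(nn.Module):
    """Residual block whose stacked basic units are densely connected:
    each unit sees the concatenation of the block input and all previous
    unit outputs, and a residual skip wraps the whole block."""

    def __init__(self, channels: int, growth_rate: int = 16, num_units: int = 3):
        super().__init__()
        self.units = nn.ModuleList()
        in_ch = channels
        for _ in range(num_units):
            self.units.append(nn.Sequential(
                nn.BatchNorm2d(in_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            in_ch += growth_rate  # dense connectivity widens the next unit's input
        # 1x1 transition back to the input width so the residual addition matches
        self.transition = nn.Conv2d(in_ch, channels, kernel_size=1, bias=False)

    def forward(self, x):
        features = [x]
        for unit in self.units:
            features.append(unit(torch.cat(features, dim=1)))  # dense connections
        return x + self.transition(torch.cat(features, dim=1))  # residual skip


class ParallelDenseR(nn.Module):
    """Splits the input channels into a residual branch and a dense branch,
    processes them in parallel, and concatenates the results."""

    def __init__(self, channels: int, growth_rate: int = 16):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        # residual component (identity skip added in forward)
        self.res_branch = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False),
        )
        # dense component producing growth_rate new feature maps
        self.dense_branch = nn.Sequential(
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, growth_rate, kernel_size=3, padding=1, bias=False),
        )
        # 1x1 fusion restores the original channel count after concatenation
        self.fuse = nn.Conv2d(2 * half + growth_rate, channels, kernel_size=1, bias=False)

    def forward(self, x):
        res_in, dense_in = torch.chunk(x, 2, dim=1)                             # split
        res_out = res_in + self.res_branch(res_in)                              # residual skip
        dense_out = torch.cat([dense_in, self.dense_branch(dense_in)], dim=1)   # dense skip
        return self.fuse(torch.cat([res_out, dense_out], dim=1))                # concatenation


if __name__ == "__main__":
    x = torch.randn(2, 32, 64, 200)  # (batch, channels, frequency, time) feature maps
    print(SequentialDenseR(32)(x).shape)  # torch.Size([2, 32, 64, 200])
    print(ParallelDenseR(32)(x).shape)    # torch.Size([2, 32, 64, 200])
```

Because both sketched blocks preserve the input tensor shape, they can be stacked in place of standard ResNet or DenseNet blocks to build the deeper networks described above.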