Singapore Institute of Technology

Effective Exploitation of Posterior Information for Attention-Based Speech Recognition

journal contribution
posted on 2024-04-03, 06:44 authored by Jian Tang, Junfeng Hou, Yan Song, Li-Rong Dai, Ian McLoughlin

End-to-end attention-based modeling is increasingly popular for tackling sequence-to-sequence mapping tasks. Traditional attention mechanisms utilize prior input information to derive attention, which then conditions the output. However, we believe that knowledge of posterior output information may convey some advantage when modeling attention. A technique recently proposed for machine translation, the posterior attention model (PAM), demonstrates that posterior output information can be used in this way. This paper explores the use of posterior information for attention modeling in an automatic speech recognition (ASR) task. We demonstrate that direct application of PAM to ASR is unsatisfactory, due to two deficiencies. Firstly, PAM adopts attention-based weighted single-frame output prediction by assuming a single focused attention variable, whereas wider contextual information from acoustic frames is important for output prediction in ASR. Secondly, in addition to the well-known exposure bias problem, PAM introduces additional mismatches between attention calculations at training and inference time. We present extensive experiments combining a number of alternative approaches to solving these problems, leading to a high-performance technique which we call extended PAM (EPAM). To counter the first deficiency, EPAM modifies the encoder to introduce additional context information for output prediction. The second deficiency is overcome in EPAM through a two-part solution comprising a mismatch penalty term and an alternate learning strategy. The former applies a divergence-based loss to correct the distribution mismatch, while the latter employs a novel update strategy that introduces iterative inference steps alongside each training step. In experiments on both the WSJ-80hrs and Switchboard-300hrs datasets we found significant performance gains. For example, the full EPAM system achieved a word error rate (WER) of 10.6% on the WSJ eval92 test set, compared to 11.6% for traditional prior-attention modeling. Meanwhile, on the Switchboard eval2000 test set, we achieved 16.3% WER, compared to 17.3% for the traditional method.
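
To give a concrete flavour of the idea, the short PyTorch sketch below is illustrative only: it is not taken from the paper, and the tensor names, shapes, projections and the specific KL form are assumptions made for illustration. It shows how posterior-style attention can re-weight a prior attention distribution by frame-wise output probabilities, and how a divergence-based penalty between training-time and inference-time attention might be formed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes: T acoustic frames, hidden width H, vocabulary size V (all made up).
T, H, V = 50, 256, 1000
att_proj = nn.Linear(H, H)   # attention projection (hypothetical)
out_proj = nn.Linear(H, V)   # frame-wise output projection (hypothetical)

enc_out = torch.randn(T, H)  # encoder outputs for one utterance
dec_state = torch.randn(H)   # current decoder state
y = 42                       # index of the token emitted at this step

# Prior attention: score each encoder frame against the decoder state.
scores = enc_out @ att_proj(dec_state)               # (T,)
prior_att = F.softmax(scores, dim=0)                 # p(s), prior over frames

# Frame-wise output distributions p(y | s).
frame_probs = F.softmax(out_proj(enc_out), dim=-1)   # (T, V)

# Posterior attention: p(s | y) is proportional to p(y | s) * p(s), i.e. the prior
# re-weighted by how well each frame explains the token actually emitted.
post_att = frame_probs[:, y] * prior_att
post_att = post_att / post_att.sum()

# A divergence-based mismatch penalty of the kind alluded to above could compare the
# attention distribution used during training with the one obtained when decoding
# from the model's own predictions (both of shape (T,)).
def kl_penalty(p, q, eps=1e-8):
    return torch.sum(p * torch.log((p + eps) / (q + eps)))
```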

History

Journal/Conference/Book title

IEEE Access

Publication date

2020-06-11

Version

  • Published
