pith. machine review for the scientific record.

arxiv: 1811.05250 · v2 · submitted 2018-11-13 · cs.CL · cs.CV · cs.SD · eess.AS


Modality Attention for End-to-End Audio-visual Speech Recognition

authors on Pith: no claims yet
keywords: recognition · speech · attention · audio-visual · method · modality · end-to-end · multimodal
original abstract

Audio-visual speech recognition (AVSR) is considered one of the most promising approaches to robust speech recognition, especially in noisy environments. In this paper, we propose a novel multimodal attention-based method for audio-visual speech recognition that automatically learns a fused representation of both modalities based on their importance. Our method is realized using state-of-the-art sequence-to-sequence (Seq2seq) architectures. Experimental results show relative improvements of 2% up to 36% over the auditory modality alone, depending on the signal-to-noise ratio (SNR). Compared to traditional feature-concatenation methods, the proposed approach achieves better recognition performance under both clean and noisy conditions. We believe the modality-attention-based end-to-end method can be readily generalized to other multimodal tasks with correlated information.
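The core idea in the abstract — weighting the audio and visual streams by learned importance before fusing them — can be sketched as follows. This is a minimal illustration, not the paper's architecture: the per-frame dot-product scoring, the parameter names, and the assumption of time-aligned equal-dimension features are all simplifications introduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modality_attention(audio_feat, visual_feat, w_a, w_v):
    """Fuse aligned audio/visual features by per-frame modality weights.

    audio_feat, visual_feat: (T, d) feature sequences.
    w_a, w_v: (d,) scoring vectors -- hypothetical stand-ins for the
    learned attention parameters in the paper.
    Returns the fused (T, d) sequence and the (T, 2) modality weights.
    """
    # One scalar score per frame and modality, then softmax across
    # the two modalities so the weights sum to 1 at each frame.
    scores = np.stack([audio_feat @ w_a, visual_feat @ w_v], axis=1)  # (T, 2)
    weights = softmax(scores, axis=1)
    fused = weights[:, 0:1] * audio_feat + weights[:, 1:2] * visual_feat
    return fused, weights

rng = np.random.default_rng(0)
T, d = 5, 8
fused, w = modality_attention(
    rng.normal(size=(T, d)), rng.normal(size=(T, d)),
    rng.normal(size=d), rng.normal(size=d),
)
```

In a noisy environment the learned scores would push the weights toward the visual stream, which is the behavior the abstract credits for the 2%–36% relative gains over audio alone; plain feature concatenation has no such mechanism.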

This paper has not been read by Pith yet.
