Abstract
This paper proposes a decision-level fusion strategy for audio-visual speech recognition in noisy environments. The method aims to improve recognition accuracy across different noise conditions by fusing the scores obtained from classifiers trained on different feature sets. In particular, the method is evaluated with three modalities, namely audio, visual, and audio-visual, although it could be employed with as many modalities as needed. The scores are combined by taking into account the reliability of each modality under the different noise conditions. The performance of the proposed recognition system is evaluated on two isolated-word audio-visual databases: a public one and a database compiled by the authors of this paper. The proposed decision-level fusion strategy is also evaluated with different kinds of classifiers. Experimental results show that the proposed system achieves good performance, improving recognition rates over a wide range of signal-to-noise ratios.
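The core idea of reliability-weighted decision-level fusion can be illustrated with a minimal sketch. The abstract does not specify how reliabilities are estimated or how scores are normalised, so the function signature, the weight normalisation, and the example values below are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def fuse_scores(score_sets, reliabilities):
    """Reliability-weighted decision-level fusion (illustrative sketch).

    score_sets:    list of per-modality class-score arrays, each of shape (n_classes,)
    reliabilities: list of non-negative reliability weights, one per modality
                   (e.g. estimated per noise condition)
    Returns the index of the winning class and the fused score vector.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalise weights so they sum to 1
    fused = sum(wi * np.asarray(s, dtype=float)
                for wi, s in zip(w, score_sets))
    return int(np.argmax(fused)), fused

# Hypothetical scores from audio, visual, and audio-visual classifiers
# for a 2-word vocabulary; at low SNR the visual modality is weighted higher.
audio  = [0.6, 0.4]
visual = [0.2, 0.8]
av     = [0.5, 0.5]
word, fused = fuse_scores([audio, visual, av], reliabilities=[0.2, 0.6, 0.2])
```

In this toy case the audio classifier alone would pick word 0, but the reliability weighting lets the visual evidence dominate, so the fused decision is word 1.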