BenAV: A Bengali Audio-Visual Corpus for Visual Speech Recognition

Ashish Pondit, Muhammad Eshaque Ali Rukon, Anik Das, Ashad Kabir

Research output: Conference paper in published conference proceedings (peer-reviewed)

Abstract

Visual speech recognition (VSR) is a challenging task. It has many applications, such as facilitating speech recognition when acoustic data is noisy or missing and assisting hearing-impaired people. Modern VSR systems require a large amount of data to achieve good performance. Popular VSR datasets are mostly available for the English language, and none exist for Bengali. In this paper, we present a large-scale Bengali audio-visual dataset named "BenAV". To the best of our knowledge, BenAV is the first publicly available large-scale dataset in the Bengali language. BenAV contains a lexicon of 50 words from 128 speakers, with a total of 26,300 utterances. We have also applied three existing deep learning based VSR models to provide baseline performance on our BenAV dataset. We ran extensive experiments on two different configurations of the dataset to study the robustness of those models and achieved 98.70% and 82.5% accuracy, respectively. We believe that this research provides a basis for developing Bengali lip reading systems and opens the door to further research on this topic.
Original language: English
Title of host publication: The 28th International Conference on Neural Information Processing (ICONIP2021)
Publisher: Springer
Number of pages: 10
Publication status: Accepted/In press - 26 Sep 2021

