An emotion recognition model based on facial recognition in virtual learning environment

D. Yang, Abeer Alsadoon, P. W.C. Prasad, A. K. Singh, A. Elchouemi

Research output: Contribution to journal › Article › peer-review

138 Citations (Scopus)
1664 Downloads (Pure)

Abstract

The purpose of this study is to introduce a method based on facial recognition to identify students' understanding throughout the distance learning process. The study proposes a learning emotion recognition model consisting of three stages: feature extraction, feature subset selection, and emotion classification. A Haar Cascades method detects the face in the input image as the basis for extracting the eye and mouth regions, and Sobel edge detection is then applied to obtain the feature values. A neural network classifier is trained to distinguish six emotion categories. Experiments on the JAFFE database show that the proposed method achieves high classification performance. Experimental results show that the model proposed in this paper is consistent with the expressions observed in the learning situations of students in virtual learning environments. This paper demonstrates that emotion recognition based on facial expressions is feasible in distance education, permitting identification of a student's learning status in real time. It can therefore help teachers adjust their teaching strategies in virtual learning environments according to students' emotions.
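The paper does not include an implementation, but a minimal sketch of the described pipeline (Haar cascade detection of the face, eyes and mouth, Sobel edge features, and a small neural network classifier) could look like the Python code below. The specific cascade files, patch size, emotion labels, and network architecture are illustrative assumptions and are not taken from the paper; OpenCV and scikit-learn are used here only as convenient stand-ins.

import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

# Pretrained Haar cascade files bundled with OpenCV (an assumption; the paper
# does not name the specific cascades it uses).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

# Six illustrative emotion classes; the paper's exact label set may differ.
EMOTIONS = ["happy", "sad", "angry", "surprised", "fearful", "neutral"]

def sobel_features(patch, size=(32, 32)):
    """Resize a grayscale patch and return its Sobel edge magnitudes as a flat vector."""
    patch = cv2.resize(patch, size)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy).flatten()

def extract_features(image_bgr):
    """Detect the face, locate eye and mouth regions, and concatenate their Sobel features."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # Search for eyes in the upper half of the face and the mouth in the lower half.
    upper, lower = face[: h // 2], face[h // 2 :]
    eyes = eye_cascade.detectMultiScale(upper, minNeighbors=5)
    mouths = mouth_cascade.detectMultiScale(lower, minNeighbors=15)
    eye_patch = upper
    if len(eyes):
        ex, ey, ew, eh = eyes[0]
        eye_patch = upper[ey:ey + eh, ex:ex + ew]
    mouth_patch = lower
    if len(mouths):
        mx, my, mw, mh = mouths[0]
        mouth_patch = lower[my:my + mh, mx:mx + mw]
    return np.concatenate([sobel_features(eye_patch), sobel_features(mouth_patch)])

# Training on a labelled expression set such as JAFFE (image paths and labels are placeholders):
#   X = np.stack([extract_features(cv2.imread(p)) for p in image_paths])
#   y = np.array(labels)  # integer indices into EMOTIONS
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
#   clf.fit(X, y)
#   pred = EMOTIONS[clf.predict(extract_features(cv2.imread("frame.png")).reshape(1, -1))[0]]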

Original language: English
Pages (from-to): 2-10
Number of pages: 9
Journal: Procedia Computer Science
Volume: 125
DOIs
Publication status: Published - 01 Jan 2018
