Advanced Facial Emotion Recognition Using DCNN-ELM: A Comprehensive Approach to Preprocessing, Feature Extraction and Performance Evaluation
- 1 School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
- 2 Department of CSE, M.M. Engineering College, Maharishi Markandeshwar (Deemed to Be University), Mullana, Ambala, Haryana, India
- 3 Department of CSE-AI&ML, GMR Institute of Technology, Rajam, Andhra Pradesh, India
- 4 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
- 5 Department of Electronics and Communication Engineering, Aditya University, Surampalem, India
- 6 Department of AIML, M S Engineering College, Navarathna Agrahara, Sadahalli, Bengaluru, India
- 7 Department of Computer Science and Engineering, Sree Rama Engineering College, Tirupathi, India
Abstract
As a subfield of affective computing, Facial Emotion Recognition (FER) enables computers to infer a person's emotional state from facial expressions. Because facial expressions are estimated to convey as much as 55% of the emotional content of face-to-face communication, FER is crucial for connecting humans and computers, and advances in the field also improve how computing and robotic systems interact with or assist humans. Deep learning drives much of the highly advanced research in this area. Recent FER work typically adopts Ekman's set of basic emotions, extended with a neutral class to give seven categories: Anger, Disgust, Fear, Happy, Sad, Surprise, and Neutral. (Robert Plutchik's emotion wheel, another widely used model, instead arranges primary emotions in opposing pairs, placing each emotion opposite its polar counterpart.) The proposed method comprises four steps: preprocessing, feature extraction, model performance evaluation, and finalization. The preprocessing step applies a kernel filter, and feature extraction uses Stepwise Linear Discriminant Analysis (SWLDA). FER is critical for improving human-computer interaction, particularly in educational settings, and this study presents a novel hybrid architecture that combines a Deep Convolutional Neural Network (DCNN) with an Extreme Learning Machine (ELM) to enhance emotion recognition accuracy. The hybrid model outperforms both a standalone DCNN and a standalone ELM, and it offers real-time emotion detection in online learning environments.
The methodology's efficacy is demonstrated on publicly available datasets, establishing a new benchmark for FER, particularly in educational settings.
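To illustrate the classification stage described above, the following is a minimal NumPy sketch of an Extreme Learning Machine operating on feature vectors such as those a DCNN would produce. All dimensions, variable names, and the synthetic data are illustrative assumptions, not the paper's actual configuration: an ELM fixes random hidden-layer weights and solves for the output weights in closed form via a pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, T, n_hidden=64):
    """Train an ELM: random (untrained) hidden layer, output weights
    solved in closed form by least squares on the hidden activations."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights, never updated
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                 # Moore-Penrose solution for output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Forward pass: hidden activations times learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for DCNN features: 200 samples, 32-dim, 7 emotion classes
X = rng.normal(size=(200, 32))
labels = rng.integers(0, 7, size=200)
T = np.eye(7)[labels]                            # one-hot class targets

W, b, beta = train_elm(X, T)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
accuracy = (pred == labels).mean()
```

Because only `beta` is learned, and in a single linear-algebra step, training is much faster than backpropagation, which is what makes the ELM attractive as the classification head on top of DCNN features for real-time use.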
DOI: https://doi.org/10.3844/jcssp.2025.13.24
Copyright: © 2025 Boopalan K., Satyajee Srivastava, K. Kavitha, D. Usha Rani, K. Jayaram Kumar, M. V. Jagannatha and V. Bhoopathy. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Facial Emotion Recognition (FER)
- Linear Discriminant Analysis (LDA)
- Extreme Learning Machine (ELM)
- Deep Convolutional Neural Network (DCNN)