TY - JOUR
AU - Shukla, Tushar Dhar
AU - Ahmad, Oqail
AU - Sapra, Puneet
AU - Tamma, Ratna Kumari
AU - Kamepalli, Sujatha
AU - Malarvizhi, C.
AU - Bhoopathy, V.
PY - 2025
TI - An Innovative Method for Recognizing Face Expressions Based on Genetic Algorithm and Extreme Learning Based Hybrid Model
JF - Journal of Computer Science
VL - 21
IS - 2
DO - 10.3844/jcssp.2025.388.398
UR - https://thescipub.com/abstract/jcssp.2025.388.398
AB - The detection of Facial Action Coding System Action Units (AUs) and the categorization of discrete emotional states from facial expressions have been a major focus of computer science for more than 20 years. While several widely used facial expression databases exist, standardization and comparability remain key challenges: the lack of a universally recognized assessment procedure and insufficient information to replicate documented outcomes hinder progress and comparison in the field. To address these challenges, this research proposes a periodic challenge in facial expression recognition to facilitate fair comparisons and provide insight into the field's advancements, obstacles, and focal points. Additionally, a novel preprocessing method is introduced to efficiently remove illumination effects from facial images; Gabor filters are applied during preprocessing to improve image quality for subsequent digital image processing. The face extraction stage uses the Viola-Jones algorithm to detect faces in input images, followed by ROI segmentation, which estimates the face's dimensions and automatically splits the face into mouth, eye, and forehead regions using projection integrals and image moments. Feature extraction employs the Shi-Tomasi corner detector, and the resulting corner points are used to train an Extreme Learning Machine (ELM), an ELM-CNN, or a Genetic Algorithm-optimized ELM (GA-ELM). The proposed GA-ELM model achieved a recognition accuracy of 96.88%, outperforming state-of-the-art methods such as ELM-CNN and ELM, which achieved accuracies of 90.53% and 86.24%, respectively. This demonstrates the superior performance of the GA-ELM model in facial expression recognition tasks.
ER -
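
A minimal Python sketch of the front end of the pipeline the abstract describes (Gabor preprocessing, Viola-Jones face detection, Shi-Tomasi corner extraction), using OpenCV. This is not the authors' code: all parameter values (Gabor kernel settings, cascade thresholds, corner count) are illustrative assumptions, and the projection-integral ROI splitting and the GA-ELM classifier are omitted.

    import cv2
    import numpy as np

    # Gabor filtering to reduce illumination effects (preprocessing step).
    # Kernel parameters are illustrative, not taken from the paper.
    def gabor_preprocess(gray):
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        return cv2.filter2D(gray, -1, kernel)

    # Viola-Jones face detection via OpenCV's bundled Haar cascade.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_corner_features(image_path, max_corners=68):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = gabor_preprocess(gray)

        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]          # use the first detected face ROI
        roi = gray[y:y + h, x:x + w]

        # Shi-Tomasi corners: cv2.goodFeaturesToTrack uses the Shi-Tomasi
        # minimum-eigenvalue criterion by default.
        corners = cv2.goodFeaturesToTrack(roi, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=5)
        return None if corners is None else corners.reshape(-1, 2)

The returned (x, y) corner coordinates would then serve as the feature vector fed to the ELM, ELM-CNN, or GA-ELM classifier described in the abstract.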