Journal of Computer Science

Performance Analysis of LDA, AdaBoost and Ensemble Bag Classifiers for Automatically Recognizing Nine Common Facial Expressions

M.D. Sumithra and M. Abdul Rahiman

DOI: 10.3844/jcssp.2018.1226.1237

Volume 14, Issue 9

Pages 1226-1237


Facial expression recognition holds a prominent role in today's digital world, with Human–Computer Interaction becoming part of everyday life. Successful identification of facial expressions requires the extraction of descriptive attributes from active facial patches and accurate classification. This paper presents a comparison of three multi-class classifiers: Linear Discriminant Analysis (LDA), AdaBoost and Ensemble Bag. Nine common facial expressions are recognized and classified: happiness, sadness, anger, fear, disgust, surprise, confusion, neutrality and sleepiness. The feature descriptors are formed by combining Local Binary Patterns (LBP) and the Grey Level Co-occurrence Matrix (GLCM). The LBP operator makes the descriptor robust to illumination variation, while the GLCM, which captures second-order textural information, proves to be a good complementary descriptor. Twenty-one active facial patches are extracted around the facial landmarks: the eyebrows, irises, nose, sides of the nose and lip corners. Feature vectors are generated only for these twenty-one patches, which considerably reduces the dimensionality of the feature vectors used for classification. Each classifier is trained on a training set consisting of feature vectors and the corresponding expression labels of the training images. After training, testing was performed and the recognition accuracy was analyzed. The experiments were conducted on the facial expression databases JAFFE and YALE. The proposed method obtained an accuracy of 98.03% for recognizing the nine expressions.
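To make the descriptor construction concrete, the following is a minimal illustrative sketch, not the authors' implementation: a basic 8-neighbour, radius-1 LBP, a single horizontal-offset GLCM statistic (contrast), and their concatenation into a per-patch feature vector. The function names (`lbp_8_1`, `glcm_contrast`, `patch_descriptor`), the patch size, the quantisation level and the choice of a single GLCM property are all assumptions made for the example; the paper's full pipeline combines such vectors over twenty-one facial patches.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour, radius-1 Local Binary Pattern (no interpolation)."""
    # Offsets of the 8 neighbours, clockwise from the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    core = img[1:-1, 1:-1]
    codes = np.zeros_like(core, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        # Set the bit where the neighbour is at least as bright as the centre.
        codes |= (nbr >= core).astype(np.uint8) << bit
    return codes

def glcm_contrast(img, levels=8):
    """Contrast of a horizontal-offset Grey Level Co-occurrence Matrix."""
    q = img.astype(np.int64) * levels // 256          # quantise to `levels` bins
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                               # count horizontal pairs
    glcm /= glcm.sum()                                # normalise to probabilities
    ii, jj = np.indices((levels, levels))
    return float(((ii - jj) ** 2 * glcm).sum())       # second-order contrast

def patch_descriptor(patch):
    """Concatenate an LBP histogram and a GLCM statistic for one facial patch."""
    hist, _ = np.histogram(lbp_8_1(patch), bins=256, range=(0, 256))
    hist = hist / max(hist.sum(), 1)                  # normalised LBP histogram
    return np.concatenate([hist, [glcm_contrast(patch)]])

# Example: descriptor for one random 16x16 "patch" standing in for a real crop.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (16, 16), dtype=np.uint8)
vec = patch_descriptor(patch)
print(vec.shape)  # (257,)
```

Because each patch yields a short fixed-length vector, stacking the vectors of the twenty-one patches gives a compact descriptor for the whole face, which is what keeps the classifier input dimensionality low.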


© 2018 M.D. Sumithra and M. Abdul Rahiman. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.