Human emotions play an important role in communication, especially in understanding the emotions of those with speech problems. Various facial emotion recognition and detection systems have been developed, but most of them struggle with multi-class classification and yield lower accuracy. Therefore, this research employed a convolutional neural network (CNN) for the recognition and detection of four basic emotions: happy, sad, angry, and neutral. The dataset used to train the CNN model was obtained locally and comprises about 133 images. Results show that the developed system performed well, with an accuracy of 0.9533, a precision of 0.97, an F1-score of 0.94, and a recall of 0.93. The approach showed a significant improvement over traditional machine learning methods and can serve as a useful tool for visually predicting the emotions of those with speech problems.
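As a rough illustration of the kind of model the abstract describes, the following is a minimal sketch of a small CNN classifier for the four emotion classes, assuming a TensorFlow/Keras implementation, 48x48 grayscale inputs, and an image-folder dataset layout; the authors' actual architecture, input size, and preprocessing are not specified in the abstract and the names below (e.g. the "data/" directory) are hypothetical.

```python
# Hypothetical sketch of a compact CNN for four-class facial emotion
# recognition (happy, sad, angry, neutral). Architecture, input size, and
# data layout are illustrative assumptions, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # happy, sad, angry, neutral
IMG_SIZE = (48, 48)      # assumed grayscale input resolution

def build_model() -> tf.keras.Model:
    """Build a compact CNN suitable for a small (~133 image) dataset."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 1)),
        layers.Rescaling(1.0 / 255),                     # normalize pixels
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                             # regularize small data
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"), # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # "data/" is a hypothetical directory with one sub-folder per emotion class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/", image_size=IMG_SIZE, color_mode="grayscale",
        batch_size=16, label_mode="int")
    model = build_model()
    model.fit(train_ds, epochs=30)
```

With a dataset of this size, heavy regularization (dropout, a small network) or transfer learning would typically be needed to reach accuracies in the reported range; the sketch above only shows the overall structure of such a classifier.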
Author(s): Adebimpe Esan (1), Adedayo Sobowale (2), Janet Jooda (3), Tomilayo Adebiyi (4), Michael Adio (5) and