DinoWithCurls/EmotionDetectionCNN


Emotion Detection in Humans using CNN

Technologies Used

  1. Python
  2. OpenCV
  3. TensorFlow-GPU (depending on GPU availability)
  4. NVIDIA CUDA Toolkit (depending on GPU availability)

Dataset Used

FER_Modified : https://www.kaggle.com/srinivasbece/fer-modified

Group Members

  1. Syed Qausain Huda
  2. Swadha Kumar
  3. Saunak Das
  4. Aditya Raj Singh

Description

This is an implementation of a Convolutional Neural Network (CNN) for detecting emotions in human faces, supplied either as a static image or as live video from a webcam. The model is trained before use and is then given images for prediction; the predicted value determines which emotion the person is displaying. Our model has one input layer, four hidden layers, and two fully-connected (dense) layers. With this architecture, we achieved around 97% testing accuracy and around 78% validation accuracy. We use TensorFlow and Keras functions and models for our purpose.

Testing and Validation Graphs

For training at 20 epochs

[graph: accuracy/loss over 20 epochs]

For training at 50 epochs

[graph: accuracy/loss over 50 epochs]

For training at 100 epochs

[graph: accuracy/loss over 100 epochs]

Results we got

[result screenshots: res1, res2]

Future Scope

  1. Our training was done only on static images containing faces. We could also train the model on video.
  2. Our model is currently not great with side profiles. We could either train it on side profiles as well, or use better face-detection libraries for the purpose.
  3. Our model does not work properly in poor lighting. We need to work on that.
  4. We could extend this model to other forms of media as well, such as audio, video, and text.
