A project that classifies emotions such as happiness, sadness, and anger from speech using MFCC features and machine learning models, with visualizations of the audio features and model performance.
Updated Dec 28, 2024 - Jupyter Notebook
Speech emotion recognition system trained on multiple public datasets (RAVDESS, CREMA‑D, TESS, MELD, etc.). Uses audio features and deep learning to classify emotions (anger, happiness, sadness, neutrality, fear, etc.), yielding more robust and generalizable models.
PyVoiceMood is an interactive voice-based assistant that detects only the emotional sentiment of spoken input and prints the result.