- Richard Gao @MrFlyingPizza
- Jooyoung (Julia) Lee @jylee2033
- Calvin Weng @yamikazoo
- Aarham Haider @AarhamH
- Abrar Rahman @abr-rhmn
| Project report |
|---|
## Sample Charts

## Repository Structure

```
repository
├── src                   ## source code of the package itself
│   ├── data              ## data preprocessing, feature engineering, and the EEGDataSet class
│   ├── model             ## the 1-D CNN class and training code
│   ├── utils             ## utility functions
│   ├── config.py         ## paths and hyperparameters used by the learning model
│   ├── main.py           ## main driver
│   └── run.py            ## Gradio deliverable
├── README.md             ## you are here
└── requirements.txt      ## Python dependencies
```

## Installation
1. Create the virtual environment:

   ```
   python -m venv .venv
   ```

2. Activate the virtual environment (this may take a while on CSIL):

   macOS / Linux:

   ```
   source .venv/bin/activate
   ```

   Windows:

   ```
   .venv\Scripts\activate
   ```

3. Install the dependencies from the requirements file (there are many, so this may take additional time). The file is in this repo, so `cd` into the repo first:

   ```
   pip install -r requirements.txt
   ```

4. If you update the dependencies, remember to freeze them and commit the changes to `requirements.txt` every time:

   ```
   pip freeze > requirements.txt
   ```
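After activating, you can confirm that Python is actually running inside the virtual environment. This is a small stdlib-only check for convenience, not part of the project's code:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if __name__ == "__main__":
    print("virtualenv active:", in_virtualenv())
```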
## Dataset

Clone the dataset repository:

```
git clone https://github.com/huytungst/EEGEmotions-27.git
cd EEGEmotions-27
```

Move the extracted-features CSV into this project's `src/` directory.

Windows PowerShell:

```
Move-Item -Path "path\to\EEGEmotions-27\training\eeg_features_extracted.csv" -Destination "path\to\2025_3_project_06\src\"
```

Linux / macOS:

```
mv /path/to/EEGEmotions-27/training/eeg_features_extracted.csv /path/to/2025_3_project_06/src/
```

If you wish to rename the file, or change the path overall, you have to change the `CSV_FILE_PATH` parameter under `src/config.py`:
```python
class Config:
    ...
    CSV_FILE_PATH = "eeg_features_extracted.csv"  # <---- CHANGE YOUR .csv PATH HERE
    ...
```

## Training

To train the model, simply `cd` into the repo and run:

```
python src/main.py
```

After training has completed, a set of charts will be created under `plots/` showing accuracy and loss for the training and validation loops, as well as a confusion matrix.
Data can be found at: https://github.com/huytungst/EEGEmotions-27
Output will be saved in `best_cnn_model.pth` (the weights of the best-performing model).
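As a quick sanity check after a run, you can verify that the expected artifacts were produced. This is a hypothetical helper, not part of the repo; it only assumes the output names mentioned above (`best_cnn_model.pth` and the `plots/` directory):

```python
from pathlib import Path

def training_artifacts(root: str) -> dict:
    # Report which of the expected training outputs exist under the project root.
    root_p = Path(root)
    return {name: (root_p / name).exists()
            for name in ("best_cnn_model.pth", "plots")}
```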
## Gradio Web Application

After training the model, you can use the interactive Gradio web application to make predictions on new EEG data.
1. Ensure your virtual environment is activated (see step 2 in Installation).

2. From the project root directory, run:

   ```
   python src/run.py
   ```

3. The application will start and display a local URL (typically http://127.0.0.1:7860).

4. Open the URL in your web browser.
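If you want to script against the app, or just check that it launched, a minimal stdlib probe of the local URL could look like this. The port is Gradio's default and may differ on your machine:

```python
from urllib.request import urlopen

def app_is_up(url: str = "http://127.0.0.1:7860") -> bool:
    # Returns True if something answers with HTTP 200 at the given URL.
    try:
        with urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False
```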
The application has two tabs:
Upload your own EEG data files to get emotion predictions.
- Download Sample Dataset: Click the download button to get a sample CSV file (`sample_eeg_dataset.csv`) that demonstrates the correct format
- Upload EEG Data File: Upload a CSV file with pre-extracted EEG features
- Must contain 14 channels with ~35 features each per sample
- Asymmetry features are computed automatically
- Supported formats: `.csv`
- Results: The model will display:
- Predicted emotion probabilities for the first sample (top 5 emotions shown)
- Confidence scores for all uploaded samples (up to 10 samples shown in detail)
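A quick way to check the 14-channel requirement before uploading is to parse the channel index out of each column name. This is an illustrative stdlib snippet, assuming the `<feature>_<channel>` column naming described in the file-format notes:

```python
def channels_in_header(header):
    # Collect channel indices from "<feature>_<channel>" column names,
    # e.g. "mean_3" -> channel 3. Columns without a numeric suffix are ignored.
    channels = set()
    for col in header:
        stem, _, suffix = col.rpartition("_")
        if stem and suffix.isdigit():
            channels.add(int(suffix))
    return channels

# A file intended for the app should yield channels 1 through 14:
# channels_in_header(header) == set(range(1, 15))
```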
Browse predictions on the training dataset to see how the model performs.
- Use the slider to select a sample index (0 to number of samples - 1)
- View:
- Ground truth emotion label from the training data
- Model's predicted emotion
- Prediction confidence scores (top 5 emotions)
- Participant ID and Cowen label information
Your uploaded CSV file should match the training data format:
- Columns: Features for 14 EEG channels (e.g., `min_1,max_1,mean_1,ar1_1, ..., min_14,max_14`, etc.)
- Rows: Each row represents one EEG sample
- No labels needed: The model will predict the emotion labels
Tip: Download and examine the sample dataset from the web interface to see the exact format required.
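To see what a minimally valid file could look like, here is a sketch that writes a toy CSV following the `<feature>_<channel>` naming above. The four feature names are stand-ins chosen for illustration; the real extraction produces roughly 35 features per channel:

```python
import csv
import io
import random

FEATURE_NAMES = ["min", "max", "mean", "ar1"]  # stand-ins for the ~35 real features
N_CHANNELS = 14

def make_sample_csv(n_rows=2):
    # Build a toy CSV string whose header follows the <feature>_<channel> pattern.
    header = [f"{feat}_{ch}"
              for ch in range(1, N_CHANNELS + 1)
              for feat in FEATURE_NAMES]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    for _ in range(n_rows):
        writer.writerow([round(random.uniform(-1.0, 1.0), 4) for _ in header])
    return buf.getvalue()
```

The download button in the web interface remains the authoritative reference for the exact column set.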