This project is the source code for the bachelor thesis titled EmotionWave: GAN-AUGMENTED CUSTOM DATASET USING EEG HEADSET FOR EMOTION RECOGNITION.
The repository is available on GitHub at https://github.com/andreighinea1/BachelorThesis.
This guide will help you set up the environment for EEG Emotion Recognition using Miniconda, FFMPEG, cuDNN, and PyTorch.
This project focuses on EEG emotion recognition through advanced machine learning techniques. It includes the following capabilities:
- Dataset Collection: Using prepared `.mp4` videos categorized by emotion (negative, neutral, positive).
- Model Training and Evaluation: Includes baseline training on the SEED dataset and on the custom EmotionWave dataset.
- GAN Implementation: Utilizes GANs for data augmentation.
All model code is located in the `model` directory, and the paths mentioned below are relative to this directory.
For access to the EmotionWave dataset, contact the project owner at andreidezvoltator@gmail.com.
Before diving into the installation steps, ensure you have the following prerequisites:
- Miniconda
- FFMPEG
- cuDNN
- Git
- PyCharm Professional with Jupyter Notebook extension (VSCode is also possible but not covered here)
Instead of Conda, you may also use a Python virtual environment, but this guide focuses on Conda.
First, clone the repository to your local machine using Git.
```bash
git clone https://github.com/andreighinea1/BachelorThesis.git
cd BachelorThesis
```

Follow the instructions on the official Miniconda installation page for your operating system.
Download the Miniconda installer for Windows and run it. Follow the installation instructions provided on the page.
Download the Miniconda installer for Linux and run the following commands in your terminal:
```bash
bash Miniconda3-latest-Linux-x86_64.sh
```

Follow the prompts to complete the installation.
Visit the FFmpeg download page to get the latest release.
- Download the release for Windows.
- Extract the downloaded file.
- Add the `bin` directory to your system's PATH environment variable:
  - Open the Start Search, type "env", and select "Edit the system environment variables".
  - Click the "Environment Variables" button.
  - Under "System variables", find the `Path` variable and click "Edit".
  - Click "New" and add the path to the `bin` directory of your extracted FFMPEG folder.
  - Click "OK" to save and exit.
- Download the release for Linux.
- Extract the downloaded file.
- Add the `bin` directory to your system's PATH environment variable:

  ```bash
  export PATH=/path/to/ffmpeg/bin:$PATH
  ```

- To make this change permanent, add the above line to your `.bashrc` or `.zshrc` file.
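To confirm that FFMPEG is actually visible on your PATH, a quick Python check can be used (a minimal sketch; the helper name is illustrative, not part of the project):

```python
import shutil

def ffmpeg_available():
    """Return the path to the ffmpeg executable if it is on PATH, else None."""
    return shutil.which("ffmpeg")

if __name__ == "__main__":
    path = ffmpeg_available()
    print(f"ffmpeg found at: {path}" if path else "ffmpeg is not on PATH")
```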
Follow the installation instructions from the official cuDNN installation guide for your operating system.
Refer to the cuDNN installation guide for Windows.
Refer to the cuDNN installation guide for Linux.
```bash
conda create --name eeg_env python=3.11
conda activate eeg_env
```

On older conda installations (typically on Linux), use `source activate eeg_env` instead of `conda activate eeg_env`.

Visit the PyTorch Get Started page.
- Select the following options:
  - PyTorch Build: Stable (2.3.1)
  - Your OS: Choose your operating system
  - Package: Conda
  - Language: Python
  - Compute Platform: CUDA 12.1
- Copy the provided installation command and run it in your terminal. For example:
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```
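After installing, you can verify that PyTorch can see your GPU with a short check (a sketch; it degrades gracefully if PyTorch is not installed yet):

```python
def describe_torch_setup():
    """Return a short status string about the PyTorch / CUDA installation."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        return f"PyTorch {torch.__version__}, CUDA device: {torch.cuda.get_device_name(0)}"
    return f"PyTorch {torch.__version__} installed, CUDA not available"

if __name__ == "__main__":
    print(describe_torch_setup())
```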
Ensure you are in the activated conda environment and install the other required packages from `requirements.txt`:

```bash
pip install -r requirements.txt
```

- Prepare Videos:
  - Place `.mp4` videos in `dataset_collection/videos/{EMOTION}` directories, with `{EMOTION}` being one of `negative`, `neutral`, `positive`. Name the videos `1.mp4`, `2.mp4`, `3.mp4`, etc.
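The expected directory layout can be sanity-checked with a short script (a sketch; `check_video_layout` is a hypothetical helper, while the directory names come from the step above):

```python
from pathlib import Path

EMOTIONS = ("negative", "neutral", "positive")  # directory names used by the notebook

def check_video_layout(root="dataset_collection/videos"):
    """Return {emotion: [video file names]} for each expected emotion directory."""
    root = Path(root)
    return {e: sorted(p.name for p in (root / e).glob("*.mp4")) for e in EMOTIONS}

if __name__ == "__main__":
    for emotion, videos in check_video_layout().items():
        print(f"{emotion}: {len(videos)} video(s) -> {videos}")
```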
- Run Jupyter Notebook:
  - Open and run `dataset_collection/notebook_dataset_collection.ipynb`.
  - This notebook concatenates the videos for each emotion and creates segments (choose the segment length; the default is 4 minutes).
  - Uncomment the line `# experiment.run_experiment()` to show the video and experiment setup.
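Conceptually, the concatenate-then-segment step can be sketched with plain FFMPEG CLI calls built from Python (illustrative only; the notebook's actual implementation may differ, and the function name and paths here are hypothetical):

```python
from pathlib import Path

def build_ffmpeg_commands(list_file, out_dir, segment_seconds=240):
    """Build (but do not run) two ffmpeg commands: concatenate, then split.

    `list_file` is a text file for ffmpeg's concat demuxer, one line per video:
        file '1.mp4'
        file '2.mp4'
    240 seconds matches the guide's default 4-minute segment length.
    """
    out_dir = Path(out_dir)
    concatenated = out_dir / "concatenated.mp4"
    concat_cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
                  "-i", str(list_file), "-c", "copy", str(concatenated)]
    segment_cmd = ["ffmpeg", "-i", str(concatenated), "-c", "copy",
                   "-f", "segment", "-segment_time", str(segment_seconds),
                   str(out_dir / "segment_%03d.mp4")]
    return concat_cmd, segment_cmd
```

Each returned list can be passed to `subprocess.run` once the input list file exists.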
- SEED Dataset:
  - Use `notebook.ipynb` to load the SEED dataset, perform data augmentation, pre-training, and fine-tuning, and evaluate the model.
- Custom Dataset (EmotionWave):
  - Use `notebook_my_dataset_processing.ipynb` to load the EmotionWave dataset, run GANs for each channel, perform data augmentation, pre-training, and fine-tuning, and evaluate the model.
  - Uncomment the line `# gan_manager.initialize_and_train_gans()` to train the GANs. By default, GANs are loaded with `gan_manager.load_gan_models(epochs)`.
- Hyperparameter Tuning:
  - Use `notebook_testing_values.ipynb` for hyperparameter tuning to find the best values for training the model.
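At its core, a hyperparameter sweep enumerates every combination of candidate values and keeps the best-scoring configuration; a minimal sketch (the parameter names and values below are illustrative, not the ones used in the notebook):

```python
from itertools import product

# Hypothetical search space; the real candidates live in notebook_testing_values.ipynb
GRID = {"learning_rate": [1e-4, 1e-3], "batch_size": [32, 64], "epochs": [10]}

def iter_configs(grid):
    """Yield one {name: value} dict per combination of hyperparameter values."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    for cfg in iter_configs(GRID):
        print(cfg)  # train/evaluate with cfg, then keep the best-scoring one
```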
For development, it's recommended to use PyCharm Professional with the Jupyter Notebook extension. If you prefer using VSCode, it is possible but not covered in this guide.
A detailed scientific paper documenting the methodology, results, and discussion for this thesis is included in the repository:
`Ghinea_AndreiRobert_Documentation_CTIEN_Licence - no signature.pdf`: the final thesis with all figures, findings, and references.
The original LyX source file (`BachelorThesis.lyx`) is also provided in the same directory.
This project is licensed under the MIT License. See the LICENSE file for details.
You have now set up your environment and are ready to work on EEG Emotion Recognition. If you encounter any issues, refer to the official documentation of the respective tools or libraries.