EEGEyeNet

Introduction

EEGEyeNet is a benchmark to evaluate eye-tracking (ET) prediction based on EEG measurements with an increasing level of difficulty.

Overview

The repository consists of general functionality to run the benchmark and custom implementations of different machine learning models. Standard ML models (e.g. kNN, SVR, etc.) can be run on the benchmark; their implementation can be found in the StandardML_Models directory.

Additionally, we implemented a variety of deep learning models, which can be run in both pytorch and tensorflow.

The benchmark consists of three tasks: LR (left-right), Direction (Angle, Amplitude) and Coordinates (x, y).

Installation (Environment)

This benchmark has many dependencies, so we recommend using anaconda as the package manager.

You can install a full environment to run all models (standard machine learning and deep learning models in both pytorch and tensorflow) from the eegeyenet_benchmark.yml file. To do so, run:

conda env create -f eegeyenet_benchmark.yml

Otherwise, you can create a minimal environment that is able to run only the models you want to try (see the following sections).

General Requirements

Create a new conda environment:

conda create -n eegeyenet_benchmark python=3.8.5 

First, install the general requirements:

conda install --file general_requirements.txt 

Pytorch Requirements

If you want to run the pytorch DL models, first install pytorch in the recommended way. For Linux users with GPU support this is:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch 

For other installation types and cuda versions, visit pytorch.org.

Tensorflow Requirements

If you want to run the tensorflow DL models, run

conda install --file tensorflow_requirements.txt 

Standard ML Requirements

If you want to run the standard ML models, run

conda install --file standard_ml_requirements.txt 

These should be installed after pytorch to avoid dependency conflicts that conda would otherwise have to resolve.

Configuration

The model configuration takes place in hyperparameters.py. The training configuration is contained in config.py.

config.py

We start by explaining the settings needed to run the benchmark:

Choose the task to run in the benchmark, e.g.

config['task'] = 'LR_task'

For some tasks we offer data from multiple paradigms. Choose the dataset used for the task, e.g.

config['dataset'] = 'antisaccade'

Choose the preprocessing variant, e.g.

config['preprocessing'] = 'min'

Choose whether to use data preprocessed with the Hilbert transformation. Set to True for the standard ML models:

config['feature_extraction'] = True

Include our standard ML models into the benchmark run:

config['include_ML_models'] = True 

Include our deep learning models into the benchmark run:

config['include_DL_models'] = True

Include your own models as specified in hyperparameters.py. For instructions on how to create your own custom models see further below.

config['include_your_models'] = True

Include dummy models for comparison into the benchmark run:

config['include_dummy_models'] = True

You can either choose to train models or use existing ones in /run/ and perform inference with them. Set

config['retrain'] = True 
config['save_models'] = True 

to train your specified models. Set both to False if you want to load existing models and perform inference. In this case specify the path to your existing model directory under

config['load_experiment_dir'] = path/to/your/model 

In the model configuration section you can specify which framework you want to use. You can run our deep learning models in both pytorch and tensorflow. Just specify it in config.py and make sure you have set up the environment as explained above; everything framework-specific will be handled in the background.

config.py also allows you to configure hyperparameters such as the learning rate, and to enable early stopping of models.
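For example, a run of the LR task with the standard ML models could be configured as sketched below. The first block of keys is taken from the settings explained above; the key names in the last three lines (framework, learning rate, early stopping) are assumptions and should be checked against the keys actually defined in config.py.

config['task'] = 'LR_task'
config['dataset'] = 'antisaccade'
config['preprocessing'] = 'min'
config['feature_extraction'] = True     # Hilbert-transformed data for the standard ML models
config['include_ML_models'] = True
config['include_DL_models'] = False
config['framework'] = 'pytorch'         # assumed key name; 'tensorflow' is the alternative
config['learning_rate'] = 1e-4          # assumed key name
config['early_stopping'] = True         # assumed key name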

hyperparameters.py

Here we define our models. Standard ML models and deep learning models are configured in a dictionary which contains the object of the model and hyperparameters that are passed when the object is instantiated.

You can add your own models in the your_models dictionary. Specify the models for each task separately. Make sure to enable all the models that you want to run in config.py.

Running the benchmark

Create a /runs directory to save files while running models on the benchmark.

benchmark.py

In benchmark.py we load all models specified in hyperparameters.py. Each model is fitted and then evaluated with the scoring function corresponding to the task being benchmarked.
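Conceptually, the benchmark loop looks roughly like the sketch below. The variable and helper names are hypothetical and only illustrate the flow described above; the actual implementation is in benchmark.py.

# Hypothetical sketch of the benchmark loop
for name, (model_class, params) in models.items():
    trainer = model_class(**params)                   # instantiate with its hyperparameters
    trainer.fit(X_train, y_train, X_val, y_val)       # fit on the training split
    score = scoring(y_test, trainer.predict(X_test))  # task-specific scoring function
    print(f'{name}: {score}')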

main.py

To start the benchmark, run

python3 main.py

A directory for the current run is created; it contains a training log (capturing the console output) and the model checkpoints of all runs.

Add Custom Models

To benchmark models we use a common interface we call trainer. A trainer is an object that implements the following methods:

fit() 
predict() 
save() 
load() 

Implementation of custom models

To implement your own custom model, create a class that implements the above methods. If you use library models, wrap them in a class that implements the above interface used in our benchmark.
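As a minimal sketch, a scikit-learn model could be wrapped as follows. The fit() signature including a validation split is an assumption and should be checked against how benchmark.py calls the trainers; the use of joblib for persistence and the flattening of the EEG input are choices made for this example, not requirements.

import joblib
from sklearn.linear_model import LogisticRegression

class MyLogisticRegressionTrainer:
    """Hypothetical wrapper exposing the trainer interface."""

    def __init__(self, **params):
        self.model = LogisticRegression(**params)

    def fit(self, X_train, y_train, X_val=None, y_val=None):
        # sklearn estimators do not use the validation split
        self.model.fit(X_train.reshape(len(X_train), -1), y_train.ravel())

    def predict(self, X):
        return self.model.predict(X.reshape(len(X), -1))

    def save(self, path):
        joblib.dump(self.model, path)

    def load(self, path):
        self.model = joblib.load(path)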

Adding custom models to our benchmark pipeline

In hyperparameters.py, add your custom models to the your_models dictionary. You can add any objects that implement the above interface. Make sure to enable your custom models in config.py.
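Assuming a wrapper like the one sketched in the previous section, the registration could look roughly like the lines below. The exact nesting and value format of the your_models dictionary (task, dataset, preprocessing, then model name mapping to the model class and its constructor arguments) should be copied from how the built-in models are registered in hyperparameters.py.

# Hypothetical registration; mirror the structure of the built-in model entries
your_models['LR_task']['antisaccade']['min']['MyLogisticRegression'] = \
    [MyLogisticRegressionTrainer, {'C': 1.0, 'max_iter': 1000}]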
