Code and data-processing files for our paper.

Overview

Code scripts and processing files for our EEG sleep staging paper.

1. Folder Tree

  • ./src_preprocess (data preprocessing files for SHHS and Sleep EDF)

    • sleepEDF_cassette_process.py (script for processing Sleep EDF data)
    • shhs_process.py (script for processing the SHHS dataset)
  • ./src

    • loss.py (the contrastive loss functions of MoCo, SimCLR, BYOL, SimSiam, and our ContraWR; see the loss sketch after this list)
    • model.py (the encoder model for Sleep EDF and SHHS data)
    • self_supervised.py (the code for running the self-supervised models)
    • supervised.py (the code for running the supervised STFT CNN model)
    • utils.py (other functionalities, e.g., data loader)
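
As a rough illustration of the family of objectives implemented in loss.py, the following is a minimal NT-Xent (SimCLR-style) contrastive loss sketch. It is illustrative only, not the repository's exact code.

    # Minimal NT-Xent (SimCLR-style) contrastive loss sketch.
    # Illustrative only; loss.py implements the paper's actual losses.
    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: (batch, dim) embeddings of two augmented views."""
        batch = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim)
        sim = z @ z.t() / temperature                       # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))                   # mask self-similarity
        # positives: row i (view 1) pairs with row i + batch (view 2), and vice versa
        targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
        return F.cross_entropy(sim, targets.to(sim.device))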

2. Data Preparation

2.1 Instructions for Sleep EDF

  • Step 1: Download the Sleep EDF data from https://physionet.org/content/sleep-edfx/1.0.0/
    • We use the Sleep EDF cassette portion.
    mkdir SLEEP_data; cd SLEEP_data
    wget -r -N -c -np https://physionet.org/files/sleep-edfx/1.0.0/
  • Step 2: Run sleepEDF_cassette_process.py to process the data
    • Run the following commands. The processed data will be stored in ./SLEEP_data/cassette_processed/pretext, ./SLEEP_data/cassette_processed/train, and ./SLEEP_data/cassette_processed/test.
    cd ../src_preprocess
    python sleepEDF_cassette_process.py
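
For a quick sanity check of the downloaded files, reading one cassette recording might look like the sketch below. The mne package and the example file name are assumptions for illustration; sleepEDF_cassette_process.py performs the actual preprocessing.

    # Hedged sketch: inspecting one Sleep EDF cassette recording with MNE.
    # The mne package and the file name are illustrative assumptions.
    import mne

    raw = mne.io.read_raw_edf(
        'SLEEP_data/physionet.org/files/sleep-edfx/1.0.0/'
        'sleep-cassette/SC4001E0-PSG.edf', preload=True)
    print(raw.info['ch_names'], raw.info['sfreq'])  # channel names and sampling rate
    data = raw.get_data()                           # (n_channels, n_samples) array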

2.2 Instructions for SHHS

  • Step 1: Download the SHHS data from https://sleepdata.org/datasets/shhs
    mkdir SHHS_data; cd SHHS_data
    [THEN DOWNLOAD YOUR DATASET HERE, NAME THE FOLDER "SHHS"]
  • Step 2: Run shhs_process.py to process the data
    • Run the following commands. The processed data will be stored in ./SHHS_data/processed/pretext, ./SHHS_data/processed/train, and ./SHHS_data/processed/test.
    cd ../src_preprocess
    python shhs_process.py

3. Running the Experiments

First, go to the ./src directory and run the supervised model:

cd ./src
# run on the SLEEP dataset
python -W ignore supervised.py --dataset SLEEP --n_dim 128
# run on the SHHS dataset
python -W ignore supervised.py --dataset SHHS --n_dim 256
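
For intuition, the supervised model consumes short-time Fourier transform (STFT) spectrograms of each 30-second epoch. Below is a hedged sketch of that featurization; the sampling rate and STFT parameters are illustrative, not the repository's settings.

    # Hedged sketch of STFT featurization for the supervised CNN.
    # Sampling rate and STFT parameters are illustrative assumptions.
    import numpy as np
    from scipy.signal import stft

    fs = 100                                   # e.g., 100 Hz EEG
    epoch = np.random.randn(30 * fs)           # one 30-second epoch
    f, t, Z = stft(epoch, fs=fs, nperseg=256)  # complex spectrogram
    spec = np.log(np.abs(Z) + 1e-8)            # log-magnitude input to the CNN
    print(spec.shape)                          # (n_freq_bins, n_time_frames)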

Second, run the self-supervised models:

# run on the SLEEP dataset
python -W ignore self_supervised.py --dataset SLEEP --model ContraWR --n_dim 128
# run on the SHHS dataset
python -W ignore self_supervised.py --dataset SHHS --model ContraWR --n_dim 256
# try other self-supervised models
# change "ContraWR" to "MoCo", "SimCLR", "BYOL", "SimSiam"
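
A pretrained encoder is commonly assessed by training a lightweight classifier on top of its frozen representations. Here is a minimal sketch of that protocol; the encoder and data loader are placeholders, not this repository's API.

    # Hedged sketch of linear evaluation on a frozen pretrained encoder.
    # `encoder` and `train_loader` are placeholders, not this repo's API.
    import torch
    import torch.nn as nn

    def linear_eval(encoder, train_loader, n_dim=128, n_classes=5, epochs=10):
        encoder.eval()                        # freeze the pretrained encoder
        clf = nn.Linear(n_dim, n_classes)     # e.g., 5 sleep stages
        opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x, y in train_loader:
                with torch.no_grad():
                    z = encoder(x)            # (batch, n_dim) representation
                loss = nn.functional.cross_entropy(clf(z), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return clf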
Comments
  • Supervised and untrained encoder

    Can you please provide some information about how the supervised model and the untrained encoder are trained in the paper? My assumption is that the supervised model is trained on both the pretext and train groups, while the encoder model is trained on only the train group. When I ran the supervised.py script I got an accuracy of 78.28 and a kappa score of only 0.3871; clearly the model performs badly with so few training subjects, even though I used the same random seed as yours. I have been experimenting on the Sleep EDF dataset.

    Also, why is there a difference in metrics between the table values and the graph values in the paper? Do they report the same results?

    [Two screenshots attached comparing the table and graph metrics.]

    opened by likith012 4
  • ContraWR+ on time series

    Is it a good idea to parameterize the weights that ContraWR+ currently computes as the dot product between two latent vectors? If so, how do you think it is best done? Let me know your thoughts; I shall experiment with them. (See the bilinear-similarity sketch after these comments.)

    opened by likith012 1
  • About augment function in utils

    def noise_channel(ts, mode, degree, bound):
        """
        Add noise to ts
        
        mode: high, low, both
        degree: degree of noise, compared with range of ts    
        
        Input:
            ts: (n_length)
        Output:
            out_ts: (n_length)
            
        """
    

    noise_channel expects a single-channel time series, but add_noise and remove_noise, which call it, pass it x[i,:]. Their docstrings say x: (n_length, n_channel), so wouldn't each slice passed to noise_channel be the all-channel signal at a single sample point rather than one channel's full series? Could the author clarify whether something is wrong here? (See the shape-consistency sketch after these comments.)

        def add_noise(self, x, ratio):
            """
            Add noise to multiple ts
            Input: 
                x: (n_length, n_channel)
            Output: 
                x: (n_length, n_channel)
            """
            for i in range(self.n_channels):
                if np.random.rand() > ratio:
                    mode = np.random.choice(['high', 'low', 'both', 'no'])
                    x[i,:] = noise_channel(x[i,:], mode=mode, degree=0.05, bound=self.bound)
            return x
        
        def remove_noise(self, x, ratio):
            """
            Remove noise from multiple ts
            Input: 
                x: (n_length, n_channel)
            Output: 
                x: (n_length, n_channel)
            """
            for i in range(self.n_channels):
                rand = np.random.rand()
                if rand > 0.75:
                    x[i, :] = denoise_channel(x[i, :], self.bandpass1, self.signal_freq, bound=self.bound) +\
                            denoise_channel(x[i, :], self.bandpass2, self.signal_freq, bound=self.bound)
                elif rand > 0.5:
                    x[i, :] = denoise_channel(x[i, :], self.bandpass1, self.signal_freq, bound=self.bound)
                elif rand > 0.25:
                    x[i, :] = denoise_channel(x[i, :], self.bandpass2, self.signal_freq, bound=self.bound)
                else:
                    pass
    
            return x
    
    opened by YoloEliwa 1
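
On the ContraWR+ question above: one way to make the commenter's idea concrete is to replace the plain dot product z1 · z2 with a learned bilinear similarity z1ᵀ W z2. This is a hedged sketch of one possible parameterization, not the paper's method.

    # Hedged sketch: a learned bilinear similarity z1ᵀ W z2 in place of
    # the plain dot product. One possible parameterization, not the paper's.
    import torch
    import torch.nn as nn

    class BilinearSimilarity(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.W = nn.Parameter(torch.eye(dim))  # initialize as a plain dot product

        def forward(self, z1, z2):
            # (batch, dim) @ (dim, dim), then elementwise with z2 -> (batch,)
            return ((z1 @ self.W) * z2).sum(dim=1)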
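
On the augmentation-shape question above: the x[i, :] indexing over self.n_channels implies a channel-first (n_channel, n_length) layout, which contradicts the (n_length, n_channel) docstrings. Assuming the arrays are in fact channel-first, one consistent fix is simply to correct the docstrings, as in this sketch:

    # Hedged sketch: docstring corrected to the channel-first layout
    # that the x[i, :] indexing already assumes.
    def add_noise(self, x, ratio):
        """
        Add noise to multiple time series.
        Input:
            x: (n_channel, n_length)   # channel-first, matching x[i, :]
        Output:
            x: (n_channel, n_length)
        """
        for i in range(self.n_channels):
            if np.random.rand() > ratio:
                mode = np.random.choice(['high', 'low', 'both', 'no'])
                x[i, :] = noise_channel(x[i, :], mode=mode, degree=0.05, bound=self.bound)
        return x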