Code for the paper "TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks"

Overview

TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks

This is a Python 3 / PyTorch implementation of the TadGAN paper. The associated blog post explaining the architecture details can be found here.

Data:

The TadGAN architecture can be used for detecting anomalies in time series data.

Pretrained Model:

The trained model is saved in the Model directory. The training is incomplete and the model has to be retrained for other datasets.

Architecture:

The model implements an encoder and a decoder as the generator and two critics as the discriminators, as described in the paper. The loss function is the Wasserstein loss with a gradient penalty.
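
For reference, here is a minimal sketch of a Wasserstein critic loss with gradient penalty; it is not taken from this repository, and the names gradient_penalty, critic_x_loss and the lambda_gp value are illustrative assumptions.

    import torch

    def gradient_penalty(critic, real, fake):
        # Illustrative WGAN-GP penalty term, not this repository's exact implementation.
        alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
        # Random interpolates between real and generated samples.
        interpolated = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        critic_out = critic(interpolated)
        gradients = torch.autograd.grad(
            outputs=critic_out,
            inputs=interpolated,
            grad_outputs=torch.ones_like(critic_out),
            create_graph=True,
            retain_graph=True,
        )[0]
        # Standard WGAN-GP form: penalize deviation of the gradient norm from 1.
        grad_norm = gradients.view(gradients.size(0), -1).norm(2, dim=1)
        return ((grad_norm - 1) ** 2).mean()

    def critic_x_loss(critic_x, x_real, x_fake, lambda_gp=10.0):
        # The critic minimizes (fake score - real score) plus the weighted penalty.
        wasserstein = critic_x(x_fake).mean() - critic_x(x_real).mean()
        return wasserstein + lambda_gp * gradient_penalty(critic_x, x_real, x_fake)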

Usage:

  1. Format of the dataset: the dataset should have a column named signal containing the signal values and a column named anomaly containing the true labels (used during validation). A minimal example is sketched after this list.

  2. Delete the contents of the directory Model.

  3. Change the file name exchange-2_cpc_results.csv in main.py to the name of your dataset.
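
For step 1, a minimal sketch of writing a CSV in the expected format; the file name my_dataset.csv and the toy signal are assumptions for illustration only.

    import numpy as np
    import pandas as pd

    # Build a toy dataset with the two required columns: 'signal' and 'anomaly'.
    t = np.arange(1000)
    signal = np.sin(0.02 * t) + 0.1 * np.random.randn(len(t))
    anomaly = np.zeros(len(t), dtype=int)

    # Inject a labelled anomalous segment so the validation step has ground truth.
    signal[600:620] += 3.0
    anomaly[600:620] = 1

    pd.DataFrame({"signal": signal, "anomaly": anomaly}).to_csv("my_dataset.csv", index=False)

main.py can then be pointed at my_dataset.csv as described in step 3.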

Note:

This is an independent implementation, and I am not affiliated with the authors of the paper.

Comments
  • Possible bug in anomaly_detection.py

    In anomaly_detection.py line 105, you calculate the standard deviation of the mean value (which is a scalar), which makes no sense. I think it should be window_std = np.std(window_elts).

    opened by shyam1998 3
  • 1-lipschitz penalty

    Hi, thanks for your code. While reading it, I have a small question about how you calculate gp_loss. In the original paper, the 1-Lipschitz penalty is written as (gp_loss - 1)^2, but your code uses only gp_loss. I understand that this still keeps the penalty small and meets the requirement; I want to know whether this form was chosen based on experimental results. Looking forward to your reply.

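    As a side note, a tiny sketch (not from this repository) of the two penalty forms being compared, where grad_norm stands in for the per-sample gradient norm of the critic at interpolated points:

    import torch

    grad_norm = torch.rand(64)  # stand-in for per-sample critic gradient norms

    gp_paper = ((grad_norm - 1) ** 2).mean()  # form written in the WGAN-GP / TadGAN papers
    gp_direct = grad_norm.mean()              # form the issue says this code uses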

    opened by 529261027 2
  • How to use other datasets for TadGAN?

    • Python version: 3.6
    • Operating System: Ubuntu 18.04.5 LTS

    Description

    The dataset for your code should have a column named signal containing the signals and a column named anomaly containing the true labels. But in the TadGAN paper, e.g., the NAB (Art) dataset has only two columns (timestamp and value). How can I use a dataset like NAB?
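
    Not an official answer, but one way to adapt an NAB-style CSV (columns timestamp and value) is sketched below. The anomaly column here is only a placeholder, since NAB's ground-truth windows live in separate label files.

    import pandas as pd

    # NAB-style file with columns: timestamp, value (the file name is illustrative).
    df = pd.read_csv("art_daily_jumpsup.csv")

    # Rename 'value' to the 'signal' column the code expects.
    df = df.rename(columns={"value": "signal"})

    # Placeholder labels: all zeros. Fill in 1s over the true anomaly windows
    # from the NAB label files if you want meaningful precision/recall.
    df["anomaly"] = 0

    df[["signal", "anomaly"]].to_csv("nab_formatted.csv", index=False)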

    opened by kanesp 2
  • Problem running the model

    Hi,

    Thanks for the translation of TadGAN into PyTorch. I'm trying to run the code in this GitHub repo by simply running:

    python main.py
    

    and I'm getting the following error:

    Traceback (most recent call last):
      File "main.py", line 251, in <module>
        anomaly_detection.test(test_loader, encoder, decoder, critic_x)
      File "/content/drive/MyDrive/TadGAN/anomaly_detection.py", line 27, in test
        find_scores(y_true, y_predict)
      File "/content/drive/MyDrive/TadGAN/anomaly_detection.py", line 131, in find_scores
        precision = tp / (tp + fp)
    ZeroDivisionError: division by zero
    

    What could be the problem causing this error?

    In addition, I'd like to try other datasets mentioned in the TadGAN paper, like MSL for example. Is that possible with the code already present?

    Thanks in advance.

    Alessandro
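
    Not a fix for the underlying detection behaviour, but a sketch of how find_scores could guard the metric computation so it degrades gracefully when no anomalies are predicted (tp, fp, fn as in the traceback; the helper name is hypothetical):

    def safe_scores(tp, fp, fn):
        # Avoid ZeroDivisionError when nothing is predicted anomalous (tp + fp == 0)
        # or when the split contains no true anomalies (tp + fn == 0).
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
        return precision, recall, f1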

    opened by aleflabo 1
  • Is this ready to use?

    Hi,

    Thanks for making this available. I see that you just have one commit. Is this ready to use?

    I will look more carefully at your paper, but does the technique work reasonably well for signals that have very little to no periodic structure?

    Thanks

    opened by kb1ooo 1
  • Using multivariate time series data

    Hi guys, awesome work!

    I was exploring it and couldn't figure out how to use multivariate time series data, as also described in the paper. Looking forward to the instructions.

    opened by nik13 0
  • prune_false_positive

    Hi, thanks for the translation of TadGAN into PyTorch. While applying the prune_false_positive function, I found it somewhat confusing. I think we should find, for every anomalous sequence, its maximum anomaly score, as well as the maximum normal score. So I modified the code of prune_false_positive; can you help me check whether it is correct? Looking forward to your reply.

    import numpy as np

    def prune_false_positive(is_anomaly, anomaly_score, change_threshold):
        #The model might detect a high number of false positives.
        #In such a scenario, pruning of the false positive is suggested.
        #Method used is as described in the Section 5, part D Identifying Anomalous
        #Sequence, sub-part - Mitigating False positives
        #TODO code optimization
        seq_details = []
        delete_sequence = 0
        start_position = 0
        end_position = 0
        anomaly_score = np.abs(anomaly_score)  # calculate standard deviations from the mean of the window
        max_seq_element = anomaly_score[0]
        for i in range(1, len(is_anomaly)):
            if is_anomaly[i] == 1 and is_anomaly[i-1] == 0:  # anomaly start
                start_position = i  # anomaly start position
                max_seq_element = anomaly_score[i]  # first anomaly score
            if is_anomaly[i] == 1 and is_anomaly[i-1] == 1 and anomaly_score[i] > max_seq_element:  # continuous anomaly, compare anomaly score
                max_seq_element = anomaly_score[i]
            if i+1 == len(is_anomaly) and is_anomaly[i] == 1:  # last is anomaly
                seq_details.append([start_position, i, max_seq_element, delete_sequence])
            elif is_anomaly[i] == 1 and is_anomaly[i+1] == 0:  # anomaly end
                end_position = i  # anomaly end postion
                seq_details.append([start_position, end_position, max_seq_element, delete_sequence])
    
        max_elements = list()
        max_elements.append(max(anomaly_score[is_anomaly==0]))  # normal data max score
        for i in range(0, len(seq_details)):
            max_elements.append(seq_details[i][2])
    
        max_elements.sort(reverse=True)
        max_elements = np.array(max_elements)
        change_percent = abs(max_elements[1:] - max_elements[:-1]) / max_elements[1:]
    
        # Appending 0 for the 1 st element which is not change percent
        delete_seq = np.append(np.array([0]), change_percent < change_threshold)
    
        # Mapping max element and seq details
        for i, max_elt in enumerate(max_elements):
            for j in range(0, len(seq_details)):
                if seq_details[j][2] == max_elt:
                    seq_details[j][3] = delete_seq[i]
    
        for seq in seq_details:
            if seq[3] == 1: # Delete sequence
                is_anomaly[seq[0]:seq[1]+1] = [0] * (seq[1] - seq[0] + 1)
     
        return is_anomaly
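
    For reference, a hypothetical usage of the modified function with toy inputs; the values are chosen so the second, weaker sequence falls within the change threshold of the strongest one and gets pruned.

    import numpy as np

    is_anomaly = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0])
    anomaly_score = np.array([0.1, 0.2, 3.0, 2.5, 0.3, 0.1, 2.9, 2.8, 0.2, 0.1])

    pruned = prune_false_positive(is_anomaly, anomaly_score, change_threshold=0.2)
    print(pruned)  # indices 6-7 are reset to 0: their max score is within 20% of the top sequence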
    
    opened by 529261027 0
  • About Encoder/Decoder loss function

    After reading the code carefully, I found that the loss function of the encoder consists of three parts, like this: loss_enc = mse + critic_score_valid_x - critic_score_fake_x. I can understand 'mse', but I cannot understand 'critic_score_valid_x' and 'critic_score_fake_x'; I think they are not related to the encoder. Should they be replaced with 'critic_score_valid_z' and 'critic_score_fake_z'?
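
    For context, a sketch of the paper-style encoder objective the question refers to; the names and the standard-normal prior are assumptions, not this repository's exact code.

    import torch

    def encoder_loss(x, encoder, decoder, critic_z):
        z_fake = encoder(x)
        z_real = torch.randn_like(z_fake)  # samples from the latent prior
        # Cycle-consistency between x and its reconstruction G(E(x)).
        mse = torch.nn.functional.mse_loss(x, decoder(z_fake))
        # Wasserstein term from critic Cz: push E(x) towards the prior samples.
        return mse + critic_z(z_real).mean() - critic_z(z_fake).mean()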

    opened by zhaotianzi 0
  • The loss of the encoder and decoder is very high

    I have used the model on my own dataset, but after a long time of training the loss is still very high. Can you tell me how to reduce the loss?

    DEBUG:root:critic x loss -30.026 critic z loss 0.416 encoder loss 1265.846 decoder loss 1235.256

    opened by zhaotianzi 3
  • TadGAN does not work with the default setup

    Hi,

    I have tried to run the code with the current setup (number of epochs is 30) but I get

    File "TadGAN/anomaly_detection.py", line 129, in find_scores
      precision = tp / (tp + fp)
    ZeroDivisionError: division by zero

    Any ideas about what is going on ?

    With Kind Regards, Roberto

    opened by rruizdeaustri 17
Owner
Arun