Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting

Overview

Autoformer (NeurIPS 2021)

Time series forecasting is a critical demand in real applications. Inspired by classic time series analysis and stochastic process theory, we propose Autoformer as a general series forecasting model [paper]. Autoformer goes beyond the Transformer family and achieves series-wise connections for the first time.

In long-term forecasting, Autoformer achieves SOTA, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.

Autoformer vs. Transformers

1. Deep decomposition architecture

We renovate the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.



Figure 1. Overall architecture of Autoformer.
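
For illustration, the sketch below shows the core decomposition idea as a standalone function: a moving-average trend plus a seasonal residual. It is a simplified example, not the exact decomposition module used in this repository.

import math
import torch
import torch.nn.functional as F

def decompose(x, kernel_size=25):
    # x: (batch, length); an odd kernel_size keeps the moving average centered
    pad = (kernel_size - 1) // 2
    x_padded = F.pad(x.unsqueeze(1), (pad, pad), mode='replicate')   # (batch, 1, length + 2*pad)
    trend = F.avg_pool1d(x_padded, kernel_size=kernel_size, stride=1).squeeze(1)
    seasonal = x - trend
    return seasonal, trend

# toy usage: a noisy sine wave riding on a slow trend
t = torch.linspace(0, 8 * math.pi, 96).unsqueeze(0)
series = torch.sin(t) + 0.05 * t + 0.1 * torch.randn_like(t)
seasonal, trend = decompose(series)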

2. Series-wise Auto-Correlation mechanism

Inspired by stochastic process theory, we design the Auto-Correlation mechanism, which can discover period-based dependencies and aggregate information at the series level. This empowers the model with inherent log-linear complexity. This series-wise connection contrasts clearly with the previous self-attention family.



Figure 2. Auto-Correlation mechanism.
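
For illustration, the sketch below shows the period-discovery step behind Auto-Correlation: autocorrelation is computed in the frequency domain via FFT (the Wiener-Khinchin relation) in O(L log L), and the top-k lags are treated as candidate periods. It is a simplified example, not this repository's AutoCorrelation layer.

import math
import torch

def dominant_periods(x, top_k=3):
    # x: (batch, length); FFT-based autocorrelation, O(L log L)
    length = x.size(-1)
    x_fft = torch.fft.rfft(x, dim=-1)
    acf = torch.fft.irfft(x_fft * torch.conj(x_fft), n=length, dim=-1)   # (batch, length)
    acf = acf.mean(dim=0)          # aggregate over the batch
    acf[0] = float('-inf')         # drop the trivial zero lag
    scores, lags = torch.topk(acf, top_k)
    return lags, scores

# toy usage: a series with period 24 should rank lag 24 (and its multiples) highest
t = torch.arange(192, dtype=torch.float32)
x = torch.sin(2 * math.pi * t / 24).repeat(4, 1)
lags, scores = dominant_periods(x)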

Get Started

  1. Install Python 3.6, PyTorch 1.9.0.
  2. Download data. You can obtain all the six benchmarks from Tsinghua Cloud or Google Drive. All the datasets are well pre-processed and can be used easily.
  3. Train the model. We provide the experiment scripts for all benchmarks under the folder ./scripts. You can reproduce the experiment results by running:
bash ./scripts/ETT_script/Autoformer_ETTm1.sh
bash ./scripts/ECL_script/Autoformer.sh
bash ./scripts/Exchange_script/Autoformer.sh
bash ./scripts/Traffic_script/Autoformer.sh
bash ./scripts/Weather_script/Autoformer.sh
bash ./scripts/ILI_script/Autoformer.sh
  4. Specially designed implementation
  • Speedup Auto-Correlation: We built the Auto-Correlation mechanism as a batch-normalization-style block to make it more memory-access friendly. See the paper for details.

  • Without position embedding: Since the series-wise connection inherently keeps the sequential information, Autoformer does not need position embeddings, unlike canonical Transformers.

Main Results

We experiment on six benchmarks, covering five mainstream applications. We compare our model with ten baselines, including Informer, N-BEATS, etc. Generally, for the long-term forecasting setting, Autoformer achieves SOTA, with a 38% relative improvement over previous baselines.

Citation

If you find this repo useful, please cite our paper.

@inproceedings{wu2021autoformer,
  title={Autoformer: Decomposition Transformers with {Auto-Correlation} for Long-Term Series Forecasting},
  author={Haixu Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

Contact

If you have any questions or want to use the code, please contact [email protected].

Acknowledgement

We appreciate the following GitHub repos a lot for their valuable code bases and datasets:

https://github.com/zhouhaoyi/Informer2020

https://github.com/zhouhaoyi/ETDataset

https://github.com/laiguokun/multivariate-time-series-data

Comments
  • Inverse Transformation of the Data while Plotting Ground Truth Vs Prediction

    Good day, I'm currently working on forecasting a univariate time series using your Autoformer in Google Colab. I've attached the dataset I'm using below, and I am currently trying to predict the Voltage column in the dataset.

    dataset4.csv

    After prediction on the test data, the prediction and ground truth values are plotted; however, they are not scaled back to their original values. I would like your help with this.

    I've made some changes to run.py and data_factory.py to fit my custom dataset, as shown in the code below:

    run.py.pdf

    • At line 69, I've added a new line of code, parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False), and made some changes to the epochs and the enc_in, dec_in and c_out values.

    data_factory.py.pdf

    • At line 9, I've inserted dataset4 as the custom dataset.

    After running the command: !python run.py --is_training 1 --model_id test --model Informer --data dataset4

    I was able to train the model and run it on the test data, and I got the corresponding MAE and MSE values.

    This is the code used to Plot the results:

    %cd /content/drive/MyDrive/Autoformer_main/Autoformer-main/results/test_Informer_dataset4_ftM_sl96_ll48_pl24_dm512_nh8_el2_dl1_df2048_fc5_ebtimeF_dtTrue_test_0
    !ls

    import numpy as np
    preds = np.load('pred.npy')
    trues = np.load('true.npy')
    import matplotlib.pyplot as plt
    import seaborn as sns

    import matplotlib.pyplot as plt
    plt.figure()
    plt.plot(trues[:, 1, -1], label='GroundTruth')
    plt.plot(preds[:, 1, -1], label='Prediction')
    plt.legend()
    from google.colab import files
    plt.show()

    In the plot below you can see that the ground truth value of Voltage is between -2 and 1:

    image

    But if you look at the original dataset, where I've plotted the ground truth value of Voltage, it is between 3 and 4.5, as shown below:

    image

    You are using StandardScaler to scale the data; however, I am unable to rescale the standardized data back to its original values when plotting both the ground truth and prediction values.

    Please help me use the inverse transform to rescale my data and predictions back to their original values.

    Thank you
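
    For reference, a minimal sketch of one way to undo the standardization after the fact, assuming the scaler can be refitted exactly the way the data loader fitted it (the 'date' column handling, the column order, and the 70% training split below are assumptions and must match what the loader actually did):

        import numpy as np
        import pandas as pd
        from sklearn.preprocessing import StandardScaler

        preds = np.load('pred.npy')    # (num_windows, pred_len, num_columns), standardized
        trues = np.load('true.npy')

        # refit a scaler on the same columns and training rows the loader used (assumption)
        df = pd.read_csv('dataset4.csv')
        feature_cols = [c for c in df.columns if c != 'date']
        train_values = df[feature_cols].values[: int(len(df) * 0.7)]
        scaler = StandardScaler().fit(train_values)

        n, p, c = preds.shape
        preds_orig = scaler.inverse_transform(preds.reshape(-1, c)).reshape(n, p, c)
        trues_orig = scaler.inverse_transform(trues.reshape(-1, c)).reshape(n, p, c)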

    opened by vinayakrajurs 17
  • Train, Val, Test dataset leakage

    Hi, thanks for making your code public! It's great for the forecasting community and research.

    I am unsure about how you create your train, val, and test splits. The splits seem to overlap by a length of self.seq_len; see the code snippets below.

    https://github.com/thuml/Autoformer/blob/7f9ce3c58b73b4f4b1163e093f47550a5cdbc6d5/data_provider/data_loader.py#L237-L240
    https://github.com/thuml/Autoformer/blob/7f9ce3c58b73b4f4b1163e093f47550a5cdbc6d5/data_provider/data_loader.py#L267-L268

    If self.seq_len is longer than self.pred_len, then sequences that the model is trained on will be present in the validation split. Data used for validation would also be present in the test split.

    opened by Big-Tree 9
  • A small question about the cuda error

    Hello, I'm a little confused about this situation: CUDA runs out of memory after I finish inference. I tried debugging, but I found it hard to understand.

    Traceback (most recent call last):
      File "/data/jyq/Autoformer-main1/run.py", line 126, in <module>
        torch.cuda.empty_cache()
      File "/data/jyq/Anaconda3/envs/autoformer/lib/python3.6/site-packages/torch/cuda/memory.py", line 114, in empty_cache
        torch._C._cuda_emptyCache()
    RuntimeError: CUDA error: out of memory
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

    opened by Daniel-Jiang358 6
  • A question about the Transformer part

    Hi, I'm new to AI. While thinking about the role of n_head, I noticed that in the encoder, x goes through fully connected layers to become q, k, v with shape (32, 8, 64, 96) (the input length is 96). It seems that even if the shape were simply kept as (32, 512, 96), without splitting it into 8 heads of 64, the computation would be the same?

    image
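
    For reference, a small shape-only sketch of the head split described above (sizes taken from the post; this is generic multi-head attention bookkeeping, not this repository's exact code):

        import torch

        B, L, d_model, n_heads = 32, 96, 512, 8
        head_dim = d_model // n_heads                          # 64

        x = torch.randn(B, L, d_model)
        q = x.view(B, L, n_heads, head_dim).transpose(1, 2)    # (32, 8, 96, 64)
        k = x.view(B, L, n_heads, head_dim).transpose(1, 2)    # in practice q and k use different projections

        # eight separate (L x L) attention maps, one per 64-dim head
        scores_multi = q @ k.transpose(-2, -1) / head_dim ** 0.5   # (32, 8, 96, 96)

        # a single 512-dim head yields only one attention map, so the mixing differs
        # even though the total number of projection parameters is the same
        scores_single = x @ x.transpose(-2, -1) / d_model ** 0.5   # (32, 96, 96)
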
    opened by h3z 5
  • Operation on Dataset

    In data_provider/data_loader.py, class Dataset_ETT_hour(Dataset):

        border1s = [0, 12 * 30 * 24 - self.seq_len, 12 * 30 * 24 + 4 * 30 * 24 - self.seq_len]
        border2s = [12 * 30 * 24, 12 * 30 * 24 + 4 * 30 * 24, 12 * 30 * 24 + 8 * 30 * 24]
        border1 = border1s[self.set_type]
        border2 = border2s[self.set_type]  # didn't understand this

        if self.features == 'M' or self.features == 'MS':  # input
            cols_data = df_raw.columns[1:]  # DataFrame.columns returns the column labels; drop the date column here
            df_data = df_raw[cols_data]  # then take the data by indexing with the column names
        elif self.features == 'S':
            df_data = df_raw[[self.target]]  # take only the OT column as input

        if self.scale:  # scale is initialized to True in __init__
            train_data = df_data[border1s[0]:border2s[0]]  # not sure why; could ask author_wu
            self.scaler.fit(train_data.values)  # in short, compute the training set's mean, variance, max, min, etc.
            data = self.scaler.transform(df_data.values)  # based on the fit, apply the normalization
        else:
            data = df_data.values
    

    Thanks for reading this. I wonder why and how you set the borders, and why train_data = df_data[border1s[0]:border2s[0]].
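
    For reference, a small numeric illustration of what those borders work out to for the hourly ETT data, assuming the default seq_len = 96 (a 12/4/4-month split using 30-day months, with the scaler fitted on the training slice only):

        seq_len = 96   # default encoder input length in the scripts

        # 12 months of training, 4 of validation, 4 of test, in hours (30-day months)
        border1s = [0, 12 * 30 * 24 - seq_len, 12 * 30 * 24 + 4 * 30 * 24 - seq_len]
        border2s = [12 * 30 * 24, 12 * 30 * 24 + 4 * 30 * 24, 12 * 30 * 24 + 8 * 30 * 24]

        for name, b1, b2 in zip(['train', 'val', 'test'], border1s, border2s):
            print(name, b1, b2)
        # train 0 8640, val 8544 11520, test 11424 14400
        # val/test start seq_len rows early so the first window still has a full encoder input;
        # fitting the scaler on rows 0..8640 only keeps val/test statistics out of the normalization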

    opened by Zero-coder 5
  • reproduce ETTm2 univariate results in Table 2

    Thank you very much for sharing the code! I can't reproduce the univariate prediction results, even when I use the scripts. I'm running with pytorch==1.9.0 and Python 3.7. The log is as follows.

    Namespace(activation='gelu', batch_size=32, c_out=1, checkpoints='./checkpoints/', d_ff=2048, d_layers=1, d_model=512, data='ETTm2', data_path='ETTm2.csv', dec_in=1, des='Exp', devices='0,1,2,3', distil=True, do_predict=False, dropout=0.05, e_layers=2, embed='timeF', enc_in=1, factor=1, features='S', freq='h', gpu=0, is_training=1, itr=1, label_len=96, learning_rate=0.0001, loss='mse', lradj='type1', model='Autoformer', model_id='ETTm2_96_96', moving_avg=25, n_heads=8, num_workers=10, output_attention=False, patience=3, pred_len=96, root_path='./dataset/ETT-small/', seq_len=96, target='OT', train_epochs=10, use_amp=False, use_gpu=True, use_multi_gpu=False) Use GPU: cuda:0

    start training : ETTm2_96_96_Autoformer_ETTm2_ftS_sl96_ll96_pl96_dm512_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>> train 34369 val 11425 test 11425 iters: 100, epoch: 1 | loss: 0.0966879 speed: 0.0820s/iter; left time: 872.3655s iters: 200, epoch: 1 | loss: 0.1096209 speed: 0.0755s/iter; left time: 795.7340s iters: 300, epoch: 1 | loss: 0.1418100 speed: 0.0757s/iter; left time: 790.5091s iters: 400, epoch: 1 | loss: 0.0920060 speed: 0.0763s/iter; left time: 789.2272s iters: 500, epoch: 1 | loss: 0.1261624 speed: 0.0763s/iter; left time: 781.5422s iters: 600, epoch: 1 | loss: 0.1070141 speed: 0.0767s/iter; left time: 778.2832s iters: 700, epoch: 1 | loss: 0.0975105 speed: 0.0770s/iter; left time: 773.4665s iters: 800, epoch: 1 | loss: 0.0870143 speed: 0.0773s/iter; left time: 768.7166s iters: 900, epoch: 1 | loss: 0.0923501 speed: 0.0772s/iter; left time: 760.1836s iters: 1000, epoch: 1 | loss: 0.1329619 speed: 0.0775s/iter; left time: 754.6965s Epoch: 1 cost time: 83.01469206809998 Epoch: 1, Steps: 1074 | Train Loss: 0.1192958 Vali Loss: 0.1495133 Test Loss: 0.1170112 Validation loss decreased (inf --> 0.149513). Saving model ... Updating learning rate to 0.0001 iters: 100, epoch: 2 | loss: 0.1307557 speed: 0.4428s/iter; left time: 4236.3043s iters: 200, epoch: 2 | loss: 0.0746148 speed: 0.0738s/iter; left time: 698.9521s iters: 300, epoch: 2 | loss: 0.0781722 speed: 0.0739s/iter; left time: 692.0664s iters: 400, epoch: 2 | loss: 0.0840582 speed: 0.0743s/iter; left time: 688.5206s iters: 500, epoch: 2 | loss: 0.0945234 speed: 0.0742s/iter; left time: 679.8758s iters: 600, epoch: 2 | loss: 0.1197210 speed: 0.0742s/iter; left time: 672.6283s iters: 700, epoch: 2 | loss: 0.0902257 speed: 0.0742s/iter; left time: 665.6231s iters: 800, epoch: 2 | loss: 0.1256524 speed: 0.0742s/iter; left time: 658.1316s iters: 900, epoch: 2 | loss: 0.1183934 speed: 0.0743s/iter; left time: 651.0201s iters: 1000, epoch: 2 | loss: 0.0817829 speed: 0.0745s/iter; left time: 645.9785s Epoch: 2 cost time: 80.55055594444275 Epoch: 2, Steps: 1074 | Train Loss: 0.1038648 Vali Loss: 0.1367796 Test Loss: 0.1242634 Validation loss decreased (0.149513 --> 0.136780). Saving model ... 
Updating learning rate to 5e-05 iters: 100, epoch: 3 | loss: 0.1129365 speed: 0.4511s/iter; left time: 3830.8966s iters: 200, epoch: 3 | loss: 0.1033758 speed: 0.0740s/iter; left time: 621.1884s iters: 300, epoch: 3 | loss: 0.1275468 speed: 0.0742s/iter; left time: 615.0512s iters: 400, epoch: 3 | loss: 0.0710122 speed: 0.0745s/iter; left time: 610.2690s iters: 500, epoch: 3 | loss: 0.0717602 speed: 0.0744s/iter; left time: 602.2747s iters: 600, epoch: 3 | loss: 0.1175499 speed: 0.0744s/iter; left time: 595.0589s iters: 700, epoch: 3 | loss: 0.1519643 speed: 0.0749s/iter; left time: 590.9385s iters: 800, epoch: 3 | loss: 0.0800418 speed: 0.0750s/iter; left time: 584.5540s iters: 900, epoch: 3 | loss: 0.0746994 speed: 0.0746s/iter; left time: 573.7547s iters: 1000, epoch: 3 | loss: 0.0830479 speed: 0.0745s/iter; left time: 565.4744s Epoch: 3 cost time: 80.76482033729553 Epoch: 3, Steps: 1074 | Train Loss: 0.0973873 Vali Loss: 0.1398393 Test Loss: 0.1179208 EarlyStopping counter: 1 out of 3 Updating learning rate to 2.5e-05 iters: 100, epoch: 4 | loss: 0.0736060 speed: 0.4432s/iter; left time: 3287.9202s iters: 200, epoch: 4 | loss: 0.1116835 speed: 0.0791s/iter; left time: 579.0581s iters: 300, epoch: 4 | loss: 0.0793331 speed: 0.0791s/iter; left time: 570.8419s iters: 400, epoch: 4 | loss: 0.0720609 speed: 0.0788s/iter; left time: 560.6287s iters: 500, epoch: 4 | loss: 0.1089775 speed: 0.0789s/iter; left time: 553.9732s iters: 600, epoch: 4 | loss: 0.0835999 speed: 0.0786s/iter; left time: 544.0225s iters: 700, epoch: 4 | loss: 0.1258804 speed: 0.0786s/iter; left time: 536.2984s iters: 800, epoch: 4 | loss: 0.0553840 speed: 0.0783s/iter; left time: 526.0710s iters: 900, epoch: 4 | loss: 0.0811464 speed: 0.0788s/iter; left time: 521.5191s iters: 1000, epoch: 4 | loss: 0.1213600 speed: 0.0785s/iter; left time: 511.5148s Epoch: 4 cost time: 85.34881067276001 Epoch: 4, Steps: 1074 | Train Loss: 0.0949577 Vali Loss: 0.1529772 Test Loss: 0.1448818 EarlyStopping counter: 2 out of 3 Updating learning rate to 1.25e-05 iters: 100, epoch: 5 | loss: 0.0767112 speed: 0.4476s/iter; left time: 2840.0901s iters: 200, epoch: 5 | loss: 0.0909449 speed: 0.0783s/iter; left time: 488.9913s iters: 300, epoch: 5 | loss: 0.1096216 speed: 0.0781s/iter; left time: 480.1752s iters: 400, epoch: 5 | loss: 0.1170162 speed: 0.0787s/iter; left time: 475.6964s iters: 500, epoch: 5 | loss: 0.0840287 speed: 0.0778s/iter; left time: 462.4421s iters: 600, epoch: 5 | loss: 0.0611459 speed: 0.0773s/iter; left time: 451.9506s iters: 700, epoch: 5 | loss: 0.0679403 speed: 0.0777s/iter; left time: 446.5334s iters: 800, epoch: 5 | loss: 0.0693620 speed: 0.0783s/iter; left time: 442.0431s iters: 900, epoch: 5 | loss: 0.1007061 speed: 0.0784s/iter; left time: 434.5144s iters: 1000, epoch: 5 | loss: 0.0944251 speed: 0.0783s/iter; left time: 426.3235s Epoch: 5 cost time: 84.73156476020813 Epoch: 5, Steps: 1074 | Train Loss: 0.0931660 Vali Loss: 0.1550726 Test Loss: 0.1493730 EarlyStopping counter: 3 out of 3 Early stopping testing : ETTm2_96_96_Autoformer_ETTm2_ftS_sl96_ll96_pl96_dm512_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< test 11425 test shape: (357, 32, 96, 1) (357, 32, 96, 1) test shape: (11424, 96, 1) (11424, 96, 1) mse:0.12426342070102692, mae:0.24697716534137726

    Similar results are observed with unfixed random seed and itr=3. Are there any other tips for reproducing this work?

    opened by XBR-1111 5
  • Zeros for Decoder Input

    I noticed that during training and testing you fill the decoder input with zeros using torch.zeros_like() for the future input values. Why do you do that? Wouldn't it make more sense to use teacher forcing, providing the decoder with the true future values? During inference the output would then need to be fed back. I'm not sure whether Auto-Correlation does not permit that, but for the vanilla Transformer version it should be possible.

    Appreciate your view on that. Thanks.
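
    For reference, a sketch of the decoder-input construction the question refers to (variable names are illustrative): the first label_len steps carry known history and the pred_len future steps are zero placeholders, so the whole horizon is produced in one forward pass without teacher forcing or step-by-step feedback.

        import torch

        batch_size, label_len, pred_len, n_vars = 32, 48, 24, 7
        batch_y = torch.randn(batch_size, label_len + pred_len, n_vars)       # ground-truth window

        placeholder = torch.zeros_like(batch_y[:, -pred_len:, :])             # zeros for the future steps
        dec_inp = torch.cat([batch_y[:, :label_len, :], placeholder], dim=1)  # (32, 72, 7)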

    opened by deeepwin 4
  • little details debug

    In run.py, line 57: parser.add_argument('--output_attention', action='store_true', help='whether to output attention in ecoder'). Did the author mean to write "whether to output attention in encoder" instead of "ecoder"?

    opened by Zero-coder 4
  • How to access preprocessing code for weather, illness, exchange..

    Thanks for the nice work.

    I wonder how to get the OT values in the weather, illness, and exchange datasets that this repository provides.

    Can I access the preprocessing code?

    Or, can I know how to get the OT values from the original data?

    opened by junwoopark92 4
  • Results for univariate experiments

    Hello, could you provide the settings for univariate prediction? The results I get are a bit different from those reported in the paper. I ran the code with the default settings.

    python run.py --is_training 1 --root_path ./data/ --data_path ETTm2.csv --model_id ETTm2_96_96 --model Autoformer --data ETTm2 --features S --seq_len 96 --label_len 48 --pred_len 96 --e_layers 2 --d_layers 1 --factor 3 --enc_in 1 --dec_in 1 --c_out 1 --des 'Exp' --itr 1

    mse:0.1019936352968216, mae:0.2465277761220932
    But the results in the paper are: mse:0.065, mae:0.189

    python run.py --is_training 1 --root_path ./data/ --data_path ETTm2.csv --model_id ETTm2_96_336 --model Autoformer --data ETTm2 --features S --seq_len 96 --label_len 48 --pred_len 336 --e_layers 2 --d_layers 1 --factor 3 --enc_in 1 --dec_in 1 --c_out 1 --des 'Exp' --itr 1

    mse:0.18364596366882324, mae:0.3304046392440796
    But the results in the paper are: mse:0.154, mae:0.305

    opened by ddz16 4
  • Choose proper moving average kernel for short input

    Hello! Thank you for your well-commented code! I'm currently using Autoformer on data with a very short input length, such as only 8 timestamps. I noticed that the default moving-average kernel size in the series decomposition part is 25, which may be too long for the input in this case. I tried some smaller kernels such as 3, 5, and 7, but the model turned out to be worse on the validation dataset. Do you have any suggestions for adjusting hyperparameters for short inputs? Any suggestion would be appreciated. Thank you!

    opened by PeihanDou 4
  • --itr argument

    Hello,

    I was trying multivariate forecasting using the Autoformer model.

    In run.py there is an argument --itr

    1. Is it used when there is more than one script in the .sh file?

    2. I have a single command in the script (.sh) file. When I set --itr to 2, the same experiment was repeated twice. However, I noticed that during the second round of testing, the error metrics were significantly lower. Can you please explain why?

    Thank you Niharika Joshi

    opened by Niharikajo 0
  • About seq/pred length

    Thanks to the authors for the excellent work on Autoformer! As a first-time user I have a few small questions and hope the authors can answer them. In an experiment on my own dataset of 10,000+ samples I used [seq_len:label_len:pred_len = 96:48:24 = 4:2:1] and got very good results over the entire test set. If I enlarge the dataset to 100k-200k+ samples (the prediction target fluctuates within a small range and its patterns of variation are similar), would the same ratio [seq_len:label_len:pred_len = 960:480:240] in theory achieve results over the entire test set as good as the original [96:48:24] setting? Or, if I use [seq_len:label_len:pred_len = 96:48:24] on this 100k-200k+ dataset and get very good results on the full test set, can the [96:48:24] results be taken to represent the expected performance of [960:480:240] on this dataset? Thanks!

    opened by SimyokH 0
  • getting error for freq a, needed for yearly granularity

    return np.vstack([feat(dates) for feat in time_features_from_frequency_str(freq)])
    

    File "<__array_function__ internals>", line 6, in vstack
    File "D:\autoformer\venv\lib\site-packages\numpy\core\shape_base.py", line 282, in vstack
      return _nx.concatenate(arrs, 0)
    File "<__array_function__ internals>", line 6, in concatenate
    ValueError: need at least one array to concatenate
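
    For reference, a self-contained reproduction of the error and one possible fallback (an assumption, not an official fix): with a yearly frequency there may be no sub-year time features to compute, so the list handed to np.vstack is empty.

        import numpy as np

        feature_arrays = []            # what a yearly ('a') frequency can end up producing
        try:
            np.vstack(feature_arrays)
        except ValueError as err:
            print(err)                 # "need at least one array to concatenate"

        def safe_stack(arrays, n_timestamps):
            # fall back to an empty (0, T) feature matrix when no time features apply
            return np.vstack(arrays) if arrays else np.empty((0, n_timestamps))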

    opened by Sanjeev97 0
Owner
THUML @ Tsinghua University
Machine Learning Group, School of Software, Tsinghua University