Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

Overview


Python 3.6 PyTorch 1.2 cuDNN 7.3.1 License CC BY-NC-SA

This is the original PyTorch implementation of Informer from the following paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Special thanks to Jieqi Peng (@cookieminions) for building this repo.



Figure 1. The architecture of Informer.

ProbSparse Attention

The self-attention scores form a long-tail distribution: the "active" queries account for the "head" scores, while the "lazy" queries fall into the "tail" area. ProbSparse Attention is designed to select the "active" queries rather than the "lazy" ones, so attention over only the Top-u queries forms a sparse Transformer guided by the query sparsity (probability) distribution. Why Top-u queries and not Top-u keys? The output of a self-attention layer is a re-representation of its input, formulated as a weighted combination of values according to the dot-product scores. Keeping the top queries while attending over all keys encourages a complete re-representation of the input's leading components, which is equivalent to selecting the "head" scores among all dot-product pairs. Choosing Top-u keys instead would merely preserve the trivial sum of values over the "long-tail" scores while wrecking the re-representation of the leading components.



Figure 2. The illustration of ProbSparse Attention.
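
For intuition only, here is a minimal sketch of the Top-u query selection described above, assuming single-head tensors of shape [batch, length, d]. It is not the repo's ProbSparse implementation: the paper estimates the query sparsity measurement on a randomly sampled subset of keys to reach O(L log L) cost, whereas this sketch scores every query against every key; the function name is made up for illustration.

import math
import torch
import torch.nn.functional as F

def probsparse_attention_sketch(Q, K, V, factor=5):
    """Keep only the top-u "active" queries for full attention; "lazy" queries
    fall back to the mean of V (encoder-style, no causal mask).
    Q: [B, L_Q, d], K: [B, L_K, d], V: [B, L_K, d]."""
    B, L_Q, d = Q.shape
    L_K = K.size(1)
    u = min(L_Q, int(math.ceil(factor * math.log(L_Q))))      # u = c * ln(L_Q)

    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)           # [B, L_Q, L_K]
    # Query sparsity measurement: max dot-product score minus the mean score.
    # "Active" (head) queries have a peaked score distribution, so M is large.
    M = scores.max(dim=-1).values - scores.mean(dim=-1)       # [B, L_Q]
    top_idx = M.topk(u, dim=-1).indices                       # [B, u]

    # Lazy queries: the trivial re-representation, i.e. the mean of the values.
    out = V.mean(dim=1, keepdim=True).expand(B, L_Q, d).clone()
    # Active queries: full softmax attention over all keys.
    top_scores = scores.gather(1, top_idx.unsqueeze(-1).expand(B, u, L_K))
    out.scatter_(1, top_idx.unsqueeze(-1).expand(B, u, d),
                 F.softmax(top_scores, dim=-1) @ V)
    return out

In the training commands below, --attn prob selects the ProbSparse variant, while --attn full falls back to canonical self-attention.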

Requirements

  • Python 3.6
  • matplotlib == 3.1.1
  • numpy == 1.19.4
  • pandas == 0.25.1
  • scikit_learn == 0.21.3
  • torch == 1.4.0

Dependencies can be installed using the following command:

pip install -r requirements.txt

Data

The ETT dataset used in the paper can be downloaded from the ETDataset repo. The required data files should be placed in the data/ETT/ folder. A demo slice of the ETT data is illustrated in the following figure. Note that the input of each dataset is zero-mean normalized in this implementation.



Figure 3. An example of the ETT data.
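
For reference, the snippet below is a minimal sketch of what "zero-mean normalized" means here, assuming the usual convention of fitting the scaler on the training split and applying it to every split. The class name is illustrative and this is not the repo's exact scaler, although the data loaders expose a matching inverse_transform (used in one of the comments below).

import numpy as np

class StandardScalerSketch:
    """Zero-mean / unit-variance scaling: fit on the train split, apply to all splits."""
    def fit(self, data):
        self.mean = data.mean(0)
        self.std = data.std(0)
        return self

    def transform(self, data):
        return (data - self.mean) / (self.std + 1e-8)

    def inverse_transform(self, data):
        return data * (self.std + 1e-8) + self.mean

# Stand-in for 96 hourly rows of the 7 ETT columns (6 load features plus the OT target).
raw = np.random.randn(96, 7)
scaler = StandardScalerSketch().fit(raw)
normed = scaler.transform(raw)   # this normalized array is what the model consumes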

Usage

Commands for training and testing the model with ProbSparse self-attention on the ETTh1, ETTh2, and ETTm1 datasets, respectively:

# ETTh1
python -u main_informer.py --model informer --data ETTh1 --attn prob

# ETTh2
python -u main_informer.py --model informer --data ETTh2 --attn prob

# ETTm1
python -u main_informer.py --model informer --data ETTm1 --attn prob

For more details on the parameters, please refer to main_informer.py.

Results



Figure 4. Univariate forecasting results.



Figure 5. Multivariate forecasting results.

FAQ

If you run into an error like RuntimeError: The size of tensor a (98) must match the size of tensor b (96) at non-singleton dimension 1, check your torch version or modify the Conv1d code of TokenEmbedding in models/embed.py, since the behavior of Conv1d's circular padding mode changed across torch versions. A sketch of one possible modification is shown below.
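
The following is a minimal sketch of such a modification, assuming a kernel size of 3; it is not the repo's exact TokenEmbedding from models/embed.py. Instead of relying on Conv1d's built-in padding_mode='circular', whose output length differs across torch releases, it applies circular padding manually so the convolution always preserves the sequence length.

import torch.nn as nn
import torch.nn.functional as F

class TokenEmbeddingSketch(nn.Module):
    def __init__(self, c_in, d_model):
        super().__init__()
        # No built-in padding: we pad manually in forward() to stay version-independent.
        self.conv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
                              kernel_size=3, padding=0, bias=False)

    def forward(self, x):
        # x: [batch, seq_len, c_in] -> Conv1d expects [batch, c_in, seq_len]
        x = x.permute(0, 2, 1)
        # Circular padding of 1 on each side keeps seq_len unchanged after a k=3 conv,
        # avoiding the "size of tensor a (98) must match ... (96)" mismatch.
        x = F.pad(x, (1, 1), mode='circular')
        return self.conv(x).transpose(1, 2)   # [batch, seq_len, d_model]

Alternatively, pinning torch to the version listed in the requirements should avoid the mismatch without any code changes.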

Citation

If you find this repository useful in your research, please consider citing the following paper:

@inproceedings{haoyietal-informer-2021,
  author    = {Haoyi Zhou and
               Shanghang Zhang and
               Jieqi Peng and
               Shuai Zhang and
               Jianxin Li and
               Hui Xiong and
               Wancai Zhang},
  title     = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
  booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021},
  pages     = {online},
  publisher = {{AAAI} Press},
  year      = {2021},
}

Contact

If you have any questions, feel free to contact Haoyi Zhou via email ([email protected]) or GitHub issues. Pull requests are highly welcome!

Comments
  • data_stamp = time_features(df_stamp, timeenc=self.timeenc, freq=self.freq)

    data_stamp = time_features(df_stamp, timeenc=self.timeenc, freq=self.freq)

    Hi, could you explain why this is done: data_stamp = time_features(df_stamp, timeenc=self.timeenc, freq=self.freq)?

    Also, is your encoder input Xen in the code a sequence of length 96?

    And in Xde = {Xtoken, X0}, what do Xtoken and X0 mean?

    Could you please give some guidance?

    opened by Phil610351 25
  • Reproducing the results for ETTh2

    Reproducing the results for ETTh2

    Hello,

    Thanks a lot for publishing your results and code; I enjoyed reading the paper. While trying to reproduce the paper's results, the output was way off, especially for the ETTh2 dataset (I ran it with the same configuration as in the Colab notebook).

    testing : informer_ETTh2_ftM_sl96_ll48_pl24_dm512_nh8_el3_dl2_df512_atprob_ebtimeF_dtTrue_exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    test 2857
    test shape: (89, 32, 24, 7) (89, 32, 24, 7)
    test shape: (2848, 24, 7) (2848, 24, 7)
    mse:0.8689931831590327, mae:0.7622690107174594

    Could you please let me know if you used different hyperparameters, or what I am doing wrong?

    Thanks in advance.

    Regards, Kiran

    opened by 18kiran12 25
  • IndexError when using 'learned' or 'fixed' in args.embed

    IndexError when using 'learned' or 'fixed' in args.embed

    args.model = 'informerstack' # model of experiment, options: [informer, informerstack, informerlight(TBD)]

    args.data = 'custom' # data
    args.root_path = './' # root path of data file
    args.data_path = 'test.csv' # data file
    args.features = 'S' # forecasting task, options:[M, S, MS(TBD)]; M: multivariate predict multivariate, S: univariate predict univariate, MS: multivariate predict univariate
    args.target = 'target' # target feature in S or MS task
    args.freq = 't' # freq for time features encoding

    args.seq_len = 128 # input sequence length of Informer encoder
    args.label_len = 96 # start token length of Informer decoder
    args.pred_len = 15 # prediction sequence length

    args.enc_in = 1 # encoder input size (number of features in input)
    args.dec_in = 1 # decoder input size (number of features)
    args.c_out = 7 # output size (output dimension before FN)
    args.factor = 5 # probsparse attn factor
    args.d_model = 512 # dimension of model
    args.n_heads = 8 # num of heads
    args.e_layers = 3 # num of encoder layers
    args.d_layers = 2 # num of decoder layers
    args.d_ff = 512 # dimension of fcn in model
    args.dropout = 0.05 # dropout
    args.attn = 'full' # attention used in encoder, options:[prob, full]
    args.embed = 'fixed' # time features encoding, options:[timeF, fixed, learned]
    args.activation = 'gelu' # activation
    args.distil = True # whether to use distilling in encoder
    args.output_attention = False # whether to output attention in encoder

    args.batch_size = 64
    args.learning_rate = 0.0001
    args.loss = 'mse'
    args.lradj = 'type1'

    args.num_workers = 0
    args.itr = 1
    args.train_epochs = 6
    args.patience = 3
    args.des = 'exp'

    I trained with the parameters above and got the following error:


    IndexError                                Traceback (most recent call last)
    in
          9 # train
         10 print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
    ---> 11 exp.train(setting)
         12
         13 # test

    ~/max/Informer2020/exp/exp_informer.py in train(self, setting)
        169                 # encoder - decoder
        170                 if self.args.output_attention:
    --> 171                     outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)[0]
        172                 else:
        173                     outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)

    ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),

    ~/max/Informer2020/models/model.py in forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, enc_self_mask, dec_self_mask, dec_enc_mask)
        144     def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec,
        145                 enc_self_mask=None, dec_self_mask=None, dec_enc_mask=None):
    --> 146         enc_out = self.enc_embedding(x_enc, x_mark_enc)
        147         enc_out, attns = self.encoder(enc_out, attn_mask=enc_self_mask)
        148

    ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),

    ~/max/Informer2020/models/embed.py in forward(self, x, x_mark)
        105
        106     def forward(self, x, x_mark):
    --> 107         x = self.value_embedding(x) + self.position_embedding(x) + self.temporal_embedding(x_mark)
        108
        109         return self.dropout(x)

    ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),

    ~/max/Informer2020/models/embed.py in forward(self, x)
         75         x = x.long()
         76
    ---> 77         minute_x = self.minute_embed(x[:,:,4]) if hasattr(self, 'minute_embed') else 0.
         78         hour_x = self.hour_embed(x[:,:,3])
         79         weekday_x = self.weekday_embed(x[:,:,2])

    IndexError: index 4 is out of bounds for dimension 2 with size 4

    If I use 'timeF', everything works fine.

    discussion 
    opened by lzmax888 21
  • About inputs to the decoder

    About inputs to the decoder

    @zhouhaoyi Suppose I want to input a length-28 sequence X_i, ..., X_{i+27} and predict the length-28 sequence X_{i+28}, ..., X_{i+55}. Then the encoder will take (X_i, ..., X_{i+27}) as input, and the decoder will take (X_i, ..., X_{i+27}, 0, ..., 0). Is my understanding correct? Is this what you meant in the paper when you said you concatenate the start token and the zero placeholder for the target?

    opened by puzzlecollector 15
  • Questions about training the model on a custom dataset

    Questions about training the model on a custom dataset

    Hi, I am using this model for a forecasting task (long-horizon category sales forecasting) on a custom dataset and ran into the following problems; I would really appreciate some answers. Key parameters: features=MS, freq=d, seq_len=84, label_len=28, pred_len=14. 1. When training with train_epochs=500 and patience=30, the best model is obtained within the first 2 epochs; after that the val loss and test loss stop decreasing and early stopping kicks in. How can this be improved? 2. Is the best model selected via self.verbose in the save_checkpoint function in ./utils/tools? Could you explain this part? Also, do you have any advice on setting the patience parameter when the number of epochs is large? 3. What are the differences between informer, informerstack, and informerlight?

    opened by XinWangTIM 11
  • EarlyStopping's occurrence

    EarlyStopping's occurrence

    Hello,

    I am currently trying to run your code to see how it works, but every time the code terminates too soon because of EarlyStopping. The resulting MSE and MAE were also quite off compared with the results shown here. I have had almost no involvement with programming for a long time, so my knowledge is too limited to solve the problem myself. That said, I did try setting the EarlyStopping patience to 100, but the code still ended on its own even though the EarlyStopping counter was only at 3 out of 100. Also, at the start the code shows Use GPU: cuda: 0, which made me worry that training might be running on the CPU; when I checked the Task Manager, GPU usage was at almost 100%, so I believed it was fine, but the fact that the code terminates too early every time still makes me wonder whether it is using the GPU properly. It would be great if you could provide me with some help on this.

    In case any information on specs is needed: OS: Windows Server 2019 64-bit; Processor: Intel Xeon CPU @ 2.20GHz; Memory: 30GB; GPU: Nvidia Tesla V100

    Thank you in advance. Let me know if there is any additional information you need.

    opened by alexzhang0825 10
  • Is there a way to deal with categorical feature?

    Is there a way to deal with categorical feature?

    I want to train the model with scalar or categorical features,

    but I can't find a way to handle categorical features in Informer.

    Is it possible to use categorical features with Informer?

    opened by kja815 9
  • Colab error on ETTm1

    Colab error on ETTm1

    Hi,

    I was trying to run the model on the provided Colab with the ETTm1 dataset, and I ran into the error RuntimeError: mat1 dim 1 must match mat2 dim 0:

    /content/Informer2020/exp/exp_informer.py in train(self, setting)
        171                     outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)[0]
        172                 else:
    --> 173                     outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
        174 
        175                 f_dim = -1 if self.args.features=='MS' else 0
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    /content/Informer2020/models/model.py in forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, enc_self_mask, dec_self_mask, dec_enc_mask)
         67     def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, 
         68                 enc_self_mask=None, dec_self_mask=None, dec_enc_mask=None):
    ---> 69         enc_out = self.enc_embedding(x_enc, x_mark_enc)
         70         enc_out, attns = self.encoder(enc_out, attn_mask=enc_self_mask)
         71 
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    /content/Informer2020/models/embed.py in forward(self, x, x_mark)
        105 
        106     def forward(self, x, x_mark):
    --> 107         x = self.value_embedding(x) + self.position_embedding(x) + self.temporal_embedding(x_mark)
        108 
        109         return self.dropout(x)
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    /content/Informer2020/models/embed.py in forward(self, x)
         92 
         93     def forward(self, x):
    ---> 94         return self.embed(x)
         95 
         96 class DataEmbedding(nn.Module):
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)
         91 
         92     def forward(self, input: Tensor) -> Tensor:
    ---> 93         return F.linear(input, self.weight, self.bias)
         94 
         95     def extra_repr(self) -> str:
    
    /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
       1690         ret = torch.addmm(bias, input, weight.t())
       1691     else:
    -> 1692         output = input.matmul(weight.t())
       1693         if bias is not None:
       1694             output += bias
    
    RuntimeError: mat1 dim 1 must match mat2 dim 0
    

    All I changed was:

    args.data = 'ETTm1' # data
    

    Am I missing anything in the configuration?

    opened by thisiscam 9
  • Autocorrelation

    Autocorrelation

    Hi! I want to predict a government bond with very high autocorrelation (ACF close to one up to a lag of 50). When I use your model I find a very similar pattern between the true values and the predictions in the test phase, but I think I shouldn't, since it should be impossible to predict it using only past values (it should behave like a random walk). Is this a problem caused by autocorrelation? Do you have any advice on transforming my target, or another solution or suggestion? I tried using the day-to-day difference as the target, but in that case there are a few high peaks that the model doesn't predict.

    Thank you!

    opened by laviniarossimori 8
  • Some basic questions

    Some basic questions

    First of all, thank you very much for sharing this work; I have benefited a lot from it. I have a few questions I would like to ask, and I also look forward to discussing them with anyone who sees this post. 1. In the data loading files, do the settings of seq_len, label_len, and pred_len follow any particular rules? 2. What is the purpose of the timeenc attribute? Is it for measuring the time cost? 3. When splitting the dataset, what is the criterion for border1s and border2s, and how is 123024 obtained?

    opened by SSSUNSHINNING 8
  • Troubles in application

    Troubles in application

    I would like to make use of your work in a real-world application. Your paper looks very promising, but unfortunately I have some problems with the implementation.

    Before I proceed further, I would like to inform you that I have built Informer2020 as a .whl package (I did it using a non-public Azure DevOps workspace, but for discussion purposes I uploaded it on GitHub: https://github.com/jkarolczak/informer-whl). I removed from the package files and directories like img, scripts, exp, and main_informer.py, which do not seem useful in a real-world application. If you wish, I can put this file into the PyPI index. Please note that you are listed as the authors.

    I use Jupyter Notebook for experiments. I start with the installation and the necessary imports:

    !pip install informer2020-0.0.1-py3-none-any.whl
    import informer2020
    from informer2020.models.model import Informer
    from informer2020.data.data_loader import Dataset_Custom
    import torch
    import pandas as pd
    import matplotlib.pyplot as plt
    from matplotlib.pyplot import figure
    

    Later on, I define the necessary parameters and instantiate an object of the Informer class. I follow your naming convention.

    enc_in = 1
    dec_in = 1
    pred_len = 24
    c_out = 1
    seq_len = 96
    label_len = 48
    out_len = 24
    size = (seq_len, label_len, pred_len)
    
    model = Informer(enc_in, dec_in, c_out, seq_len, label_len, out_len)
    

    The code above contains the first crucial issue: setting enc_in and/or dec_in to values greater than one returns an error indicating that the model cannot execute because of mismatched dimensions.

    Later on, I prepare the data; to make it easier, I use the file from the paper:

    train_dataset = Dataset_Custom('data/', size=size, data_path='ETTH1-train.csv', freq='h', flag='train', target='OT')
    train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
    

    I perform training with a fairly standard training loop.

    epochs = 30
    lr = 0.0001
    loss_fn = torch.nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr)
    for e in range(epochs):
        for b, batch in enumerate(train_data_loader):
            batch_x, batch_y, batch_x_mark, batch_y_mark = batch
            dec_inp = torch.zeros([batch_y.shape[0], pred_len, batch_y.shape[-1]]).float()
            dec_inp = torch.cat([batch_y[:,:label_len,:], dec_inp], dim=1).float()
            y_hat = model(batch_x.float(), batch_x_mark.float(), dec_inp.float(), batch_y_mark.float())
            batch_y = batch_y[:,-pred_len:, 0:]
            loss = loss_fn(y_hat, batch_y.float())
            optimizer.zero_grad()
            loss.backward(retain_graph=True)
            optimizer.step()
            print('epoch: {:8} | batch: {:8} | loss:{:14.4f}'.format(e, b, loss.item()), end='\r')
    

    The results of such training are very unsatisfying, as the MSE remains very large. A visual inspection of the last batch of the training set is also quite disappointing.

    image

    Below I show how I obtain the plot. I know it's a rather crude way, but I want to fix the previous issues before spending time on making it more elegant and easier to apply in real-world applications.

    test_dataset = Dataset_Custom('data/', size=size, data_path='ETTH1-train.csv', freq='h', flag='test', target='OT')
    test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=len(test_dataset), drop_last=False)
    _ = model.eval()
    
    for batch in test_data_loader:
        batch_x, batch_y, batch_x_mark, batch_y_mark = batch    
        dec_inp = torch.zeros([batch_y.shape[0], pred_len, batch_y.shape[-1]]).float()
        dec_inp = torch.cat([batch_y[:,:label_len,:], dec_inp], dim=1).float()
        batch_y = batch_y[:,-pred_len:, 0:]
        y_hat = model(batch_x.float(), batch_x_mark.float(), dec_inp.float(), batch_y_mark.float())
    test_ds = pd.read_csv('data/ETTH1-train.csv')
    y = test_ds['OT'][-pred_len:]
    y_hat = train_dataset.inverse_transform(torch.flatten(y_hat[-1]).detach().numpy())
    fig = figure(figsize=(16, 9))
    x = test_ds['date'][-pred_len:]
    plt.plot(x, y)
    plt.plot(x, y_hat)
    

    Could you give me some help and guidance in repairing the code? I would be very grateful for your advice. Please also tell me whether or not you would like the package to be indexed on PyPI.

    opened by jkarolczak 7
  • About drop_last

    About drop_last

    When migrating the model to our own data, we found that when drop_last for the test set is set to False everything runs normally, but when it is set to True as in the original code, enumerate(vali_loader)/enumerate(test_loader) fails to iterate over the dataloader. We don't know why, and we wonder whether anyone else has run into the same problem. (Our parameter settings may be a bit unusual: our dataset is quite small, so we use seq_len=12, and label_len and pred_len are both 1.)

    opened by navfour 0
  • Decoder Input

    Decoder Input

    Hey guys, I really enjoyed reading the paper, and thanks for publishing the source code. I am working with Informer on a multivariate problem with 94 features and one output target, and I have a question about model inference/prediction. In the _process_one_batch method, the decoder input is first initialized with zeros and then concatenated with batch_y. I removed the if conditions since I am working with padding==0. I am training with MS, but I am not passing the target variable in the input: I changed the read_data method so that seq_x contains only the features.

    # decoder input
    dec_inp = torch.zeros([batch_y.shape[0], self.args.pred_len, batch_y.shape[-1]]).float()
    dec_inp = torch.cat([batch_y[:,:self.args.label_len,:], dec_inp], dim=1).float().to(self.device)
    

    My question is: why are we concatenating the batch_y values, given that at inference time in a real deployment we will not have batch_y? I trained a model and the results look very promising with the above decoder input, but if I don't use the batch_y concatenation the results are not good.

    opened by HasnainKhanNiazi 1