The GitHub repository for the paper: "Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction".

Overview

SCINet

This is the original PyTorch implementation of the following work: Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction. If you find this repository useful for your work, please consider citing it as follows:

@article{liu2021SCINet,
  title={Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction},
  author={Liu, Minhao and Zeng, Ailing and Lai, Qiuxia and Xu, Qiang},
  journal={arXiv preprint arXiv:2106.09305},
  year={2021}
}

Updates

[2021-09-17] SCINet v1.0 is released!

Features

  • Support 11 popular time-series forecasting datasets.

  • Provide all training logs.

To-do items

  • Integrate GNN-based spatial models into SCINet for better performance and higher efficiency on spatial-temporal time series. Our preliminary results show that this could bring considerable gains in prediction accuracy on some datasets (e.g., PEMSxx).
  • Generate probabilistic forecasting results.

Stay tuned!

Dataset

We conduct experiments on 11 popular time-series datasets, namely Electricity Transformer Temperature (ETTh1, ETTh2 and ETTm1), Traffic, Solar-Energy, Electricity, Exchange-Rate, and PEMS (PEMS03, PEMS04, PEMS07 and PEMS08), covering the power, energy, finance and traffic domains.

Overall information of the 11 datasets

| Datasets | Variants | Timesteps | Granularity | Start time | Task Type |
|---|---|---|---|---|---|
| ETTh1 | 7 | 17,420 | 1 hour | 7/1/2016 | Multi-step |
| ETTh2 | 7 | 17,420 | 1 hour | 7/1/2016 | Multi-step |
| ETTm1 | 7 | 69,680 | 15 min | 7/1/2016 | Multi-step |
| PEMS03 | 358 | 26,209 | 5 min | 5/1/2012 | Multi-step |
| PEMS04 | 307 | 16,992 | 5 min | 7/1/2017 | Multi-step |
| PEMS07 | 883 | 28,224 | 5 min | 5/1/2017 | Multi-step |
| PEMS08 | 170 | 17,856 | 5 min | 3/1/2012 | Multi-step |
| Traffic | 862 | 17,544 | 1 hour | 1/1/2015 | Single-step |
| Solar-Energy | 137 | 52,560 | 1 hour | 1/1/2006 | Single-step |
| Electricity | 321 | 26,304 | 1 hour | 1/1/2012 | Single-step |
| Exchange-Rate | 8 | 7,588 | 1 day | 1/1/1990 | Single-step |

Get started

Requirements

Install the required packages first:

cd SCINet
conda create -n scinet python=3.8
conda activate scinet
pip install -r requirements.txt

Dataset preparation

All datasets can be downloaded here. To prepare all datasets at once, simply run:

source prepare_data.sh

The data directory structure is as follows.

./
└── datasets/
    ├── ETT-data
    │   ├── ETTh1.csv
    │   ├── ETTh2.csv
    │   └── ETTm1.csv
    ├── financial
    │   ├── electricity.txt
    │   ├── exchange_rate.txt
    │   ├── solar_AL.txt
    │   └── traffic.txt
    └── PEMS
        ├── PEMS03.npz
        ├── PEMS04.npz
        ├── PEMS07.npz
        └── PEMS08.npz
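
To sanity-check the prepared data, each format can be inspected directly. A minimal sketch, assuming the layout above (the 'data' key inside the PEMS .npz files and the comma delimiter of the financial .txt files are assumptions based on the common releases of these datasets):

import numpy as np
import pandas as pd

# ETT: CSV files with a date column plus the measured variables
ett = pd.read_csv('./datasets/ETT-data/ETTh1.csv')
print(ett.shape, list(ett.columns)[:3])

# financial: plain-text matrices of shape (timesteps, variants)
fin = np.loadtxt('./datasets/financial/exchange_rate.txt', delimiter=',')
print(fin.shape)  # e.g. (7588, 8)

# PEMS: compressed NumPy archives
pems = np.load('./datasets/PEMS/PEMS08.npz')
print(pems.files, pems['data'].shape)  # the key name may differ per release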

Run training code

To facilitate reproduction, we provide detailed training logs for the above datasets here. You can check the hyperparameters, training loss and test results for each epoch in these logs.

We follow the same experimental settings as StemGNN for the PEMS03, PEMS04, PEMS07 and PEMS08 datasets; MTGNN for the Solar, Electricity, Traffic and financial (Exchange-Rate) datasets; and Informer for the ETTh1, ETTh2 and ETTm1 datasets. The detailed training commands are given below.

For the PEMS datasets (all use input length 12 and output length 12):

pems03

python run_pems.py --dataset PEMS03 --hidden-size 0.0625 --dropout 0.25 --model_name pems03_h0.0625_dp0.25

pems04

python run_pems.py --dataset PEMS04 --hidden-size 0.0625 --dropout 0 --model_name pems04_h0.0625_dp0

pems07

python run_pems.py --dataset PEMS07 --hidden-size 0.03125 --dropout 0.25 --model_name pems07_h0.03125_dp0.25

pems08

python run_pems.py --dataset PEMS08 --hidden-size 1 --dropout 0.5 --model_name pems08_h1_dp0.5

PEMS Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
|---|---|---|---|
| dataset | Name of dataset | N/A | PEMS08 |
| horizon | Horizon | Horizon | 12 |
| window_size | Look-back window | Look-back window | 12 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 2 |
| stacks | Number of SCINet blocks | K | 1 |
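
Note that hidden-size is an expansion ratio rather than an absolute channel count: the convolutions inside each SCINet block use roughly in_channels * h channels, which is why fractional values such as 0.0625 appear for the wide PEMS inputs (for example, the model printout in the issues below shows Conv1d(1, 4, ...) for hidden_size 4.0 on univariate input). A minimal sketch of the idea; the exact rounding used in the code is an assumption:

def hidden_channels(in_channels: int, h: float) -> int:
    # hidden expansion: internal convolution width scales with the input width
    return max(1, int(in_channels * h))

print(hidden_channels(358, 0.0625))  # PEMS03: roughly 22 internal channels
print(hidden_channels(1, 4.0))       # univariate ETT run: 4 channels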

For the Solar dataset:

predict 3

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 3 --hidden-size 2  --lastWeight 0.5 --stacks 1 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 --model_name so_I160_o3_lr1e-4_bs1024_dp0.25_h2_s1l4_w0.5

predict 6

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 6 --hidden-size 2 --lastWeight 0.5 --stacks 1 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 --model_name so_I160_o6_lr1e-4_bs1024_dp0.25_h2_s1l4_w0.5 

predict 12

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 12 --hidden-size 2 --lastWeight 0.5 --stacks 2 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 --model_name so_I160_o12_lr1e-4_bs1024_dp0.25_h2_s2l4_w0.5

predict 24

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 24 --hidden-size 2 --lastWeight 0.5 --stacks 1 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 --model_name so_I160_o24_lr1e-4_bs1024_dp0.25_h2_s1l4_w0.5
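
To reproduce all four Solar horizons in one go, the commands above can be driven from a small loop. This is only a convenience wrapper around the exact flags shown; the (horizon, stacks) pairs mirror the four commands:

import subprocess

for horizon, stacks in [(3, 1), (6, 1), (12, 2), (24, 1)]:
    cmd = (
        f"python run_financial.py --dataset_name solar_AL --window_size 160 "
        f"--horizon {horizon} --hidden-size 2 --lastWeight 0.5 --stacks {stacks} "
        f"--levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 "
        f"--model_name so_I160_o{horizon}_lr1e-4_bs1024_dp0.25_h2_s{stacks}l4_w0.5"
    )
    subprocess.run(cmd, shell=True, check=True)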

For the Electricity dataset:

predict 3

python run_financial.py --dataset_name electricity --window_size 168 --horizon 3 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o3_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321

predict 6

python run_financial.py --dataset_name electricity --window_size 168 --horizon 6 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o6_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321

predict 12

python run_financial.py --dataset_name electricity --window_size 168 --horizon 12 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o12_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321

predict 24

python run_financial.py --dataset_name electricity --window_size 168 --horizon 24 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o24_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321

For the Traffic dataset (warning: 20,000 MiB+ memory usage!):

predict 3

python run_financial.py --dataset_name traffic --window_size 168 --horizon 3 --hidden-size 2 --single_step 1 --stacks 2 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I168_o3_lr5e-4_bs16_dp0.25_h2_s2l3_w1.0

predict 6

python run_financial.py --dataset_name traffic --window_size 168 --horizon 6 --hidden-size 2 --single_step 1 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I168_o6_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0

predict 12

python run_financial.py --dataset_name traffic --window_size 168 --horizon 12 --hidden-size 1 --single_step 1 --stacks 2 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I168_o12_lr5e-4_bs16_dp0.25_h1_s2l3_w1.0

predict 24

python run_financial.py --dataset_name traffic --window_size 168 --horizon 24 --hidden-size 2 --single_step 1 --stacks 2 --levels 2 --lr 5e-4 --dropout 0.5 --batch_size 16 --model_name traf_I168_o24_lr5e-4_bs16_dp0.5_h2_s2l2_w1.0

For the Exchange-Rate dataset:

predict 3

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 3 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o3_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --epochs 150

predict 6

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 6 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o6_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --epochs 150

predict 12

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 12 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o12_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --epochs 150

predict 24

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 24 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 7e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o24_lr7e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --epochs 150

Financial Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
|---|---|---|---|
| dataset_name | Name of dataset | N/A | exchange_rate |
| horizon | Horizon | Horizon | 3 |
| window_size | Look-back window | Look-back window | 168 |
| batch_size | Batch size | batch size | 8 |
| lr | Learning rate | learning rate | 5e-3 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 3 |
| stacks | Number of SCINet blocks | K | 1 |
| lastWeight | Loss weight of the last frame | Loss weight (λ) | 1.0 |
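
lastWeight (λ) controls how strongly the final predicted frame is weighted in the training loss. As an illustration only, not the repository's exact formulation, here is one way such a weighting can be applied to an L1 loss:

import torch

def weighted_l1(pred, target, last_weight=1.0):
    # pred, target: (batch, horizon, variables); up-weight the last frame
    weights = torch.ones(pred.size(1))
    weights[-1] = last_weight
    per_step = (pred - target).abs().mean(dim=(0, 2))  # one value per horizon step
    return (per_step * weights).sum() / weights.sum()

loss = weighted_l1(torch.randn(8, 24, 137), torch.randn(8, 24, 137), last_weight=0.5)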

For the ETTh1 dataset:

multivariate, out 24

python run_ETTh.py --data ETTh1 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3

multivariate, out 48

python run_ETTh.py --data ETTh1 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 0.009 --batch_size 16 --dropout 0.25 --model_name etth1_M_I96_O48_lr0.009_bs16_dp0.25_h4_s1l3

multivariate, out 168

python run_ETTh.py --data ETTh1 --features M  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 3 --lr 5e-4 --batch_size 32 --dropout 0.5 --model_name etth1_M_I336_O168_lr5e-4_bs32_dp0.5_h4_s1l3

multivariate, out 336

python run_ETTh.py --data ETTh1 --features M  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 1e-4 --batch_size 512 --dropout 0.5 --model_name etth1_M_I336_O336_lr1e-4_bs512_dp0.5_h1_s1l4

multivariate, out 720

python run_ETTh.py --data ETTh1 --features M  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 1 --stacks 1 --levels 5 --lr 5e-5 --batch_size 256 --dropout 0.5 --model_name etth1_M_I736_O720_lr5e-5_bs256_dp0.5_h1_s1l5

Univariate, out 24

python run_ETTh.py --data ETTh1 --features S  --seq_len 64 --label_len 24 --pred_len 24 --hidden-size 8 --stacks 1 --levels 3 --lr 0.007 --batch_size 64 --dropout 0.25 --model_name etth1_S_I64_O24_lr0.007_bs64_dp0.25_h8_s1l3

Univariate, out 48

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 4 --lr 0.0001 --batch_size 8 --dropout 0.5 --model_name etth1_S_I720_O48_lr0.0001_bs8_dp0.5_h4_s1l4

Univariate, out 168

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 4 --lr 5e-5 --batch_size 8 --dropout 0.5 --model_name etth1_S_I720_O168_lr5e-5_bs8_dp0.5_h4_s1l4

Univariate, out 336

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 1e-3 --batch_size 128 --dropout 0.5 --model_name etth1_S_I720_O336_lr1e-3_bs128_dp0.5_h1_s1l4

Univariate, out 720

python run_ETTh.py --data ETTh1 --features S  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-4 --batch_size 32 --dropout 0.5 --model_name etth1_S_I736_O720_lr1e-5_bs32_dp0.5_h4_s1l5

For the ETTh2 dataset:

multivariate, out 24

python run_ETTh.py --data ETTh2 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 8 --stacks 1 --levels 3 --lr 0.007 --batch_size 16 --dropout 0.25 --model_name etth2_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3

multivariate, out 48

python run_ETTh.py --data ETTh2 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 4 --lr 0.007 --batch_size 4 --dropout 0.5 --model_name etth2_M_I96_O48_lr7e-3_bs4_dp0.5_h4_s1l4

multivariate, out 168

python run_ETTh.py --data ETTh2 --features M  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 0.5 --stacks 1 --levels 4 --lr 5e-5 --batch_size 16 --dropout 0.5 --model_name etth2_M_I336_O168_lr5e-5_bs16_dp0.5_h0.5_s1l4

multivariate, out 336

python run_ETTh.py --data ETTh2 --features M  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 5e-5 --batch_size 128 --dropout 0.5 --model_name etth2_M_I336_O336_lr5e-5_bs128_dp0.5_h1_s1l4

multivariate, out 720

python run_ETTh.py --data ETTh2 --features M  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-5 --batch_size 32 --dropout 0.5 --model_name etth2_M_I736_O720_lr1e-5_bs32_dp0.5_h4_s1l5

Univariate, out 24

python run_ETTh.py --data ETTh2 --features S  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.001 --batch_size 16 --dropout 0 --model_name etth2_S_I48_O24_lr1e-3_bs16_dp0_h4_s1l3

Univariate, out 48

python run_ETTh.py --data ETTh2 --features S  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 2 --levels 4 --lr 0.001 --batch_size 32 --dropout 0.5 --model_name etth2_S_I96_O48_lr1e-3_bs32_dp0.5_h4_s2l4

Univariate, out 168

python run_ETTh.py --data ETTh2 --features S  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 3 --lr 1e-4 --batch_size 8 --dropout 0 --model_name etth2_S_I336_O168_lr1e-4_bs8_dp0_h4_s1l3

Univariate, out 336

python run_ETTh.py --data ETTh2 --features S  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 8 --stacks 1 --levels 3 --lr 5e-4 --batch_size 512 --dropout 0.5 --model_name etth2_S_I336_O336_lr5e-4_bs512_dp0.5_h8_s1l3

Univariate, out 720

python run_ETTh.py --data ETTh2 --features S  --seq_len 720 --label_len 720 --pred_len 720 --hidden-size 8 --stacks 1 --levels 3 --lr 1e-5 --batch_size 128 --dropout 0.6 --model_name etth2_S_I736_O720_lr1e-5_bs128_dp0.6_h8_s1l3

For the ETTm1 dataset:

multivariate, out 24

python run_ETTh.py --data ETTm1 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3

multivariate, out 48

python run_ETTh.py --data ETTm1 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 2 --levels 4 --lr 0.001 --batch_size 16 --dropout 0.5 --model_name ettm1_M_I96_O48_lr1e-3_bs16_dp0.5_h4_s2l4

multivariate, out 96

python run_ETTh.py --data ETTm1 --features M  --seq_len 384 --label_len 96 --pred_len 96 --hidden-size 0.5 --stacks 2 --levels 4 --lr 5e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I384_O96_lr5e-5_bs32_dp0.5_h0.5_s2l4

multivariate, out 288

python run_ETTh.py --data ETTm1 --features M  --seq_len 672 --label_len 288 --pred_len 288 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I672_O288_lr1e-5_bs32_dp0.5_h0.5_s1l5

multivariate, out 672

python run_ETTh.py --data ETTm1 --features M  --seq_len 672 --label_len 672 --pred_len 672 --hidden-size 4 --stacks 2 --levels 5 --lr 1e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I672_O672_lr1e-5_bs32_dp0.5_h4_s2l5

Univariate, out 24

python run_ETTh.py --data ETTm1 --features S  --seq_len 96 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 4 --lr 0.001 --batch_size 8 --dropout 0 --model_name ettm1_S_I96_O24_lr1e-3_bs8_dp0_h4_s1l4

Univariate, out 48

python run_ETTh.py --data ETTm1 --features S  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 0.0005 --batch_size 16 --dropout 0 --model_name ettm1_S_I96_O48_lr5e-4_bs16_dp0_h4_s1l3

Univariate, out 96

python run_ETTh.py --data ETTm1 --features S  --seq_len 384 --label_len 96 --pred_len 96 --hidden-size 2 --stacks 1 --levels 4 --lr 1e-5 --batch_size 8 --dropout 0 --model_name ettm1_S_I384_O96_lr1e-5_bs8_dp0_h2_s1l4

Univariate, out 288

python run_ETTh.py --data ETTm1 --features S  --seq_len 384 --label_len 288 --pred_len 288 --hidden-size 4 --stacks 1 --levels 4 --lr 1e-5 --batch_size 64 --dropout 0 --model_name ettm1_S_I384_O288_lr1e-5_bs64_dp0_h4_s1l4

Univariate, out 672

python run_ETTh.py --data ETTm1 --features S  --seq_len 672 --label_len 672 --pred_len 672 --hidden-size 1 --stacks 1 --levels 5 --lr 1e-4 --batch_size 32 --model_name ettm1_S_I672_O672_lr1e-4_bs32_dp0.5_h1_s1l5

ETT Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
|---|---|---|---|
| root_path | Root path of the subdatasets | N/A | './datasets/ETT-data/ETT/' |
| data | Subdataset | N/A | ETTh1 |
| pred_len | Horizon | Horizon | 48 |
| seq_len | Look-back window | Look-back window | 96 |
| batch_size | Batch size | batch size | 32 |
| lr | Learning rate | learning rate | 0.0001 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 3 |
| stacks | Number of SCINet blocks | K | 1 |

Special Constraint

  • Because of the stacked binary down-sampling that SCINet adopts, the number of levels L and the look-back window size W should satisfy W mod 2^L = 0, i.e., the look-back window must be evenly divisible by 2^L.
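
A quick divisibility check before launching a run can save a failed job; all the commands above respect this constraint (e.g., window_size 168 with levels 3, since 168 = 21 * 2^3):

def window_is_valid(window_size: int, levels: int) -> bool:
    # stacked binary down-sampling halves the sequence `levels` times,
    # so the window must be a multiple of 2**levels
    return window_size % (2 ** levels) == 0

print(window_is_valid(168, 3))  # True
print(window_is_valid(168, 4))  # False: 168 / 16 is not an integer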

Contact

If you have any questions, feel free to contact us or open a GitHub issue. Pull requests are very welcome!

Minhao Liu: [email protected]
Ailing Zeng: [email protected]
Zhijian Xu: [email protected]

Send us feedback!

First of all, thank you all for your attention to this work!

Our library is open source for research purposes, and we would like to keep improving it for a long time to come! So please let us know if you:

  • Find/fix any bug or know how to improve any part of SCINet.
  • Want to add/show some cool functionality or projects built on top of SCINet. We can add a link to your project in our Community-based Projects section or integrate it into the next version of SCINet!
Comments
  • Improved results with no SCINet module

    Hello authors / fellow GitHub members,

    In models/SCINet.py, in the forward() method of class SCINet, comment out lines 334, 335 and 336, so that the input x is not passed through self.blocks1 and the subsequent residual addition; that is, keep only x = self.projection1(x) in the first stack. Also, use stacks=1 (there is no need for two stacks). Now, run the experiments on the datasets.

    I believe this will outperform the SCINet model.

    Do try it out and let me know. It seems the odd-even splitting and interactive learning only worsen the results compared to what I have suggested, which, at its core, is a simple linear model.
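
    For reference, the baseline described here collapses to a single linear map along the time axis: the model printout in the next issue ends in Conv1d(seq_len, pred_len, kernel_size=(1,), bias=False), applied with time as the channel dimension. A minimal sketch of that reading (an illustration, not the repository's code):

    import torch
    import torch.nn as nn

    # per-variable linear map from the look-back window (96) to the horizon (48)
    linear_baseline = nn.Conv1d(96, 48, kernel_size=1, bias=False)

    x = torch.randn(8, 96, 7)  # (batch, seq_len, variables)
    y = linear_baseline(x)     # the time axis is treated as channels
    print(y.shape)             # torch.Size([8, 48, 7])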

    opened by mcvageesh 6
  • How to Plot the Results of ETTh1 dataset

    I'm trying to run the code for the ETTh1 dataset using the following run command: !python run_ETTh_10.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3 and it runs successfully before early stopping at epoch 17, and I get the MAE and MSE values for both normalized and de-normalized data. [The full pasted output (the argument namespace, the printed SCINet module tree, and the per-epoch train/validation/test logs for epochs 1-17) is omitted here.] The run ends with:

    EarlyStopping counter: 5 out of 5 Early stopping
    save model in exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTh148.bin
    test 2833
    normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
    denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
    Final mean normed mse:0.1127, mae:0.2651, denormed mse:9.4922, mae:2.4327

    After this, a new folder is created under exp: exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0

    Inside it there are two files: 1) ETTh148.bin 2) checkpoint.pth. Upon extracting it, we get: 1) archive/data.pkl

    When I try to convert this data.pkl file into a data frame by running this code:

    import numpy as np
    import pandas as pd
    import pickle

    df = pd.read_pickle('out.pkl')
    print(df)

    I get the following error: UnpicklingError: a load persistent id instruction was encountered, but no persistent_load function was specified.

    How do I plot the results of ETTh1? Thank you
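
    The UnpicklingError typically means the file was written with torch.save, whose pickle stream uses persistent ids that pandas' plain pickle reader cannot resolve; such files load cleanly with torch.load instead. A minimal sketch, using the checkpoint path from the folder named above:

    import torch

    ckpt = 'exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/checkpoint.pth'
    state = torch.load(ckpt, map_location='cpu')
    print(type(state))  # typically a state_dict (an OrderedDict of tensors)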

    opened by vinayakrajurs 4
  • How do I train on my own dataset?

    I'm a newbie. I previously ran my own dataset with the Informer model, and now I'd like to do the same with SCINet, but I ran into an error: [error screenshot]. Here is what I did: I added my dataset to data_paser in run_ETTh.py, pointed it to the Dataset_Custom dataloader, and added the corresponding configuration in ETTH_data_loader.py. Loading seems to work, but then an error is raised. The ETT data-loading code is structured much like Informer's, yet Informer's code raises no such error. Why might that be?

    opened by DomineeringDragon 3
  • Unable to reproduce the results

    I am surprised by the excellent performance of SCINet reported in the paper. However, I cannot reproduce the results. I downloaded the code, used the same command, and tried to reproduce the results on the exchange dataset with a look-ahead of 3. The result is rse 0.018 and corr 0.9738, which differs greatly from the result reported in the paper (rse 0.0147 and corr 0.9868). Could you please tell me what is happening?

    opened by ZhangTP1996 3
  • How was Figure 3 in the paper produced?

    While reading the paper I had a question: how were the prediction comparison plots in Figure 3 ("Figure 3: The prediction results (Horizon = 48) of SCINet, Informer (Zhou et al. 2021), and TCN on randomly-selected sequences from the ETTh1 dataset") produced?

    opened by DomineeringDragon 2
  • Hello, it seems the horizon parameter is not used in financial_dataloader.py, yet the run commands pass one. Am I misunderstanding something?

    def _batchify(self, idx_set, horizon):
        n = len(idx_set)
        X = torch.zeros((n, self.P, self.m))
        Y = torch.zeros((n, self.h, self.m))
        for i in range(n):
            end = idx_set[i] - self.h + 1
            start = end - self.P
            X[i, :, :] = torch.from_numpy(self.dat[start:end, :])
            # Y[i, :, :] = torch.from_numpy(self.dat[idx_set[i] - self.h:idx_set[i], :])
            Y[i, :, :] = torch.from_numpy(self.dat[end:(idx_set[i]+1), :])

    opened by SCXCLY 2
  • Multi Input Single Output Net

    Hello!

    I'm in a situation where I have to predict the future value of a variable that depends strongly on another one. As I've seen in the state of the art, there are models that take the future values of such exogenous variables into account when predicting the desired variable.

    For example: I want to predict the heart-beat frequency taking into account the activity the user will be doing in the future.

    The first idea that came to my mind was to use the 'MS' feature value (multi-input single-output, as I understood it) from your code instead of 'M' or 'S', to check whether the model would learn it implicitly.

    The problem is that I think the option 'MS' is not implemented completely, am I wrong?

    In case that I wanted to develop it, which strategy do you think I should use, where should I start? Do you think it will work properly? Will it be hard to implement that condition?

    Thanks in advance!

    Aniol

    opened by acivit 2
  • ETTH result cannot be reproduced

    Hi, I wonder about the results on the ETTh datasets in the paper: are they normed or denormed? If they are denormed, the difference between my result and the paper's is too large. This is my result (is it correct?), using the latest version of the code: Final mean normed mse:0.3660, mae:0.3998; denormed mse:8.2375, mae:1.5608

    opened by ariellin10 2
  • Some questions about the results of the PEMS dataset

    Hello, your work is very rewarding! But I had some deviations in the MAE, MAPE, RMSE metrics when I performed the replication on the PEMS dataset, and I only used the model part of the code you provided. When using your code completely, MAE, MAPE, RMSE and the results in the paper are about the same, so I would like to ask you if you use any tips in data preprocessing.
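
    For context, a common preprocessing step for the PEMS benchmarks (StemGNN-style setups; not necessarily the authors' exact pipeline) is per-node z-score normalization with statistics computed on the training split only, assuming the standard .npz layout with a 'data' array of shape (timesteps, nodes, channels):

        import numpy as np

        data = np.load('datasets/PEMS/PEMS04.npz')['data'][..., 0]  # traffic flow, (T, nodes)
        train = data[: int(0.6 * len(data))]                        # 6:2:2 split convention
        mean, std = train.mean(axis=0), train.std(axis=0)
        data_norm = (data - mean) / (std + 1e-8)                    # guard against zero std

    Computing the statistics over the whole series instead of the train split is one small difference that can easily account for metric deviations of this kind.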

    opened by liuaoy 2
  • How to run SCINet on a custom dataset ?

    Hello again. What changes need to be made in run_ETTh.py, exp_ETTh.py, and etth_data_loader.py to fit a custom dataset and run the code successfully? The dataset to be used is a voltage dataset: dataset4.csv

    Thank You in advance

    opened by vinayakrajurs 2
  • Code isn't executing for Test data

    Hello, I'm trying to run the code for the ETTh1 dataset in Google Colab with the following command:

        !python run_ETTh_10.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3

    Training runs successfully and early-stops at epoch 17. Abridged log (the full model printout repeats the same Interactor block at every level of the binary tree; iteration-level speed lines, per-epoch metric dumps, the identical (psi)/(P)/(U) branches, and intermediate epochs are omitted here):

        Args in experiment: Namespace(INN=1, RIN=False, batch_size=8, c_out=1, checkpoints='exp/ETT_checkpoints/', data='ETTh1', data_path='ETTh1.csv', dropout=0.5, evaluate=False, features='S', hidden_size=4.0, kernel=5, label_len=48, levels=3, loss='mae', lr=0.003, model='SCINet', model_name='etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3', patience=5, pred_len=48, save=False, seq_len=96, stacks=1, target='OT', train_epochs=100, use_gpu=True, window_size=12, ...)
        SCINet(
          (blocks1): EncoderTree(
            (SCINet_Tree): SCINet_Tree(
              (workingblock): LevelSCINet(
                (interact): InteractorLevel(
                  (level): Interactor(
                    (split): Splitting()
                    (phi): Sequential(
                      (0): ReplicationPad1d((3, 3))
                      (1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
                      (2): LeakyReLU(negative_slope=0.01, inplace=True)
                      (3): Dropout(p=0.5, inplace=False)
                      (4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
                      (5): Tanh()
                    )
                    (psi): ...  # same structure as (phi)
                    (P): ...    # same structure as (phi)
                    (U): ...    # same structure as (phi)
                  )
                )
              )
              (SCINet_Tree_odd): ...   # recursive subtree of identical blocks
              (SCINet_Tree_even): ...  # recursive subtree of identical blocks
            )
          )
          (projection1): Conv1d(96, 48, kernel_size=(1,), stride=(1,), bias=False)
          (div_projection): ModuleList()
        )
        start training : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0
        train 8497 val 2833 test 2833
        Epoch: 1,  Steps: 1062 | Train Loss: 0.2954810 valid Loss: 0.2147395 Test Loss: 0.2316557  Saving model ...
        Epoch: 2,  Steps: 1062 | Train Loss: 0.2776401 valid Loss: 0.2147304 Test Loss: 0.1882900  Saving model ...
        Epoch: 3,  Steps: 1062 | Train Loss: 0.2742680 valid Loss: 0.2130265 Test Loss: 0.1854673  Saving model ...
        ...
        Epoch: 12, Steps: 1062 | Train Loss: 0.2593013 valid Loss: 0.2046781 Test Loss: 0.2651025  Saving model ...  (best validation loss)
        ...
        Epoch: 17, Steps: 1062 | Train Loss: 0.2532137 valid Loss: 0.2102546 Test Loss: 0.2484858  EarlyStopping counter: 5 out of 5
        Early stopping
        save model in exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTh148.bin
        testing : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0
        test 2833
        normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
        denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
        Final mean normed mse:0.1127, mae:0.2651, denormed mse:9.4922, mae:2.4327

    However, no results folder containing trues.npy and preds.npy is being created; only the model checkpoint is saved under ETT_checkpoints. Why does the code not run on the test data? Is there a separate script for that? Please help, so that the results can be plotted.
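
    A hedged pointer rather than a confirmed fix: the argument dump above includes evaluate=False and save=False, which suggests run_ETTh exposes --evaluate (a test-only pass on a saved checkpoint) and --save options. If the test step is expected to write preds.npy and trues.npy, re-running with those options enabled would be the first thing to try, e.g.:

        python run_ETTh_10.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --dropout 0.5 --save True --evaluate True --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3

    The exact flag syntax (boolean value vs. bare flag) is an assumption and should be checked against the argparse definitions in run_ETTh.py.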

    opened by vinayakrajurs 2
  • Cannot reproduce the SOTA performance on ETTh2 dataset.

    Hi,

    Thanks for sharing the reproducible code and examples. However, I cannot reproduce the metrics reported in the paper.

    For example, as shown in Table 4 of the paper, for the ETTh2 dataset with the horizon set to 720, MSE = 0.475 and MAE = 0.488.

    However, I ran the command below and could not obtain those metrics:

        python run_ETTh.py --data ETTh2 --features M --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-5 --batch_size 128 --dropout 0.5 --model_name etth2_M_I736_O720_lr1e-5_bs128_dp0.5_h4_s1l5

    Results I got: Final mean normed mse:1.0782, mae:0.7634

    Could you please advise how to reproduce the multivariate forecasting performance on the ETTh2 dataset?

    Thanks a lot; I look forward to your reply.

    opened by kehuo 0
  • MS features implementation

    Hi! I want to implement the 'MS' features option, as I'm trying to apply SCINet to financial forecasting. With financial datasets you have a large amount of input data at each moment, so not using all of that data would be a major oversight in model design.

    As the author, how would you implement it in the current model?

    Thanks!

    opened by UCT10 1