
Overview

tsai

State-of-the-art Deep Learning for Time Series and Sequence Modeling. tsai is currently under active development by timeseriesAI.


tsai is an open-source deep learning package built on top of PyTorch & fastai, focused on state-of-the-art techniques for time series classification, regression, and forecasting.

  • Self-supervised learning: If you are interested in applying self-supervised learning to time series, check out our new tutorial notebook: 08_Self_Supervised_TSBERT.ipynb
  • New visualization: We've also added a new PredictionDynamics callback that displays the model's predictions as it trains (for example, the evolving class probabilities in a classification task); see the sketch below.
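
A minimal sketch of how the callback attaches (the dataset and architecture are illustrative choices, not prescribed by the library):

import numpy as np
from tsai.all import *

# Hedged sketch: PredictionDynamics plugs in like any other fastai callback.
X, y, splits = get_UCR_data('NATOPS', split_data=False)
tfms = [None, TSClassification()]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, bs=64)
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy,
                   cbs=PredictionDynamics())
learn.fit_one_cycle(10, 1e-3)  # predictions are re-plotted as training runs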

Installation

You can install the latest stable version from pip using:

pip install tsai

Or you can install the cutting-edge version of this library from GitHub with:

pip install -Uqq git+https://github.com/timeseriesAI/tsai.git

Once the install is complete, you should restart your runtime and then run:

from tsai.all import *

Documentation

Here's the link to the documentation.

How to get started

To get to know the tsai package, we'd suggest you start with this notebook in Google Colab: 01_Intro_to_Time_Series_Classification

It provides an overview of a time series classification problem using fastai v2.
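
If you just want a feel for the API before opening the notebook, here is a minimal end-to-end classification sketch (dataset, architecture, and hyperparameters are illustrative and not taken from the notebook itself):

from tsai.all import *

X, y, splits = get_UCR_data('LSST', split_data=False)  # a multivariate UCR dataset
tfms = [None, TSClassification()]                      # encode the string labels
batch_tfms = TSStandardize(by_var=True)                # per-variable normalization
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=64)
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy)
learn.fit_one_cycle(25, 1e-3)
learn.plot_metrics()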

If you want more details, you can find them in notebooks 00 and 00a.

To use tsai in your own notebooks, the only thing you need to do after you have installed the package is to add this:

from tsai.all import *

Citing tsai

If you use tsai in your research, please use the following BibTeX entry:

@Misc{tsai,
    author =       {Ignacio Oguiza},
    title =        {tsai - A state-of-the-art deep learning library for time series and sequential data},
    howpublished = {GitHub},
    year =         {2020},
    url =          {https://github.com/timeseriesAI/tsai}
}
Comments
  • Training on Larger than Memory Datasets

    Training on Larger than Memory Datasets

    Thanks for providing these implementations; they're very high quality. I've successfully reproduced most of the state-of-the-art results in the self-supervised BERT-like time series transformer paper. All of the datasets used in the paper fit nicely into memory, but many real-world production applications will require iterating over datasets that are too large to fit into memory.

    Dataset classes that support larger than memory datasets were proposed as a goal for FastAI 2.3 here. Linked in that discussion is NVTabular's Dataset class, an example of a dataloader that supports larger-than-memory datasets.

    Supporting larger-than-memory datasets is probably outside the scope of this project, but I wonder if you could provide any pointers on how one might try to implement this in the existing TSDatasets and TSDataloaders/NumpyDataLoaders classes? I find they are pretty complex, and I'm not familiar with the fastai DataLoaders and Datasets classes that they're subclassing.

    The NVTabular Dataset class randomly iterates over the partitions of a dask DataFrame, which supports multiple files on disk and larger-than-memory datasets. My understanding is that this could be challenging with fastai's training loop, because the __getitem__ call requires random seeking of data, and dask does not support random seeks, since that could require reading a different file on disk for each element. I wonder if __getitem__ could be tricked in some way into randomly iterating over a partition before moving on to the next random partition. NVTabular uses TorchAsyncIter to iterate over the lazily read dask DataFrame.
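
    For reference, here is a rough, tsai-independent sketch of that idea using a plain PyTorch IterableDataset; the partition paths, file format, and shapes are all assumptions:

        import numpy as np
        import torch
        from torch.utils.data import IterableDataset, DataLoader

        class PartitionedTS(IterableDataset):
            "Visit partitions (files) in random order; shuffle within each partition."
            def __init__(self, partition_paths):
                self.paths = list(partition_paths)   # e.g. one .npz file per partition

            def __iter__(self):
                for p in np.random.permutation(len(self.paths)):
                    arrs = np.load(self.paths[p])            # load one partition only
                    X, y = arrs['X'], arrs['y']
                    for i in np.random.permutation(len(X)):  # shuffle inside partition
                        yield torch.from_numpy(X[i]).float(), torch.tensor(y[i])

        # dl = DataLoader(PartitionedTS(partition_paths), batch_size=64)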

    opened by xanderdunn 48
  • Memory Error with Very Large Dataset

    Memory Error with Very Large Dataset

    Dear team,

    I am stuck converting a very large numpy array to your TSDatasets. This is what I have tried to fix the issue:

    • when building the time series, I used tensorflow.keras.preprocessing.timeseries_dataset_from_array; after this step, memory usage is still fine
    • I concatenated all batch data into a numpy array; this step caused the problem, so I used a numpy memmap to avoid loading the data into RAM --> now this step is OK
    • I don't know where your library loads all the data into RAM, but it crashes at the line dsets = TSDatasets(X_dl, y_dl, splits=splits, tfms=transformations, inplace=True)

    Is there any way to overcome this issue? Or do I have to use only your models and design my own pipeline with a TensorFlow data generator?
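
    One hedged workaround (an assumption on my part, not an official tsai recipe) is to keep X as a read-only np.memmap and build the datasets with inplace=False, so only the samples actually touched get paged into RAM:

        import numpy as np
        from tsai.all import *

        # Simulate a dataset that already lives on disk; shapes are illustrative.
        n_samples, n_vars, n_steps = 10_000, 5, 100
        X = np.memmap('X.dat', dtype='float32', mode='w+', shape=(n_samples, n_vars, n_steps))
        X[:] = np.random.randn(n_samples, n_vars, n_steps)
        X.flush()
        X = np.memmap('X.dat', dtype='float32', mode='r', shape=(n_samples, n_vars, n_steps))
        y = np.random.randint(0, 2, n_samples)

        splits = get_splits(y, valid_size=0.2)
        # inplace=False avoids materializing the full transformed dataset in RAM
        dsets = TSDatasets(X, y, tfms=[None, TSClassification()], splits=splits, inplace=False)
        dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64)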

    enhancement good first issue 
    opened by HariWu1995 34
  • Serializing Time Series Forecasts

    Serializing Time Series Forecasts

    I've been getting great results on forecasting a multistep horizon for a multivariate time series, but am having a lot of trouble exporting or saving the model to use on other machines or even in the same Jupyter Notebook.

    I create the learner with ts_learner and train it, but when I use learner.save or learner.export, the reloaded model doesn't produce the same predictions.

    Any help would be appreciated.
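
    A useful first check is an export/reload round-trip on the same machine. A minimal sketch, assuming a small classification dataset stands in for the forecasting setup (paths are illustrative):

        import numpy as np, torch
        from tsai.all import *
        from tsai.inference import load_learner

        X, y, splits = get_UCR_data('NATOPS', split_data=False)
        dls = get_ts_dls(X, y, splits=splits, tfms=[None, TSClassification()])
        learn = ts_learner(dls, FCN, metrics=accuracy)
        learn.fit_one_cycle(1, 1e-3)

        learn.export('checkpoint.pkl')                    # saves model + dls config
        learn2 = load_learner('checkpoint.pkl')
        p1, _ = learn.get_preds(ds_idx=1)                 # ds_idx=1 -> validation set
        p2, _ = learn2.get_preds(dl=learn.dls.valid)
        print(torch.allclose(p1, p2, atol=1e-5))          # should print True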

    under review 
    opened by DonRomaniello 29
  • RuntimeError: Can only calculate the mean of floating types. Got Long instead.

    RuntimeError: Can only calculate the mean of floating types. Got Long instead.

    Running dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64, batch_tfms=TSStandardize(by_var=True)) raises: RuntimeError: Can only calculate the mean of floating types. Got Long instead.

    If batch_tfms=TSStandardize(by_var=True) is removed, the error becomes: RuntimeError: expected scalar type Long but found Float.
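
    Both messages look like typical dtype mismatches. A hedged sketch of the usual fix (float32 inputs, integer labels routed through a label transform) on synthetic data:

        import numpy as np
        from tsai.all import *

        X = np.random.randn(200, 3, 50).astype(np.float32)  # TSStandardize needs floats
        y = np.random.randint(0, 2, 200)
        splits = get_splits(y, valid_size=0.2)
        dsets = TSDatasets(X, y, tfms=[None, TSClassification()], splits=splits)
        dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64,
                                       batch_tfms=TSStandardize(by_var=True))
        xb, yb = dls.one_batch()
        print(xb.dtype, yb.dtype)  # torch.float32 torch.int64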

    under review 
    opened by maheshs11 20
  • add support for session windows

    add support for session windows

    TSAI has great support for sliding windows in the data preprocessing methods: https://github.com/timeseriesAI/tsai/blob/62e9348d9e29a6b5f628879bd77056c11db5c0ab/tsai/data/preparation.py#L119

    Ideally, support for (event-based) session windows could be added, as outlined in https://stackoverflow.com/questions/66783615/transform-time-series-to-event-window-sessions; one possible approach is sketched below.
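
    Until that lands, one way to approximate session windows with the existing API is to slice the frame per session and run SlidingWindow on each slice, so no window crosses a session boundary. A sketch, assuming a session id column and a get_y target passed through kwargs:

        import numpy as np
        from tsai.all import *

        def session_windows(df, session_col, window_len, **kwargs):
            "Apply SlidingWindow per session; assumes kwargs include a get_y target."
            Xs, ys = [], []
            for _, g in df.groupby(session_col):
                if len(g) < window_len: continue   # skip sessions shorter than a window
                X, y = SlidingWindow(window_len, **kwargs)(g.drop(columns=session_col))
                Xs.append(X); ys.append(y)
            return np.concatenate(Xs), np.concatenate(ys)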

    opened by geoHeil 20
  • AttributeError: 'TSRobustScale' object has no attribute '_setup'

    AttributeError: 'TSRobustScale' object has no attribute '_setup'

    Sorry Ignacio; I had what I thought was a bug in my code, but it looks like it's associated with the 0.2.23 tsai package; I confirmed it's not present in 0.2.20. From what I have read, this sort of error is commonly associated with mutual imports (e.g., two modules importing from each other).

    from tsai.inference import load_learner
    
    model_path = f'/gdrive/MyDrive/***/LVEFCLASS-16735.pkl' 
    model      = load_learner(model_path)
    print(type(model))
    model.export(f'/gdrive/MyDrive/***/LVEFCLASS-16735_.pkl')
    
    <class 'fastai.learner.Learner'>
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-11-a9c25badaf5d> in <module>()
          4 model      = load_learner(model_path)
          5 print(type(model))
    ----> 6 model.export(f'/gdrive/MyDrive/Semler/QF+/Sensor Data Files/Cached_Data/models/InceptionTime/LVEFCLASS-16735_.pkl')
    
    12 frames
    /usr/local/lib/python3.7/dist-packages/tsai/data/preprocessing.py in setups(self, dl)
        327 
        328     def setups(self, dl: DataLoader):
    --> 329         if self._setup:
        330             if not self.use_single_batch:
        331                 o = dl.dataset.__getitem__([slice(None)])[0]
    
    AttributeError: 'TSRobustScale' object has no attribute '_setup'
    
    under review 
    opened by bob-mcrae 18
  • Tutorial notebook using Optuna

    Tutorial notebook using Optuna

    I have added a tutorial notebook that showcases how to use TSAI with Optuna for hyperparameter optimization. I would love to hear your feedback, as this is my first pull request on GitHub. Thank you for your efforts in building TSAI. I would love to contribute more as I learn.

    opened by dnth 17
  • limit on time points?

    limit on time points?

    Hello, I'm playing with the package using the provided datasets. With samples of shape [batch x vars x time_points], I noticed that when time_points is too large, the learner throws a dimension-mismatch error.

    For example with dataset:

        dsid = 'HouseholdPowerConsumption1'
        X, y, splits = get_regression_data(dsid, split_data=False)
        print(dsid, X.shape, y.shape)

    which prints: HouseholdPowerConsumption1 (1431, 5, 1440) (1431,)

    It has 1440 time points.

    When I run:

        tfms = [None, [TSRegression()]]
        batch_tfms = TSStandardize(by_sample=True, by_var=True)
        dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=128)
        dls.one_batch()
        learn = ts_learner(dls, TSTPlus, metrics=[mae, rmse], cbs=ShowGraph())
        learn.lr_find()

    I get the following error from learn.lr_find():

        RuntimeError: The size of tensor a (512) must match the size of tensor b (1440) at non-singleton dimension 3

    Please help !

    Thanks!
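
    A hedged workaround: TST-style architectures cap the internal sequence length at max_seq_len (512 by default), which appears to clash with the 1440-step input here. Aligning max_seq_len with the actual number of steps is one thing to try:

        from tsai.all import *

        X, y, splits = get_regression_data('HouseholdPowerConsumption1', split_data=False)
        tfms = [None, [TSRegression()]]
        batch_tfms = TSStandardize(by_sample=True, by_var=True)
        dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=128)
        # assumption: raising max_seq_len to the input length avoids the mismatch
        learn = ts_learner(dls, TSTPlus, arch_config=dict(max_seq_len=X.shape[-1]),
                           metrics=[mae, rmse])
        learn.lr_find()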

    opened by ymlea 17
  • fast preprocessing

    fast preprocessing

    I love your new SlidingWindowPanel, but as you can imagine, for even a moderate number of devices the current implementation using Python loops and numpy (i.e., single-threaded) is rather slow.

    What do you think about adding an n_jobs parameter, similar to scikit-learn's, that would use all CPU cores to auto-parallelize (perhaps via dask)?
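
    As a stopgap before anything lands in the library, the per-device work can be fanned out with joblib. A sketch, assuming a 'device' id column and that the remaining SlidingWindow kwargs are passed through:

        import numpy as np
        from joblib import Parallel, delayed
        from tsai.all import *

        def _windows(g, window_len, **kwargs):
            return SlidingWindow(window_len, **kwargs)(g)

        def parallel_sliding_windows(df, window_len, n_jobs=-1, **kwargs):
            groups = [g for _, g in df.groupby('device')]   # one chunk per device
            res = Parallel(n_jobs=n_jobs)(delayed(_windows)(g, window_len, **kwargs)
                                          for g in groups)
            Xs, ys = zip(*res)
            return np.concatenate(Xs), np.concatenate(ys)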

    opened by geoHeil 17
  • Bug: I cannot get the same result after running get_preds

    Bug: I cannot get the same result after running get_preds

    I cannot get the same result after running learn.get_preds. Using the notebook https://colab.research.google.com/github/timeseriesAI/tsai/blob/master/tutorial_nbs/01_Intro_to_Time_Series_Classification.ipynb, I get 0.9278 the first time I run learn.get_preds, but 0.7111 when I run it again. Why?
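
    A hedged way to narrow this down (learn is assumed to be the trained learner from the notebook): fix the seeds and make sure both calls evaluate the same, unshuffled dataloader:

        import torch
        from tsai.all import *

        set_seed(42, reproducible=True)
        p1, t1 = learn.get_preds(ds_idx=1)        # ds_idx=1 -> validation set
        p2, t2 = learn.get_preds(ds_idx=1)
        print(accuracy(p1, t1), accuracy(p2, t2))
        print(torch.allclose(p1, p2))             # expect True on the same dl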

    bug 
    opened by yifeiSunny 16
  • Error creating multivariate dataloader

    Error creating multivariate dataloader

    Hi, I was trying to follow the tutorial notebook on how to prepare data:

    https://github.com/timeseriesAI/tsai/blob/master/tutorial_nbs/00c_Time_Series_data_preparation.ipynb

    I opened this in Google Colab and have tried with and without the stable flag. My versions in Colab from the top cell are:

        tsai     : 0.2.13
        fastai   : 2.1.10
        fastcore : 1.3.13
        torch    : 1.7.0+cu101

    Under the End-to-end examples / Single multivariate time series section, I can load the first cell fine and see the df. However, when I run the second cell to create the data loader, I get the following error:

        ---------------------------------------------------------------------------
        AssertionError                            Traceback (most recent call last)
        <ipython-input-3-dbbb5a3104e2> in <module>()
              7 seq_first = True
              8 
        ----> 9 X, y = SlidingWindow(window_length, stride=stride, start=start, get_x=get_x,  get_y=get_y, horizon=horizon, seq_first=seq_first)(df)
             10 splits = get_splits(y, valid_size=.2, stratify=True, random_state=23, shuffle=False)
             11 tfms  = [None, [Categorize()]]
        
        /usr/local/lib/python3.6/dist-packages/tsai/data/preparation.py in SlidingWindow(window_len, stride, start, get_x, get_y, y_func, horizon, seq_first, sort_by, ascending, check_leakage)
             93     if min_horizon <= 0 and y_func is None and get_y != [] and check_leakage:
             94         assert get_x is not None and  get_y is not None and len([y for y in _get_y if y in _get_x]) == 0,  \
        ---> 95         'you need to change either horizon, get_x, get_y or use a y_func to avoid leakage'
             96     stride = ifnone(stride, window_len)
             97 
        
        AssertionError: you need to change either horizon, get_x, get_y or use a y_func to avoid leakage
        ---------------------------
    

    Any suggestions?
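
    The assertion fires because with horizon=0 the default get_x/get_y let the target columns overlap the inputs. Two hedged ways around it, on a synthetic frame (column names are illustrative):

        import numpy as np, pandas as pd
        from tsai.all import *

        df = pd.DataFrame(np.random.randn(500, 4), columns=['a', 'b', 'c', 'target'])

        # 1) Predict strictly future values of the target -> no leakage possible:
        X, y = SlidingWindow(60, horizon=1, get_x=['a', 'b', 'c'], get_y=['target'])(df)

        # 2) Keep horizon=0 but make get_x and get_y disjoint column sets:
        X2, y2 = SlidingWindow(60, horizon=0, get_x=['a', 'b', 'c'], get_y=['target'])(df)
        print(X.shape, y.shape, X2.shape, y2.shape)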

    opened by chandrashan 16
  • TypeError: __init__() got an unexpected keyword argument 'custom_head'

    TypeError: __init__() got an unexpected keyword argument 'custom_head'

    I am trying to use TSRegressor with the TST architecture, and I get a TypeError: __init__() got an unexpected keyword argument 'custom_head'.

    I tried the fix from #597, but it doesn't work; the issue is too complex for my understanding. @oguiza can you please take a look at it?

    Here is the entire stacktrace

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    [<ipython-input-33-8d14c640de6e>](https://localhost:8080/#) in <module>
    ----> 1 learn = TSRegressor(X, y, batch_tfms=batch_tfms, splits=split, arch=TST,
          2                     metrics=mae, bs=512, train_metrics=True)
    
    3 frames
    [/usr/local/lib/python3.8/dist-packages/tsai/tslearner.py](https://localhost:8080/#) in __init__(self, X, y, splits, tfms, inplace, sel_vars, sel_steps, weights, partial_n, train_metrics, bs, batch_size, batch_tfms, shuffle_train, drop_last, num_workers, do_setup, device, arch, arch_config, pretrained, weights_path, exclude_head, cut, init, loss_func, opt_func, lr, metrics, cbs, wd, wd_bn_bias, train_bn, moms, path, model_dir, splitter, verbose)
        133             #     model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path,
        134             #                        exclude_head=exclude_head, cut=cut, init=init, arch_config=arch_config)
    --> 135             model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path,
        136                                 exclude_head=exclude_head, cut=cut, init=init, arch_config=arch_config)
        137         try:
    
    [/usr/local/lib/python3.8/dist-packages/tsai/models/utils.py](https://localhost:8080/#) in build_ts_model(arch, c_in, c_out, seq_len, d, dls, device, verbose, pretrained, weights_path, exclude_head, cut, init, arch_config, **kwargs)
        158             if v in arch.__name__]):
        159         pv(f'arch: {arch.__name__}(c_in={c_in} c_out={c_out} seq_len={seq_len} device={device}, arch_config={arch_config}, kwargs={kwargs})', verbose)
    --> 160         model = arch(c_in, c_out, seq_len=seq_len, **arch_config, **kwargs).to(device=device)
        161     elif 'xresnet' in arch.__name__ and not '1d' in arch.__name__:
        162         pv(f'arch: {arch.__name__}(c_in={c_in} c_out={c_out} device={device}, arch_config={arch_config}, kwargs={kwargs})', verbose)
    
    [/usr/local/lib/python3.8/dist-packages/fastcore/meta.py](https://localhost:8080/#) in __call__(cls, *args, **kwargs)
         38         if type(res)==cls:
         39             if hasattr(res,'__pre_init__'): res.__pre_init__(*args,**kwargs)
    ---> 40             res.__init__(*args,**kwargs)
         41             if hasattr(res,'__post_init__'): res.__post_init__(*args,**kwargs)
         42         return res
    
    [/usr/local/lib/python3.8/dist-packages/tsai/models/TST.py](https://localhost:8080/#) in __init__(self, c_in, c_out, seq_len, max_seq_len, n_layers, d_model, n_heads, d_k, d_v, d_ff, dropout, act, fc_dropout, y_range, verbose, **kwargs)
        172             self.new_q_len = True
        173             t = torch.rand(1, 1, seq_len)
    --> 174             q_len = nn.Conv1d(1, 1, **kwargs)(t).shape[-1]
        175             self.W_P = nn.Conv1d(c_in, d_model, **kwargs) # Eq 2
        176             pv(f'Conv1d with kwargs={kwargs} applied to input to create input encodings\n', verbose)
    
    TypeError: __init__() got an unexpected keyword argument 'custom_head'
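    A hedged workaround: the plain TST implementation forwards unknown kwargs to a Conv1d (as the last traceback frame shows), so it chokes on custom_head, while the extended TSTPlus variant accepts it. Swapping architectures is usually enough; a sketch on synthetic data:

        import numpy as np
        from tsai.all import *

        X = np.random.randn(100, 3, 60).astype('float32')
        y = np.random.randn(100).astype('float32')
        splits = get_splits(y, valid_size=0.2, stratify=False)
        learn = TSRegressor(X, y, splits=splits, arch=TSTPlus, metrics=mae, bs=32)
        learn.fit_one_cycle(1, 1e-3)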
    
    opened by deven-gqc 0
  • 'LSTMPlus' object has no attribute '__name__'

    'LSTMPlus' object has no attribute '__name__'

    Hi, I am trying to train an LSTM model, and when I adjust params it shows 'LSTMPlus' object has no attribute '__name__'. Do you know how to fix this? @oguiza Sorry to bother you, and thanks.

    opened by Sure20220604 1
  • TSDatasets does not work with torch.Tensor

    TSDatasets does not work with torch.Tensor

    Hi TSAI contributors,

    I would like to create my own datasets with torch.Tensor data:

    for iter, (data, label) in enumerate(dataloader):
        if iter == 0:
            X = data
            y = label
        else:
            X = torch.concat([X, data], dim=0)
            y = torch.concat([y, label])
    
    splits = get_splits(y, valid_size=.2, stratify=True, random_state=23, shuffle=True)
    

    and after check_data(X, y, splits), I got

    X      - shape: [1472 samples x 3 features x 480 timesteps]  type: Tensor  dtype:torch.float64  isnan: 0
    y      - shape: torch.Size([1472])  type: Tensor  dtype:torch.int64  isnan: 0
    splits - n_splits: 2 shape: [1178, 294]  overlap: False
    

    Then I run

    tfms = [None, [Categorize()]]
    dsets = TSDatasets(X, y, tfms=tfms, splits=splits, inplace=True)
    

    and got

    KeyError                                  Traceback (most recent call last)
    File ~/miniconda3/lib/python3.8/site-packages/fastai/data/transforms.py:261, in Categorize.encodes(self, o)
        260 try:
    --> 261     return TensorCategory(self.vocab.o2i[o])
        262 except KeyError as e:
    
    KeyError: tensor(0)
    
    The above exception was the direct cause of the following exception:
    
    KeyError                                  Traceback (most recent call last)
    /Users/xiaochen/Project/icudelir_benchmarks/nbs/Baseline_Methods/Baseline_Data_Preparation.ipynb Cell 28 in <cell line: 2>()
          1 tfms = [None, [Categorize()]]
    ----> 2 dsets = TSDatasets(X, y, tfms=tfms, splits=splits, inplace=True)
    
    File ~/miniconda3/lib/python3.8/site-packages/tsai/data/core.py:422, in TSDatasets.__init__(self, X, y, items, sel_vars, sel_steps, tfms, tls, n_inp, dl_type, inplace, **kwargs)
        420     self.tls = L(lt(item, t, **kwargs) for lt,item,t in zip(lts, items, self.tfms))
        421     if len(self.tls) > 0 and len(self.tls[0]) > 0:
    --> 422         self.typs = [type(tl[0]) if isinstance(tl[0], torch.Tensor) else self.typs[i] for i,tl in enumerate(self.tls)]
        423     self.ptls = L([typ(stack(tl[:]))[...,self.sel_vars, self.sel_steps] if (i==0 and self.multi_index) else typ(stack(tl[:])) \
        424                     for i,(tl,typ) in enumerate(zip(self.tls,self.typs))]) if inplace else self.tls
        425 else:
    
    File ~/miniconda3/lib/python3.8/site-packages/tsai/data/core.py:422, in <listcomp>(.0)
    ...
        261     return TensorCategory(self.vocab.o2i[o])
        262 except KeyError as e:
    --> 263     raise KeyError(f"Label '{o}' was not included in the training dataset") from e
    
    KeyError: "Label '0' was not included in the training dataset"
    

    My y looks like tensor([0, 0, 0, ..., 1, 1, 1]).

    If I add

    X = X.numpy().astype(np.float64)
    y = y.numpy().astype(np.int64)
    

    everything works!

    Thanks a lot!

    opened by xcvil 0
  • Feature Importance & Step Importance Not working

    Feature Importance & Step Importance Not working

    Hi, here is my system versioning:

    python          : 3.7.12
    tsai            : 0.3.4
    fastai          : 2.7.10
    fastcore        : 1.5.27
    torch           : 1.13.1+cu117
    device          : 1 gpu (['Tesla P100-PCIE-16GB'])
    cpu cores       : 28
    threads per cpu : 1
    RAM             : 251.23 GB
    GPU memory      : [16.0] GB
    os              : Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-redhat-7.9-Maipo
    

    When I run the following lines of code:

        X = np.random.randn(300, 2, 50)
        Y = np.zeros(300)
        Y[200:] = 1
        X_train , X_test, y_train, y_test = train_test_split(X, Y,  test_size = 0.2, random_state = 333)
        X, y, splits = combine_split_data([X_train, X_test], [y_train, y_test])
        tfms = [None, TSClassification()]
        dls = get_ts_dls(X, y, splits=splits, tfms=tfms) 
        learn = ts_learner(dls, XceptionTimePlus, 
                loss_func = LabelSmoothingCrossEntropyFlat(),
                # metrics = accuracy,)
                metrics= RocAucBinary())
        learn.fit_one_cycle(2, 1e-4)
        learn.step_importance()
    

    it throws an error:

    Traceback (most recent call last):                                                                                                      
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/tsai/analysis.py", line 367, in step_importance
        try: value = metric(output[0][:, 1], output[1]).item()
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 580, in roc_auc_score
        sample_weight=sample_weight,
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/sklearn/metrics/_base.py", line 72, in _average_binary_score
        raise ValueError("{0} format is not supported".format(y_type))
    ValueError: continuous format is not supported
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/gpfs/ysm/home/yl2428/YL_ABCD/ABCD/ABCD/train/trainer.py", line 177, in <module>
        learn.step_importance()
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/tsai/analysis.py", line 368, in step_importance
        except: value = metric(output[0], output[1]).item()
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 580, in roc_auc_score
        sample_weight=sample_weight,
      File "/gpfs/ysm/project/gerstein/yl2428/conda_envs/ABCD/lib/python3.7/site-packages/sklearn/metrics/_base.py", line 72, in _average_binary_score
        raise ValueError("{0} format is not supported".format(y_type))
    ValueError: continuous-multioutput format is not supported
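
    For what it's worth, the 0.3.4 release notes below mention a fix for feature_importance/step_importance when using metrics (#609), so upgrading is the first thing to try; alternatively, a torch-native metric such as accuracy sidesteps the sklearn format check. A hedged sketch:

        import numpy as np
        from tsai.all import *

        X = np.random.randn(300, 2, 50).astype('float32')
        y = np.zeros(300); y[200:] = 1
        splits = get_splits(y, valid_size=0.2)
        dls = get_ts_dls(X, y, splits=splits, tfms=[None, TSClassification()])
        learn = ts_learner(dls, XceptionTimePlus, metrics=accuracy)
        learn.fit_one_cycle(2, 1e-4)
        learn.step_importance()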
    
    opened by liyy2 0
Releases (0.3.4)
  • 0.3.4(Nov 18, 2022)

    New Features

    • compatibility with PyTorch 1.13 (#619)

    • added sel_vars to get_robustscale_params (#610)

    • added sel_steps to TSRandom2Value (#607)

    • new walk forward cross-validation in tsai (#582)

    Bugs Squashed

    • fixed issue when printing an empty dataset without transforms (NoTfmLists) (#622)

    • fixed minor issue in get_robustscaler params with sel_vars (#615)

    • fixed issue when using tsai in dev with VSCode (#614)

    • issue when using lists as sel_vars and sel_steps in TSRandom2Value (#612)

    • fixed issue with feature_importance and step_importance when using metrics (#609)

    • renamed data processing tfms feature_idxs as sel_vars for consistency (#608)

    • fixed issue when importing 'GatedTabTransformer' (#536)

  • 0.3.2(Oct 9, 2022)

    Breaking Changes

    • replaced TSOneHot preprocessor by TSOneHotEncode using a different API (#502)

    • replaced MultiEmbedding n_embeds, embed_dims and padding_idxs by n_cat_embeds, cat_embed_dims and cat_padding_idxs (#497)

    New Features

    • added GaussianNoise transform (#514)

    • added TSSequencer model based on Sequencer: Deep LSTM for Image Classification paper (#508)

    • added TSPosition to be able to pass any steps list that will be concatenated to the input (#504)

    • added TSPosition preprocessor to allow the concatenation of a custom position sequence (#503)

    • added TSOneHot class to encode a variable on the fly (#501)

    • added token_size and tokenizer arguments to tsai (#496)

    • SmeLU activation function not found (#495)

    • added example on how to perform inference, partial fit and fine tuning (#491)

    • added get_time_per_batch and get_dl_percent_per_epoch (#489)

    • added TSDropVars, used to remove batch variables no longer needed (#488)

    • added SmeLU activation function (#458)

    • Feature request: gMLP and GatedTabTransformer. (#354)

    • Pay Attention to MLPs - gMLP (paper, implementation)

    • The GatedTabTransformer (paper, implementation);

    Bugs Squashed

    • after_batch tfms set to empty Pipeline when using dl.new() (#516)

    • 00b_How_to_use_numpy_arrays_in_fastai: AttributeError: attribute 'device' of 'torch._C._TensorBase' objects is not writable (#500)

    • getting regression data returns _check_X() argument error (#430)

    • I wonder why only 'Nor' is displayed in dls.show_batch(sharey=True). (#416)

  • 0.3.1(Apr 19, 2022)

    Release notes

    0.3.1

    New Features

    • added StratifiedSampler to handle imbalanced datasets (#479)

    • added seq_embed_size and seq_embed arguments to TSiT (#476)

    • added get_idxs_to_keep that can be used to filter indices based on different conditions (#469)

    • added SmeLU activation function (#458)

    • added split_in_chunks (#454)

    • upgraded min Python version to 3.7 (#450)

    • added sampler argument to NumpyDataLoader and TSDataLoader (#436)

    • added TSMask2Value transform which supports multiple masks (#431)

    • added TSGaussianStandardize for improved ood generalization (#428)

    • added get_dir_size function (#421)

    Bugs Squashed

    • slow import of MiniRocketMultivariate from sktime (#482)

    • Fixed install from source fails on Windows (UnicodeDecodeError) (#470)

    • TSDataset error oindex is not an attribute (#462)

    • split_in_chunks incorrectly calculated (#455)

    • _check_X() got an unexpected keyword argument 'coerce_to_numpy' (#415)

  • 0.3.0(Mar 2, 2022)

    Release notes

    0.3.0

    New Features

    • Added function that pads sequences to same length (#410)

    • Added TSRandomStandardize preprocessing technique (#396)

    • New visualization techniques: model's feature importance and step importance (#393)

    • Allow from tsai.basics import * to speed up loading (#320)

    Bugs Squashed

    • Separated core from non-core dependencies in tsai - pip install tsai[extras]. This is an important change that:
      • reduces the time to pip install tsai
      • avoids errors during installation
      • reduces the time to load tsai using from tsai.all import *
  • 0.2.25(Feb 6, 2022)

    0.2.25

    Breaking Changes

    • updated forward_gaps removing nan_to_num (#331)

    • TSRobustScaler only applied by_var (#329)

    • remove add_na arg from TSCategorize (#327)

    New Features

    • added IntraClassCutMix1d (#384)

    • added learn.calibrate_model method (#379)

    • added analyze_array function (#378)

    • Added TSAddNan transform (#376)

    • added dummify function to create dummy data from original data (#366)

    • added Locality Self Attention to TSiT (#363)

    • added sel_vars argument to MVP callback (#349)

    • added sel_vars argument to TSNan2Value (#348)

    • added multiclass, weighted FocalLoss (#346)

    • added TSRollingMean batch transform (#343)

    • added recall_at_specificity metric (#342)

    • added train_metrics argument to ts_learner (#341)

    • added hist to PredictionDynamics for binary classification (#339)

    • add padding_idxs to MultiEmbedding (#330)

    Bugs Squashed

    • sort_by data may be duplicated in SlidingWindowPanel (#389)

    • create_script splits the nb name if multiple underscores are used (#385)

    • added torch functional dependency to plot_calibration_curve (#383)

    • issue when setting horizon to 0 in SlidingWindow (#382)

    • replace learn by self in calibrate_model patch (#381)

    • Argument d_head is not used in TSiTPlus (#380)

      • https://github.com/timeseriesAI/tsai/blob/6baf0ba2455895b57b54bf06744633b81cdcb2b3/tsai/models/TSiTPlus.py#L63
    • replace default relu activation by gelu in TSiT (#361)

    • sel_vars and sel_steps in TSDatasets and TSDataloaders don't work when used simultaneously (#347)

    • ShowGraph fails when recorder.train_metrics=True (#340)

    • fixed 'se' always equal to 16 in MLSTM_FCN (#337)

    • ShowGraph doesn't work well when train_metrics=True (#336)

    • TSPositionGaps doesn't work on cuda (#333)

    • XResNet object has no attribute 'backbone' (#332)

    • import InceptionTimePlus in tsai.learner (#328)

    • df2Xy: Format correctly without the need to specify sort_by (#324)

    • bug in MVP code learn.model --> self.learn.model (#323)

    • Colab install issues: importing the lib takes forever (#315)

    • Calling learner.feature_importance on larger than memory dataset causes OOM (#310)

  • 0.2.24(Dec 16, 2021)

    Release notes

    0.2.24

    Breaking Changes

    • removed InceptionTSiT, InceptionTSiTPlus, ConvTSiT & ConvTSiTPlus (#276)

    New Features

    • add stateful custom sklearn API type tfms: TSShrinkDataFrame, TSOneHotEncoder, TSCategoricalEncoder (#313)

    • Pytorch 1.10 compatibility (#311)

    • ability to pad at the start/ end of sequences and filter results in SlidingWindow (#307)

    • added bias_init to TSiT (#288)

    • plot permutation feature importance after a model's been trained (#286)

    • added separable as an option to MultiConv1d (#285)

    • Modified TSiTPlus to accept a feature extractor and/or categorical variables (#278)

    Bugs Squashed

    • learn modules takes too long to load (#312)

    • error in roll2d and roll3d when passing index 2 (#304)

    • TypeError: unhashable type: 'numpy.ndarray' (#302)

    • ValueError: only one element tensors can be converted to Python scalars (#300)

    • unhashable type: 'numpy.ndarray' when using multiclass multistep labels (#298)

    • incorrect data types in NumpyDatasets subset (#297)

    • create_future_mask creates a mask in the past (#293)

    • NameError: name 'X' is not defined in learner.feature_importance (#291)

    • TSiT test fails on cuda (#287)

    • MultiConv1d breaks when ni == nf (#284)

    • WeightedPerSampleLoss reported an error when used with LDS_weights (#281)

    • pos_encoding transfer weight in TSiT fails (#280)

    • MultiEmbedding cat_pos and cont_pos are not in state_dict() (#277)

    • fixed issue with MixedDataLoader (#229), thanks to @Wabinab

  • 0.2.23(Nov 25, 2021)

    Release notes

    0.2.23

    Breaking Changes

    • removed torch-optimizer dependency (#228)

    New Features

    • added option to train MVP on random sequence lengths (#252)

    • added ability to pass an arch name (str) to learner instead of class (#217)

    • created convenience fns create_directory and delete_directory in utils (#213)

    • added option to create random array of given shapes and dtypes (#212)

    • my_setup() prints your main system and package versions (#202)

    • added a new tutorial on how to train large datasets using tsai (#199)

    • added a new function to load any file as a module (#196)

    • Created CODE_OF_CONDUCT.md in https://github.com/timeseriesAI/tsai/pull/210

    • Add Optuna tutorial notebook by @dnth in https://github.com/timeseriesAI/tsai/pull/275

    Bugs Squashed

    • Loading code just for inference takes too long (#273)

    • Fixed out-of-memory issue with large datasets on disk (#126)

    • AttributeError: module 'torch' has no attribute 'nan_to_num' (#262)

    • Fixed TypeError: unhashable type: 'numpy.ndarray' (#250)

    • Wrong link in paper references (#249)

    • remove default PATH which overwrites custom PATH (#238)

    • Predictions were not properly decoded when using with_decoded (#237)

    • SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame (#221)

    • InceptionTimePlus wasn't imported by TSLearners (#218)

    • get_subset_dl fn is not properly creating a subset dataloader (#211)

    • Bug in WeightedPerSampleLoss (#203)

    • Bump nokogiri from 1.11.4 to 1.12.5 in /docs by @dependabot in https://github.com/timeseriesAI/tsai/pull/222

    New Contributors

    • @geoHeil made their first contribution in https://github.com/timeseriesAI/tsai/pull/207
    • @dnth made their first contribution in https://github.com/timeseriesAI/tsai/pull/275

    Full Changelog: https://github.com/timeseriesAI/tsai/compare/0.2.20...0.2.23
