PyTorch implementation of the OCNet series and SegFix.

Overview

openseg.pytorch


News

  • 2021/09/14 MMSegmentation now supports our ISANet; refer to ISANet for more details.

  • 2021/08/13 We have released the implementation of HRFormer; the combination of HRFormer and OCR achieves better semantic segmentation performance.

  • 2021/03/12 The long-awaited acceptance is finally here: our "OCNet: Object context network for scene parsing" has been accepted by IJCV-2021. It consolidates two of our previous technical reports, OCNet and ISA. Congratulations to all the co-authors!

  • 2021/02/16 We now support PyTorch 1.7, mixed-precision, and distributed training. Based on the PaddleClas ImageNet pretrained weights, we achieve 83.22% on Cityscapes val, 59.62% on PASCAL-Context val (new SOTA), 45.20% on COCO-Stuff val (new SOTA), 58.21% on LIP val and 47.98% on ADE20K val. Please check out the pytorch-1.7 branch for more details.

  • 2020/12/07 PaddleSeg now supports our ISA and HRNet + OCR. Jittor also supports our ResNet-101 + OCR.

  • 2020/08/16 MMSegmentation now supports our HRNet + OCR.

  • 2020/07/20 Researchers from AInnovation achieved Rank#1 on the ADE20K leaderboard by training our HRNet + OCR with a semi-supervised learning scheme. More details are in their Technical Report.

  • 2020/07/09 OCR (Spotlight) and SegFix have been accepted by ECCV-2020. Notably, researchers from Nvidia set a new state-of-the-art of 85.4% on the Cityscapes leaderboard by combining our HRNet + OCR with a new hierarchical multi-scale attention scheme.

  • 2020/05/11 We have released the checkpoints/logs of "HRNet + OCR" on all 5 benchmarks (Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff) in the Model Zoo. Please feel free to try our method on your own dataset.

  • 2020/04/18 We have released some of our checkpoints/logs of OCNet, ISA, OCR and SegFix. We highly recommend using our SegFix to improve your segmentation results, as it is very easy and fast to use.

  • 2020/03/12 Our SegFix can be used to improve the performance of various SOTA methods on both semantic segmentation and instance segmentation; e.g., "PolyTransform + SegFix" achieves Rank#2 on the Cityscapes leaderboard (instance segmentation track) with a score of 41.2%.

  • 2020/01/13 The source code of the reproduced HRNet + OCR has been made public.

  • 2020/01/09 "HRNet + OCR + SegFix" achieves Rank#1 on the Cityscapes leaderboard with an mIoU of 84.5%.

  • 2019/09/25 We have released the OCR paper, which describes the method behind our Rank#2 entry on the Cityscapes leaderboard.

  • 2019/07/31 We have released the ISA paper, which is very easy to use and implement while being much more efficient than OCNet or DANet, both of which are based on conventional self-attention.

  • 2019/07/23 Our method (HRNet + OCR w/ ASP) achieves Rank#1 on the Cityscapes leaderboard (with a single model) on 3 of the 4 metrics.

  • 2019/05/27 We achieve SOTA on 6 different semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, PASCAL-VOC and COCO-Stuff, and we provide the source code of our approach on all six benchmarks.

Model Zoo and Baselines

We provide a set of baseline results and trained models available for download in the Model Zoo.

Introduction

This is the official code of OCR, OCNet, ISA and SegFix. OCR, OCNet and ISA focus on better context aggregation mechanisms (in the semantic segmentation task), while SegFix focuses on addressing boundary errors (in both the semantic segmentation and instance segmentation tasks). We illustrate the overall frameworks of OCR and SegFix in the figures below:

OCR

Fig.1 - Illustrating the pipeline of OCR: (i) form the soft object regions (pink dashed box); (ii) estimate the object region representations (purple dashed box); (iii) compute the object-contextual representations and the augmented representations (orange dashed box).
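
For readers who prefer code, the three steps in Fig.1 can be sketched in a few lines of PyTorch. This is an illustrative re-implementation under simplifying assumptions (the repository's actual module additionally applies learned 1x1-conv transforms to the queries, keys and values); the function and tensor names below are ours.

    import torch
    import torch.nn.functional as F

    def ocr_context(feats, region_logits):
        """Minimal sketch of OCR context aggregation (illustrative, not the repo's exact code).

        feats:         (B, C, H, W) pixel features from the backbone.
        region_logits: (B, K, H, W) coarse segmentation used as soft object regions.
        Returns augmented features of shape (B, 2C, H, W).
        """
        B, C, H, W = feats.shape
        K = region_logits.shape[1]
        x = feats.view(B, C, -1)                                    # (B, C, HW)
        # (i) soft object regions: normalize the coarse prediction over all pixels.
        regions = F.softmax(region_logits.view(B, K, -1), dim=2)    # (B, K, HW)
        # (ii) object region representations: region-weighted sums of pixel features.
        region_feats = torch.bmm(regions, x.transpose(1, 2))        # (B, K, C)
        # (iii) object-contextual representations: attend from each pixel to the K regions.
        sim = torch.bmm(x.transpose(1, 2), region_feats.transpose(1, 2))  # (B, HW, K)
        sim = F.softmax((C ** -0.5) * sim, dim=2)
        context = torch.bmm(sim, region_feats)                      # (B, HW, C)
        context = context.transpose(1, 2).view(B, C, H, W)
        # Augmented representation: concatenate the context with the original features.
        return torch.cat([context, feats], dim=1)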

SegFix

Fig.2 - Illustrating the SegFix framework. In the training stage, we first send the input image into a backbone to predict a feature map. Then we apply a boundary branch to predict a binary boundary map and a direction branch to predict a direction map, which is masked with the binary boundary map. We apply a boundary loss and a direction loss on the predicted boundary map and direction map, respectively. In the testing stage, we first convert the direction map into an offset map and then refine the segmentation results of any existing method according to the offset map.
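
The test-time refinement step of SegFix is simple enough to sketch in code. The snippet below is an illustration under our own assumptions, not the repository's exact implementation: it assumes the direction map has already been converted into per-pixel integer offsets, and each boundary pixel simply copies the label of the interior pixel its offset points to.

    import torch

    def segfix_refine(coarse_label, offset, boundary_mask):
        """Illustrative SegFix-style refinement (not the repo's exact code).

        coarse_label:  (H, W) long tensor, segmentation produced by any existing method.
        offset:        (2, H, W) long tensor of (dy, dx) offsets derived from the direction map.
        boundary_mask: (H, W) bool tensor, True on predicted boundary pixels.
        """
        H, W = coarse_label.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        # Move each boundary pixel along its predicted offset toward the object interior.
        src_y = (ys + offset[0]).clamp(0, H - 1).long()
        src_x = (xs + offset[1]).clamp(0, W - 1).long()
        refined = coarse_label.clone()
        refined[boundary_mask] = coarse_label[src_y[boundary_mask], src_x[boundary_mask]]
        return refined

The released implementation also handles the conversion from direction bins to offsets and their scaling; the sketch only conveys the label-transfer idea.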

Citation

Please consider citing our work if you find it helpful:

@article{YuanW18,
  title={OCNet: Object context network for scene parsing},
  author={Yuhui Yuan and Jingdong Wang},
  journal={arXiv preprint arXiv:1809.00916},
  year={2018}
}

@article{HuangYGZCW19,
  title={Interlaced Sparse Self-Attention for Semantic Segmentation},
  author={Lang Huang and Yuhui Yuan and Jianyuan Guo and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1907.12273},
  year={2019}
}

@article{YuanCW20,
  title={Object-Contextual Representations for Semantic Segmentation},
  author={Yuhui Yuan and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1909.11065},
  year={2020}
}

@article{YuanXCW20,
  title={SegFix: Model-Agnostic Boundary Refinement for Segmentation},
  author={Yuhui Yuan and Jingyi Xie and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2007.04269},
  year={2020}
}

@article{YuanFHZCW21,
  title={HRT: High-Resolution Transformer for Dense Prediction},
  author={Yuhui Yuan and Rao Fu and Lang Huang and Weihong Lin and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2110.09408},
  year={2021}
}

Acknowledgment

This project is developed based on segbox.pytorch; its author, donnyyou, retains the copyright of the reproduced DeepLabv3- and PSPNet-related code.

Comments
  • questions/issues on training segfix with own data

    I was excited to try segfix training on my own data.

    I could produce the .mat files for the train and val data. Training works with run_h_48_d_4_segfix.sh and the loss converges, but on validation the IoU is more or less random (I have 2 classes):

    2020-08-20 10:47:41,932 INFO [base.py, 32] Result for mask
    2020-08-20 10:47:41,932 INFO [base.py, 48] Mean IOU: 0.7853758111568029
    2020-08-20 10:47:41,933 INFO [base.py, 49] Pixel ACC: 0.9692584678389714
    2020-08-20 10:47:41,933 INFO [base.py, 54] F1 Score: 0.7523384841507573 Precision: 0.7928424176432377 Recall: 0.7157718538603068
    2020-08-20 10:47:41,933 INFO [base.py, 32] Result for dir (mask)
    2020-08-20 10:47:41,933 INFO [base.py, 48] Mean IOU: 0.5390945167184129
    2020-08-20 10:47:41,933 INFO [base.py, 49] Pixel ACC: 0.7248566725097775
    2020-08-20 10:47:41,933 INFO [base.py, 32] Result for dir (GT)
    2020-08-20 10:47:41,934 INFO [base.py, 48] Mean IOU: 0.41990305666871003
    2020-08-20 10:47:41,934 INFO [base.py, 49] Pixel ACC: 0.6007717101395131

    To investigate the issue further, I tried to analyse the predicted .mat files with bash scripts/cityscapes/segfix/run_h_48_d_4_segfix.sh segfix_pred_val 1.

    with "input_size": [640, 480] this exception happens: File "/home/rsa-key-20190908/openseg.pytorch/lib/datasets/tools/collate.py", line 108, in collate assert pad_height >= 0 and pad_width >= 0 after fixing it more or less, iv got similar results as val during training They were around 3Kb instead of ~70kb btw, it took "input_size": [640, 480] config from "test": { leave instead "val": {

    Is it possible that validation only works with "input_size": [2048, 1024]? Can you give me any hints on how to manually verify the correctness of the .mat files? Currently I'm diving into 2007.04269.pdf and the code of dt_offset_generator.py to get an understanding.

    opened by marcok 18
  • How to prepare the Cityscapes data

    Hello. I'm trying to reproduce your Cityscapes results for our BMVC paper.

    After following the data directory format in the config.profile file and running bash ./scripts/cityscapes/hrnet/run_h_48_d_4_ocr.sh val 1, I get this error:

    ERROR: Found no prediction for ground truth /home/arash/openseg.pytorch/dataset/cityscapes/val/label/munster_000027_000019_gtFine_labelIds.png

    Could you explain how you prepared the data? Thanks.

    opened by arashash 15
  • About the json file: what should the input size and crop size be based on?

    My dataset's image size is 256×256, and I don't know how to modify the json file.

    {
        "dataset": "BDCI",
        "method": "fcn_segmentor",
        "data": {
          "image_tool": "cv2",
          "input_mode": "BGR",
          "num_classes": 7,
          "label_list": [0, 1, 2, 3, 4, 5, 6, 255],
          "data_dir": "~/DataSet/BDCI",
          "workers": 8
        },
       "train": {
          "batch_size": 16,
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad",
            "pad_mode": "random"
          }
        },
        "val": {
          "batch_size": 4,
          "mode": "ss_test",
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad"
          }
        },
        "test": {
          "batch_size": 4,
          "mode": "ss_test",
          "out_dir": "~/DataSet/BDCI/seg_result/BDCI",
          "data_transformer": {
            "size_mode": "fix_size",
            "input_size": [256, 256],
            "align_method": "only_pad"
          }
        },
        "train_trans": {
          "trans_seq": ["random_resize", "random_crop", "random_hflip", "random_brightness"],
          "random_brightness": {
            "ratio": 1.0,
            "shift_value": 10
          },
          "random_hflip": {
            "ratio": 0.5,
            "swap_pair": []
          },
          "random_resize": {
            "ratio": 1.0,
            "method": "random",
            "scale_range": [0.5, 2.0],
            "aspect_range": [0.9, 1.1]
          },
          "random_crop":{
            "ratio": 1.0,
            "crop_size": [256, 256],
            "method": "random",
            "allow_outside_center": false
          }
        },
        "val_trans": {
          "trans_seq": []
        },
        "normalize": {
          "div_value": 255.0,
          "mean_value": [0.485, 0.456, 0.406],
          "mean": [0.485, 0.456, 0.406],
          "std": [0.229, 0.224, 0.225]
        },
        "checkpoints": {
          "checkpoints_name": "fs_baseocnet_BDCI_seg",
          "checkpoints_dir": "./checkpoints/BDCI",
          "save_iters": 500
        },
        "network":{
          "backbone": "deepbase_resnet101_dilated8",
          "multi_grid": [1, 1, 1],
          "model_name": "base_ocnet",
          "bn_type": "inplace_abn",
          "stride": 8,
          "factors": [[8, 8]],
          "loss_weights": {
            "corr_loss": 0.01,
            "aux_loss": 0.4,
            "seg_loss": 1.0
          }
        },
        "logging": {
          "logfile_level": "info",
          "stdout_level": "info",
          "log_file": "./log/BDCI/fs_baseocnet_BDCI_seg.log",
          "log_format": "%(asctime)s %(levelname)-7s %(message)s",
          "rewrite": true
        },
        "lr": {
          "base_lr": 0.01,
          "metric": "iters",
          "lr_policy": "lambda_poly",
          "step": {
            "gamma": 0.5,
            "step_size": 100
          }
        },
        "solver": {
          "display_iter": 10,
          "test_interval": 1000,
          "max_iters": 40000
        },
        "optim": {
          "optim_method": "sgd",
          "adam": {
            "betas": [0.9, 0.999],
            "eps": 1e-08,
            "weight_decay": 0.0001
          },
          "sgd": {
            "weight_decay": 0.0005,
            "momentum": 0.9,
            "nesterov": false
          }
        },
        "loss": {
          "loss_type": "fs_auxce_loss",
          "params": {
            "ce_weight": [0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754,
                          1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037,
                          1.0865, 1.0955, 1.0865, 1.1529, 1.0507],
            "ce_reduction": "elementwise_mean",
            "ce_ignore_index": -1,
            "ohem_minkeep": 100000,
            "ohem_thresh": 0.9
          }
        }
    }
    
    

    The above is my json file. When I try to train my dataset, I get a size-mismatch error like the ones shown in the attached screenshots, even though the environment requirements should be satisfied (see screenshot).

    This is my val error (screenshot), my config.profile (screenshot), and screenshots of my log file.

    opened by ShiMinghao0208 12
  • Problem occurred in hrnet_backbone.py

    Dear Author,

    Thank you for your excellent work, but some errors are reported for backbones.

    checkpoint names:
    checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth
    
    
    commands:
    (for HRNet-W48:)
    python -u main.py --configs configs/cityscapes/H_48_D_4.json --drop_last y --backbone hrnet48 --model_name hrnet_w48_ocr --checkpoints_name hrnet_w48_ocr_1 --phase test --gpu 0 --resume ./checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth --loss_type fs_auxce_loss --test_dir input_images --out_dir output_images
    

    Error messages:

    2020-07-15 21:00:10,470 INFO [module_runner.py, 44] BN Type is inplace_abn.
    Traceback (most recent call last):
      File "main.py", line 214, in <module>
        model = Tester(configer)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 69, in __init__
        self._init_model()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 72, in _init_model
        self.seg_net = self.model_manager.semantic_segmentor()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/model_manager.py", line 81, in semantic_segmentor
        model = SEG_MODEL_DICT[model_name](self.configer)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/nets/hrnet.py", line 105, in __init__
        self.backbone = BackboneSelector(configer).get_backbone()
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/backbone_selector.py", line 34, in get_backbone
        model = HRNetBackbone(self.configer)(**params)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 598, in __call__
        bn_momentum=0.1)
      File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 307, in __init__
        self.bn1 = ModuleHelper.BatchNorm2d(bn_type=bn_type)(64, momentum=bn_momentum)
    TypeError: 'NoneType' object is not callable

    Could you please tell me what is wrong? Thank you.

    opened by daixiaolei623 12
  • Problem with OCR similarity map

    Thanks for sharing this wonderful work with us!

    I have a problem with the computation of the similarity map in the OCR module. Line 131 of lib/models/seg_hrnet_ocr.py reads sim_map = (self.key_channels**-.5) * sim_map. Why is sim_map multiplied by a small value (self.key_channels**-.5) before the softmax?

    During validation, I printed the final result of sim_map and found that all values in this map are very close to 0.0526 (equal to 1/19), which means the probabilities of a pixel i belonging to the different classes k are almost equal. Doesn't this contradict the assumption that the similarity map should represent the relation between the i-th pixel and the k-th object region?

    #######################

    Your former answer:

    • Multiplying by the small value follows the original self-attention scheme; please refer to the last paragraph of Section 3.2.1 in the paper "Attention Is All You Need". However, we find this small factor does not influence the segmentation performance (see the sketch after this list).

    • As for the final result of the sim_map, we do not understand why all the values are almost the same in your case. Which checkpoint are you testing? How is the performance of the used checkpoint? Please provide more information so that we can help you.
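
    For reference, the scaling in question is the standard 1/sqrt(d_k) factor from scaled dot-product attention. A minimal sketch (an illustration, not the repository's exact code) of how such a sim_map is formed:

        import torch
        import torch.nn.functional as F

        def scaled_similarity(query, key):
            # query: (B, HW, C_key) pixel queries; key: (B, K, C_key) object-region keys.
            key_channels = query.shape[-1]
            sim_map = torch.bmm(query, key.transpose(1, 2))   # (B, HW, K) raw dot products
            sim_map = (key_channels ** -0.5) * sim_map        # temper the logits, as in "Attention Is All You Need"
            return F.softmax(sim_map, dim=-1)                 # each row sums to 1 over the K regions

    Note that whenever the K logits at a pixel are nearly equal, the softmax output is necessarily close to the uniform value 1/K (1/19 ≈ 0.0526 for Cityscapes), which is consistent with the values reported in this thread.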

    #########################

    Thanks a lot for your reply! I used the checkpoint posted on HRNet-OCR. The segmentation performance is good and the mIoU is 81.6, too (see screenshot). During inference, I printed 10 random rows of the sim_map (see screenshot): all values in this map are very close to 0.0526 (equal to 1/19).

    opened by Mayy1994 11
  • SegFix paper link

    Hi!

    Thanks for your nice work. It is really impressive. I'm interested in the SegFix algorithm. Could you send me a copy of the paper "SegFix: Model-Agnostic Boundary Refinement for Segmentation"? I cannot find it on arXiv.

    Best, David

    opened by davidblom603 8
  • The performance of resnet101-ocr

    Hi, I want to reproduce the results of the OCR paper, especially for PASCAL-Context and ADE20K. Should I use the HRNet-OCR repo or this repo? In fact, I followed the default settings of HRNet-OCR and just replaced HRNet with ResNet-101, but I cannot reproduce the results on PASCAL-Context (54.8% mIoU) and ADE20K (45.3% mIoU).

    opened by ydhongHIT 7
  • Test sets results

    For comparison in our paper, we are looking for the detailed test set results (class IoUs) of these prediction files that you shared: https://drive.google.com/drive/folders/156vMABydr7btdPDBU6b9J-e0jJHuPI73 Do you happen to have a snapshot of the submission results obtained with these predictions? Thank you for your consideration.

    opened by arashash 7
  • class-id mapping for mapillary dataset

    I cannot find any class-id mapping in the README or the config file, e.g., road in the ground truth has label 0, traffic light is 1, unlabeled is 255, etc.

    Could you provide the mapping for v1.2 of Mapillary?

    opened by lingorX 6
  • When I use H_SEGFIX.json to train the Cityscapes dataset I meet this error:

    In loss_helper.py, in the calculation of the loss function, the input consists of two tensors [1,8,128,128] / [1,2,128,128], while the corresponding labels are three tensors: [1,512,512], [1,512,512], [1,512,512].

    targets = targets_.clone().unsqueeze(1).float()
    AttributeError: 'list' object has no attribute 'clone'

    opened by qingchengboy 6
  • How to draw pictures

    (three screenshots attached) Coarse label map, offset map, refined label map, distance map, direction map and the last one: how do you draw them? Which figures were made with drawing software (and which software), and which were generated by a program? Can the program be open-sourced? I want to apply Figure 2 and Figure 3 to my own grayscale maps; if the program can be open-sourced, could that happen in the near future? Thank you very much.

    opened by Klaviersonate 5
  • need *.mat when I want to train segfix on my own dataset

    I want to train SegFix on my own dataset with the script "scripts/cityscapes/segfix/run_hx_20_d_2_segfix_trainval.sh", but it seems that it needs *.mat files. How can I solve this problem? Thank you.

    (screenshot attached)

    opened by jhyin12 0
  • preprocess scripts for LIP

    Thanks for your work!

    I downloaded the LIP dataset from here, and got the dataset folder structure below:

    .
    |-- ATR
    |   `-- humanparsing
    |       |-- JPEGImages
    |       `-- SegmentationClassAug
    |-- CIHP
    |   `-- instance-level_human_parsing
    |       |-- Testing
    |       |   `-- Images
    |       |-- Training
    |       |   |-- Categories
    |       |   |-- Category_ids
    |       |   |-- Human
    |       |   |-- Human_ids
    |       |   |-- Images
    |       |   |-- Instance_ids
    |       |   `-- Instances
    |       `-- Validation
    |           |-- Categories
    |           |-- Category_ids
    |           |-- Human
    |           |-- Human_ids
    |           |-- Images
    |           |-- Instance_ids
    |           `-- Instances
    `-- LIP
    

    That is different from the structure you mentioned in GETTING_STARTED.md:

    
    ├── lip
    │   ├── atr
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    │   ├── cihp
    │   │   ├── image
    │   │   └── label
    │   ├── train
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    │   ├── val
    │   │   ├── edge
    │   │   ├── image
    │   │   └── label
    
    

    Could you please provide the scripts to preprocess the LIP dataset? Thanks a lot!

    opened by shouyanxiang 0
  • Result of refinement by SegFix on HRNet / HRNet-Semantic-Segmentation open source

    From my understanding, the two open-source repositories (HRNet-Semantic-Segmentation & openseg.pytorch) don't differ greatly.

    So I applied SegFix to results generated from HRNet-Semantic-Segmentation. The original mIoU is shown in the attached screenshot.

    Naturally, I assumed that the final mIoU after applying SegFix would increase. However, that's not the case: the mIoU actually decreased to 80.29.

    I applied SegFix the way described in MODEL_ZOO.md (see the attached screenshot).

    Is this the correct way to apply SegFix? Or is there any other way to apply SegFix?

    opened by Jonnyboyyyy 0
  • question about flops

    How do you calculate the FLOPs in Figure 4? I want to calculate them for an input size of 512 × 97 × 97; I used the underlined formula, but the result is much larger than expected (see the attached screenshot).

    opened by HaoGuo98 0
Owner: openseg-group