Datasets, Transforms and Models specific to Computer Vision


Installation

  • First install the nightly version of OneFlow
python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
  • Then install the latest stable release of flowvision
pip install flowvision==0.0.4
  • Or install the nightly release of flowvision
pip install -i https://test.pypi.org/simple/ flowvision==0.0.4
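  • Optionally, verify the installation (a minimal check; it assumes both packages expose __version__)
python3 -c "import oneflow, flowvision; print(oneflow.__version__, flowvision.__version__)"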

Supported Models

All of the supported models can be found in our model summary page here.

Usage

Quick Start
  • List supported models
from flowvision import ModelCreator
ModelCreator.model_table()
  • Search supported models by wildcard
from flowvision import ModelCreator
ModelCreator.model_table("*vit*", pretrained=True)
ModelCreator.model_table("*vit*", pretrained=False)
ModelCreator.model_table('alexnet')
  • Create a model with ModelCreator
from flowvision import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)
ModelCreator
  • Create a model in a simple way
from flowvision.models import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)

The pretrained weights will be saved to ./checkpoints.
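
Once created, the model behaves like a regular OneFlow module. A minimal inference sketch (the random input and the eval/no_grad calls assume OneFlow's PyTorch-aligned API):

import oneflow as flow
from flowvision.models import ModelCreator

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()

# forward a random 224x224 RGB batch through the network
with flow.no_grad():
    logits = model(flow.randn(1, 3, 224, 224))
print(logits.shape)  # an ImageNet classifier outputs 1000 logits per image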

  • Supported model table
from flowvision.models import ModelCreator
ModelCreator.model_table()
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘

Show all of the supported models in table form.

  • List models with pretrained weights
from flowvision.models import ModelCreator
ModelCreator.model_table(pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*')
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models with pretrained weights by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*', pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
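
flowvision also ships torchvision-style transforms that pair naturally with the created models. A minimal preprocessing sketch (it assumes the torchvision-aligned transforms API and standard ImageNet statistics; cat.jpg is a hypothetical image path):

from PIL import Image
import oneflow as flow
from flowvision import transforms
from flowvision.models import ModelCreator

# standard ImageNet preprocessing pipeline
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add a batch dimension
with flow.no_grad():
    pred = model(img).argmax(dim=-1)
print(pred)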

Model Zoo

All tests were conducted under the same settings; please refer to the model page here for more details.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Comments
  • Support Poolformer

    • [x] build poolformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison (the OneFlow version is too slow; to be resolved)
    New Features Priority: 0 
    opened by thinksoso 16
  • delete flowvision.models._util

    1. Under flowvision.models there are both _utils.py and utils.py.
    2. The IntermediateLayerGetter method is duplicated in flowvision.models._utils.py and flowvision.models.segmentation.seg_utils.py.

    Therefore, delete flowvision.models._utils.py and, for now, reference flowvision.models.segmentation.seg_utils.py instead.
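
    For context, a sketch of how an IntermediateLayerGetter-style wrapper is typically used (following the torchvision-style signature; the import path is the seg_utils location mentioned above, and the layer names assume a ResNet backbone):

    import flowvision
    from flowvision.models.segmentation.seg_utils import IntermediateLayerGetter

    backbone = flowvision.models.resnet50(pretrained=False)
    # map backbone sub-module names to the keys of the returned feature dict
    features = IntermediateLayerGetter(backbone, return_layers={"layer3": "aux", "layer4": "out"})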

    Priority: 1 Improvements 
    opened by kaijieshi7 9
  • pickle module: EOFError Ran out of input

    When I try to use the vit_tiny_patch16_224 model from the flowvision module, it raises EOFError: Ran out of input. The environment is a 3090 GPU on the OneFlow training platform: oneflow-0.7.0+torch-1.8.1-cu11.2-cudnn8

    opened by WanShaw 8
  • Support UniFormer

    • [x] build uniformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo small_plus
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by thinksoso 6
  • add LeViT

    • [x] build model
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update readme
    • [x] update changelog
    • [x] pytorch speed comparison
    opened by kaijieshi7 5
  • Error when extracting the pretrained weight archive

    When using a model from models, e.g. model = vgg11(pretrained=True), the zip weight file downloads successfully, but an error occurs during extraction, interrupting it and leaving the parameter files incomplete. Extracting the downloaded zip manually works fine. Multiple models have the same problem.

    Traceback (most recent call last):
      File "temp.py", line 77, in <module>
        model = vgg11(pretrained=True)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 182, in vgg11
        return _vgg("vgg11", "A", False, pretrained, progress, **kwargs)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 156, in _vgg
        state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 146, in load_state_dict_from_url
        return _legacy_zip_load(cached_file, model_dir, map_location, delete_file)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 78, in _legacy_zip_load
        f.extractall(model_dir)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1636, in extractall
        self._extract_member(zipinfo, path, pwd)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1691, in _extract_member
        shutil.copyfileobj(source, target)
      File "/usr/local/miniconda3/lib/python3.7/shutil.py", line 79, in copyfileobj
        buf = fsrc.read(length)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 930, in read
        data = self._read1(n)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1006, in _read1
        data = self._decompressor.decompress(data, n)
    zlib.error: Error -2 while decompressing data: inconsistent stream state
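
    As the report notes, extracting the downloaded zip by hand works around the error. A minimal sketch of that workaround (the archive path under ./checkpoints is an assumption and depends on where the file was cached):

    import zipfile

    archive = "./checkpoints/vgg11.zip"   # hypothetical path to the downloaded weight archive
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("./checkpoints")    # extract next to the archive so flowvision can find it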
    
    opened by Alive1024 5
  • module 'flowvision.models' has no attribute 'face_recognition'

    Hello, I need a way to create an iresnet model. I saw in the documentation that flowvision has an iresnet model, but when I import and use resnest50 = flowvision.models.face_recognition.iresnest50(pretrained=False, progress=True), Python says module 'flowvision.models' has no attribute 'face_recognition'. What could be the problem?
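
    If the submodule exists in the installed version but is simply not re-exported from flowvision.models, importing it explicitly may help. A hedged sketch (the flowvision.models.face_recognition path and the iresnet50 entry point are assumptions based on the documentation referenced above):

    from flowvision.models.face_recognition import iresnet50  # assumed module path and factory name

    model = iresnet50(pretrained=False, progress=True)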

    good first issue Bug Fixes 
    opened by PhilippShemetov 4
  • add model: regionvit

    • [x] build model (the F.unfold operator is not supported: https://github.com/Oneflow-Inc/oneflow/issues/3785)
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by kaijieshi7 4
  • Add speed test script

    How to run the script:

    cd ci/check
    bash run_speed_test.sh
    

    The results are written to the result file in the current directory.

    Issues found so far with the speed test script

    The following models crash when run with import torch as flow:

    • vit
    • conv_mixer
    • crossformer
    • cswin
    • mlp_mixer
    • pvt
    • res_mlp
    • vgg

    The following also raise errors on their own when the input is 224x224:

    • efficientnet
    • res2net
    Priority: 0 Improvements Bug Fixes 
    opened by Ldpe2G 4
  • add useful model utils

    TODO

    Model relative

    • [x] freeze_bn
    • [ ] unfreeze_bn
    • [x] ActivationHook
    • [ ] freeze_unfreeze_fn

    Others

    • [x] random seed

    Test

    • [x] test freeze_bn
    • [ ] test activation_hook
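
    For the freeze_bn item above, a minimal sketch of what such a utility typically looks like (it assumes OneFlow's torch-aligned nn.BatchNorm modules; this is not necessarily the implementation that landed):

    import oneflow.nn as nn

    def freeze_bn(model: nn.Module) -> nn.Module:
        """Put every BatchNorm layer in eval mode and stop updating its parameters."""
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                m.eval()
                for p in m.parameters():
                    p.requires_grad = False
        return model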
    New Features Priority: 2 
    opened by rentainhe 4
  • bug: module 'oneflow.nn' has no attribute 'ReLU'

    oneflow/nn/__init__.py

    from oneflow.python.ops.math_ops import fused_scale_tril
    from oneflow.python.ops.math_ops import fused_scale_tril_softmax_dropout
    from oneflow.python.ops.math_ops import relu
    from oneflow.python.ops.math_ops import tril

    Should this be as ReLU? Or did I install the wrong OneFlow version? flowvision-0.1.0, oneflow==0.7.0+cu102

    bug 
    opened by zhanggj821 3
  • The flow.div operator is not aligned with torch.div


    import oneflow as flow
    import torch
    import numpy as np
    
    a = np.random.randn(3,3).astype(np.float32)
    
    b = 2
    
    torch_a = torch.from_numpy(a)
    flow_a = flow.from_numpy(a)
    
    print(torch.div(torch_a,b,rounding_mode='floor'))
    print(flow.div(flow_a,b).floor())
    print(flow.div(flow_a,b,rounding_mode='floor'))
    
    opened by triple-Mu 0
  • ResNet-50 training

    Reproduce ResNet-50 training and align accuracy, with reference to the existing project under vision.

    References

    Main goals

    • [ ] 2022.05.11 - 2022.05.12: get familiar with the classification training code under vision, set up the dataset, and get it running.
    • [ ] 2022.05.12 - 2022.05.20: reproduce the ResNet-50 training code against timm and pytorch, align the training settings, test, and train with multiple GPUs.
    • [ ] 2022.05.21 - 2022.05.27: compare accuracy gaps and tune until the accuracy is reproduced, then replace the trained weights with the oneflow version.

    Project lead: 林松. Expected completion date: 2022.05.27

    Related PRs

    The corresponding PRs are listed here; since one issue may map to multiple PRs, a table is used.

    | PR | Author | Reviewer | Date |
    | ---- | ---- | -------- | ---- |
    | Initial code upload | 林松 | zzzzzzz | 20220510 |

    opened by triple-Mu 0
  • Vision effectiveness validation - completing the training projects under Vision

    Vision already has a reference project that ports the Swin-T training code for training models under Vision, but the accuracy of most models in vision cannot yet be reproduced reliably. This issue therefore starts a project to complete the training side: reproduce the accuracy of the models implemented in vision, and gradually replace the ported weights with weights trained by OneFlow itself. This is a preliminary plan and needs 2-3 interns to complete:

    Reference projects:

    • https://github.com/rwightman/pytorch-image-models
    • https://github.com/microsoft/Swin-Transformer

    Training tasks and the first batch of models whose accuracy needs to be reproduced:

    • Complete this project under Vision: https://github.com/Oneflow-Inc/vision/tree/main/projects/classification, and get familiar with how it is used (basically the same as Swin-T)
    • The first-stage models whose accuracy needs to be reproduced under vision, together with the related papers, are listed in the table below:

    | Model | Paper | Assignee | PR |
    |:----:|:----:|:----:|:----:|
    | ResNet50 | ResNet strikes back: An improved training procedure in timm | 林松 | |
    | DeiT | Training data-efficient image transformers & distillation through attention | | |
    | Swin-Transformer | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | 林德铝 | |
    | DeiT III | DeiT III: Revenge of ViT | | |

    • Hardware requirements: an 8-GPU V100 machine; being able to fit a per-GPU batch size of 256 is enough
    opened by rentainhe 0
Releases(v0.1.0)
  • v0.1.0(Feb 17, 2022)

    Flowvision V0.1.0 Stable Release

    New Features

    • Support trunc_normal_ in flowvision.layers.weight_init #92
    • Support DeiT model #115
    • Support PolyLRScheduler and TanhLRScheduler in flowvision.scheduler #85
    • Add resmlp_12_224_dino model and pretrained weight #128
    • Support ConvNeXt model #93
    • Add ReXNet weights #132

    Bug Fixes

    • Fix F.normalize usage in SSD #116
    • Fix bug in EfficientNet and Res2Net #122
    • Fix incorrect pretrained weight usage in vit_small_patch32_384 and res2net50_48w_2s #128

    Improvements

    • Refactor trunc_normal_ and linspace usage in Swin-T, Cross-Former, PVT and CSWin models #100
    • Refactor Vision Transformer model #115
    • Refine flowvision.models.ModelCreator to support the ModelCreator.model_list function #123
    • Refactor README #124
    • Refine load_state_dict_from_url in flowvision.models.utils to support downloading pretrained weights to the cache dir ~/.oneflow/flowvision_cache #127
    • Rebuild a cleaner model zoo and test all the models with pretrained weights released in flowvision #128
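
    As an illustration of the cache-dir change above, calling the download helper directly might look like this (a sketch; the URL is hypothetical, and only the url/progress parameters visible elsewhere on this page are assumed):

    from flowvision.models.utils import load_state_dict_from_url

    # weights are cached under ~/.oneflow/flowvision_cache after this release
    state_dict = load_state_dict_from_url("https://example.com/alexnet_oneflow.zip", progress=True)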

    Docs Update

    • Update Vision Transformer docs #115
    • Add Getting Started docs #124
    • Add resmlp_12_224_dino docs #128
    • Fix VGG docs bug #128
    • Add ConvNeXt docs #93

    Contributors

    A total of 5 developers contributed to this release. Thanks @rentainhe, @simonJJJ, @kaijieshi7, @lixiang007666, @Ldpe2G
