OpenMMLab Image and Video Editing Toolbox

Overview

Introduction


MMEditing is an open source image and video editing toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.3 to 1.6.

Documentation: https://mmediting.readthedocs.io/en/latest/.

Major features

  • Modular design

    We decompose the editing framework into different components and one can easily construct a customized editor framework by combining different modules.

  • Support of multiple tasks in editing

    The toolbox directly supports popular and contemporary inpainting, matting, super-resolution and generation tasks.

  • State of the art

    The toolbox provides state-of-the-art methods in inpainting/matting/super-resolution/generation.

License

This project is released under the Apache 2.0 license.

Changelog

v0.6.0 was released on 2021-03-31.

Note that MMSR has been merged into this repo as a part of MMEditing. With the elaborate design of the new framework and careful implementation, we hope MMEditing will provide a better experience.

Benchmark and model zoo

Please refer to model_zoo for more details.

Installation

Please refer to install.md for installation.

Get Started

Please see getting_started.md for the basic usage of MMEditing.

Contributing

We appreciate all contributions to improve MMEditing. Please refer to CONTRIBUTING.md in MMDetection for the contributing guideline.

Acknowledgement

MMEditing is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We wish the toolbox and benchmark to serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new methods.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
Comments
  • [Feature] Add metric module for FID and KID


    Related to https://github.com/open-mmlab/mmediting/issues/772

    Unlike common metrics such as PSNR or SSIM, some metrics measure the distance between the distributions of the predictions and the GTs. To compute them, feature vectors are extracted from each image and stored.

    In this implementation, feature vectors are extracted and stored at each step in Dataset.evaluate, and the metrics are then computed in EvalIterHook using these feature vectors.

    Any advice on the implementation is appreciated.
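
    For context, once the feature vectors are stored, FID reduces to the Fréchet distance between two Gaussians fitted to the real and fake features. A minimal NumPy/SciPy sketch of that final step (for illustration only, not the exact code in this PR):

    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_real, feats_fake):
        # feats_*: arrays of shape (N, D) holding the stored feature vectors
        mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        sigma_r = np.cov(feats_real, rowvar=False)
        sigma_f = np.cov(feats_fake, rowvar=False)
        # matrix square root of the covariance product; the tiny imaginary
        # residue from numerical error is discarded
        covmean, _ = linalg.sqrtm(sigma_r @ sigma_f, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        diff = mu_r - mu_f
        return float(diff @ diff + np.trace(sigma_r + sigma_f - 2 * covmean))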

    Final test results

    Since SRFolderDataset only uses GT/LQ pairs, I computed and compared the FID and KID results using a fixed set of 50k fake images, matched in length to the real images.

    |                         | FID       | KID       |
    | ----------------------- | --------- | --------- |
    | StyleGAN                | 6.6291332 | 0.0022586 |
    | Ours (StyleGAN weights) | 6.6251227 | 0.0021804 |
    | Ours (pytorch weights)  | 6.6007636 | 0.0021755 |

    TODO

    • [x] Unit test for functionality
    • [x] Verify numerical correctness
    • [x] Documentation (Use Cases)
    • [x] Final test
    status/WIP priority/P1 kind/feature 
    opened by KKIEEK 30
  • [Feature] Support various scales in RRDBNet


    Motivation

    1. The current RRDBNet only supports 4x upsampling.
    2. The parameter "radius" in the config file of real_esrgan is wrong.

    Modification

    • Add a scale parameter to rrdb_net (see the sketch below)
    • Edit the config file of real_esrgan to support various scales
    • Edit the config file of real_esrgan to fix some errors (radius=50 -> kernel_size=51)
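
    A sketch of the idea (assumed names, not the exact PR code): the fixed 4x tail of RRDBNet becomes conditional on a scale argument.

    import torch.nn as nn
    import torch.nn.functional as F

    class UpsampleTail(nn.Module):
        """Scale-aware upsampling tail; this sketch supports x1/x2/x4."""

        def __init__(self, mid_channels=64, scale=4):
            super().__init__()
            assert scale in (1, 2, 4)
            self.scale = scale
            self.conv_up1 = nn.Conv2d(mid_channels, mid_channels, 3, 1, 1)
            self.conv_up2 = nn.Conv2d(mid_channels, mid_channels, 3, 1, 1)
            self.lrelu = nn.LeakyReLU(0.2, inplace=True)

        def forward(self, x):
            if self.scale >= 2:  # first x2 step
                x = self.lrelu(self.conv_up1(
                    F.interpolate(x, scale_factor=2, mode='nearest')))
            if self.scale == 4:  # second x2 step
                x = self.lrelu(self.conv_up2(
                    F.interpolate(x, scale_factor=2, mode='nearest')))
            return x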
    opened by anse3832 19
  • BasicVSR++: reproduce ntire decompression results on track3


    Hi, thanks for your great work. I am using BasicVSR++ to reproduce the NTIRE decompression results on track 3 with the trained model you provided and the official test set. The settings are basicvsr_plusplus_c128n25_600k_ntire_decompress_track3.py and basicvsr_plusplus_c128n25_ntire_decompress_track3_20210304_6daf4a40.pth.

    After testing, the Eval-PSNR is 30.0519 and the Eval-lq_PSNR is 28.3367, so I only gain a 1.71 dB improvement on track 3. When testing, I set num_input_frame to the length of each sequence so that the full video sequence is used as input.
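
    For reference, this is roughly the test dataset setting I used (a sketch; the field names are assumed from the BasicVSR-style configs and the local paths are mine):

    data = dict(
        test=dict(
            type='SRFolderMultipleGTDataset',
            lq_folder='data/ntire21_track3/LQ',  # assumed local path
            gt_folder='data/ntire21_track3/GT',  # assumed local path
            pipeline=test_pipeline,
            scale=1,
            num_input_frames=None))  # None: feed each full sequence as input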

    Can you give some advice?

    opened by sxd0071 17
  • Error occurred during training BasicVSR


    During training, the following error occurred. I ran it with the official config configs/restorers/basicvsr/basicvsr_reds4.py.

    RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument "find_unused_parameters=True" to "torch.nn.parallel.DistributedDataParallel"; (2) making sure all "forward" function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's "forward" function. Please include the loss function and the structure of the return value of "forward" of your module when reporting this issue (e.g. list, dict, iterable).

    Though I solved it by adding find_unused_parameters=True at line 128 of apis/train.py and removing self.generator.find_unused_parameters = False at line 84 of models/restorers/basicvsr.py, I still don't know why the error happened.

    In models/restorers/basicvsr.py, the author sets self.generator.find_unused_parameters = False, which looks like it should have no influence on the program, since it is never used as a parameter anywhere. However, it does cause the error mentioned above.

    opened by NK-CS-ZZL 17
  • Progress of Chinese Documentation


    We are building the Chinese documentation now; translation PRs from the community are welcome.

    To make the community fully aware of the progress, we list it here. Please feel free to leave a message and create a PR if you are willing to translate any of the documents.

    • [x] docs/changelog.md @kai422
    • [x] docs/config.md @AlexZou14
    • [x] docs/config_generation.md @ckkelvinchan
    • [x] docs/config_inpainting.md @ckkelvinchan
    • [x] docs/config_matting.md @ckkelvinchan
    • [x] docs/config_restoration.md @AlexZou14
    • [x] docs/demo.md @ckkelvinchan
    • [x] docs/faq.md @Yshuo-Li
    • [x] docs/getting_started.md @nbei
    • [x] docs/install.md @LiUzHiAn
    • [x] docs/notes.md @Yshuo-Li
    • [x] docs/quick_run.md @Yshuo-Li
    • [x] docs/tools_scripts.md @ckkelvinchan
    • [x] configs/restorers/*/README.md (configs of 14 restorers) @ckkelvinchan
    • [x] configs/mattors/*/README.md (configs of 3 mattors) @Yshuo-Li
    • [x] configs/inpainting/*/README.md (configs of 4 inpainting models) @Yshuo-Li
    • [x] configs/synthesizers/*/README.md (configs of 2 synthesizers) @Yshuo-Li
    • [x] tools/data/super-resolution/*/README.md (4 super-resolution datasets) @LiUzHiAn
    • [x] tools/data/generation/*/README.md (2 generation datasets) @nbei
    • [x] tools/data/matting/comp1k/README.md @ckkelvinchan
    • [x] tools/data/inpainting/*/README.md (3 inpainting datasets) @nbei
    • [x] demo/restorer_basic_tutorial.ipynb @ckkelvinchan

    Guidance

    All the docs that need to be translated are now in docs_zh_CN/. Once finished, simply create a PR and see how your contribution makes a change. Additional information about the contribution process can be found at https://zhuanlan.zhihu.com/p/387116301 (written in Chinese) and https://github.com/open-mmlab/mmediting/issues/432.

    help wanted good first issue 
    opened by ckkelvinchan 16
  • gca model missing keys


    Hello, thank you very much for the framework you provide. When I trained the GCA matting network, "missing keys in source state_dict" appeared. The backbone weight is model_best_resnet34_en_nomixup.pth. Could you please take a look at what might be the cause? Thank you very much!

    kind/bug status/need more info priority/P0 
    opened by Zenobia7 15
  • Reading data from S3


    Hi! I was wondering if it is possible to read data from AWS S3 to train a model such as BasicVSR.

    I have been reading the docs, and everything indicates that it could be done by modifying my config file and setting the backend option to "ceph" instead of the default "disk".

    Then I set the paths to lq_folder = "S3://my-bucket/my-dataset/lq_folder" and gt_folder = "S3://my-bucket/my-dataset/gt_folder".

    But I can't manage to make it work; probably I am missing something. Could you provide me with some extra guidance? :)
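
    For reference, this is the kind of config change I tried (a sketch; the io_backend option belongs to the loading pipeline, and whether the "ceph" backend accepts these S3-style paths is exactly what I am unsure about):

    lq_folder = 'S3://my-bucket/my-dataset/lq_folder'
    gt_folder = 'S3://my-bucket/my-dataset/gt_folder'

    train_pipeline = [
        dict(type='LoadImageFromFileList', io_backend='ceph', key='lq',
             flag='unchanged'),
        dict(type='LoadImageFromFileList', io_backend='ceph', key='gt',
             flag='unchanged'),
        # ... remaining transforms unchanged
    ]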

    kind/enhancement good first issue 
    opened by jasoromir 14
  • Doesn't generalize to other data


    I tried applying video super-resolution (EDVR) to other data, but I'm getting very weak results: the output barely differs from the input in quality. Examples below (left is the output, right is the zoomed-in input).

    I tried both the EDVR_REDS_SR_L and the EDVR_Vimeo90K_SR_L models with varying input sizes, and got similar results. Is this to be expected? Given that the REDS4 dataset also consists mostly of street scenes, I would expect it to perform at least similarly.


    The code I'm using is below (adapted from test_Vid4_REDS4_with_GT.py and moved to the root folder of the repo). I tested it on the REDS4 dataset with no issues.

    '''
    Test Vid4 (SR) and REDS4 (SR-clean, SR-blur, deblur-clean, deblur-compression) datasets
    '''
    
    import sys
    sys.path.insert(0, 'codes')
    
    import os
    import os.path as osp
    import glob
    import logging
    import numpy as np
    import cv2
    import torch
    
    import utils.util as util
    import data.util as data_util
    import models.archs.EDVR_arch as EDVR_arch
    
    #################
    # configurations
    #################
    device = torch.device('cuda')
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    data_mode = 'sharp_bicubic'  # Vid4 | sharp_bicubic | blur_bicubic | blur | blur_comp
    # Vid4: SR
    # REDS4: sharp_bicubic (SR-clean), blur_bicubic (SR-blur);
    #        blur (deblur-clean), blur_comp (deblur-compression).
    stage = 1  # 1 or 2, use two stage strategy for REDS dataset.
    flip_test = False
    ############################################################################
    #### model
    model_path = 'experiments/pretrained_models/EDVR_REDS_SR_L.pth'
    
    N_in = 5  # use N_in images to restore one HR image
    
    predeblur, HR_in = False, False
    back_RBs = 40
    model = EDVR_arch.EDVR(128, N_in, 8, 5, back_RBs, predeblur=predeblur, HR_in=HR_in)
    
    test_dataset_folder = 'datasets/streetscenes'
    
    #### evaluation
    crop_border = 0
    border_frame = N_in // 2  # border frames when evaluate
    # temporal padding mode
    if data_mode == 'Vid4' or data_mode == 'sharp_bicubic':
        padding = 'new_info'
    else:
        padding = 'replicate'
    save_imgs = True
    
    save_folder = 'results/streetscenes'
    util.mkdirs(save_folder)
    util.setup_logger('base', save_folder, 'test', level=logging.INFO, screen=True, tofile=True)
    logger = logging.getLogger('base')
    
    #### log info
    logger.info('Data: {} - {}'.format(data_mode, test_dataset_folder))
    logger.info('Padding mode: {}'.format(padding))
    logger.info('Model path: {}'.format(model_path))
    logger.info('Save images: {}'.format(save_imgs))
    logger.info('Flip test: {}'.format(flip_test))
    
    #### set up the models
    model.load_state_dict(torch.load(model_path), strict=True)
    model.eval()
    model = model.to(device)
    
    
    img_path_l = sorted(glob.glob(osp.join(test_dataset_folder, '*')))
    max_idx = len(img_path_l)
    if save_imgs:
        util.mkdirs(save_folder)
    
    #### read LQ and GT images
    imgs_LQ = data_util.read_img_seq(test_dataset_folder)
    
    # process each image
    for img_idx, img_path in enumerate(img_path_l):
        print(img_idx, img_path)
        img_name = osp.splitext(osp.basename(img_path))[0]
        select_idx = data_util.index_generation(img_idx, max_idx, N_in, padding=padding)
        print('select_idx:', select_idx)
        imgs_in = imgs_LQ.index_select(0, torch.LongTensor(select_idx)).unsqueeze(0).to(device)
        
        output = util.single_forward(model, imgs_in)
        output = util.tensor2img(output.squeeze(0))
    
        if save_imgs:
            cv2.imwrite(osp.join(save_folder, '{}.png'.format(img_name)), output)
    
    
    
    opened by jorenvs 13
  • Can't get the reported result in paper


    Sorry, I've got a problem. I trained RCAN, RDN and EDSR according to your settings, and the data preprocessing was also done with your scripts, but I can't reproduce the results reported in those papers: empirically there is always a 0.09 dB difference. I wonder if you see this difference too. If so, could it be a problem with the data preprocessing? Thank you.

    opened by greatlog 13
  • [Bug] RealBasicVSR training Error : torch.distributed.elastic.multiprocessing.api:failed


    Prerequisite

    Task

    I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

    Branch

    master branch https://github.com/open-mmlab/mmediting

    Environment

    sys.platform: linux
    Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
    CUDA available: True
    GPU 0: NVIDIA GeForce RTX 2080 Ti
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 11.2, V11.2.152
    GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    PyTorch: 1.10.2
    PyTorch compiling details: PyTorch built with:

    • GCC 7.3
    • C++ Version: 201402
    • Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • LAPACK is enabled (usually provided by MKL)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • CUDA Runtime 11.3
    • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
    • CuDNN 8.2
    • Magma 2.5.2
    • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.11.3
    OpenCV: 4.5.4
    MMCV: 1.5.0
    MMCV Compiler: GCC 7.3
    MMCV CUDA Compiler: 11.3
    MMEditing: 0.16.0+7b3a8bd

    Reproduces the problem - code sample

    I just ran the training again.

    Reproduces the problem - command or script

    ./tools/dist_train.sh ./configs/restorers/real_basicvsr/realbasicvsr_wogan_c64b20_2x30x8_lr1e-4_300k_reds.py 1
    

    Reproduces the problem - error message

      File "./tools/train.py", line 169, in <module>
        main()
      File "./tools/train.py", line 165, in main
        meta=meta)
      File "/home/gihwan/mmedit/mmedit/apis/train.py", line 104, in train_model
        meta=meta)
      File "/home/gihwan/mmedit/mmedit/apis/train.py", line 241, in _dist_train
        runner.run(data_loaders, cfg.workflow, cfg.total_iters)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 134, in run
        iter_runner(iter_loaders[i], **kwargs)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 59, in train
        data_batch = next(data_loader)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 32, in __next__
        data = next(self.iter_loader)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
        data = self._next_data()
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
        return self._process_data(data)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
        data.reraise()
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
        raise exception
    av.codec.codec.UnknownCodecError: Caught UnknownCodecError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/gihwan/mmedit/mmedit/datasets/dataset_wrappers.py", line 31, in __getitem__
        return self.dataset[idx % self._ori_len]
      File "/home/gihwan/mmedit/mmedit/datasets/base_sr_dataset.py", line 52, in __getitem__
        return self.pipeline(results)
      File "/home/gihwan/mmedit/mmedit/datasets/pipelines/compose.py", line 42, in __call__
        data = t(data)
      File "/home/gihwan/mmedit/mmedit/datasets/pipelines/random_degradations.py", line 547, in __call__
        results = degradation(results)
      File "/home/gihwan/mmedit/mmedit/datasets/pipelines/random_degradations.py", line 465, in __call__
        results[key] = self._apply_random_compression(results[key])
      File "/home/gihwan/mmedit/mmedit/datasets/pipelines/random_degradations.py", line 434, in _apply_random_compression
        stream = container.add_stream(codec, rate=1)
      File "av/container/output.pyx", line 64, in av.container.output.OutputContainer.add_stream
      File "av/codec/codec.pyx", line 184, in av.codec.codec.Codec.__cinit__
      File "av/codec/codec.pyx", line 193, in av.codec.codec.Codec._init
    av.codec.codec.UnknownCodecError: libx264
    
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 10741) of binary: /home/gihwan/anaconda3/envs/openmmlab2/bin/python
    Traceback (most recent call last):
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
        )(*cmd_args)
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/gihwan/anaconda3/envs/openmmlab2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
    

    Additional information

    I'm trying to train RealBasicVSR to check whether it trains in my environment.
    I have a similar issue to a previously reported one, but that issue isn't resolved yet.
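
    A minimal reproduction outside the training loop (assuming the cause is that the FFmpeg bundled with PyAV lacks libx264) would be:

    import io

    import av

    # mirrors the failing call in random_degradations.py: adding an H.264
    # stream raises UnknownCodecError when FFmpeg was built without libx264
    buf = io.BytesIO()
    container = av.open(buf, mode='w', format='mp4')
    stream = container.add_stream('libx264', rate=1)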

    kind/bug 
    opened by gihwan-kim 12
  • Questions About Distributed Training


    Hi, guys. I met some issues when trying to use multiple GPUs in distributed mode: training is not faster than on a single GPU, and more GPUs actually take longer. I ran an experiment to verify this. Taking configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py as an example, the estimated training time is 2h, 6h and 8h for 1, 2 and 8 GPUs respectively. Is this a normal phenomenon (I guess not)? How do I use distributed training correctly to speed up my experiments? The commands I used are listed below.

    1gpu: python ./tools/train.py configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py

    2gpus: bash ./tools/dist_train.sh configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py 2

    8gpus: bash ./tools/dist_train.sh configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py 8

    So, how can I solve this problem?

    opened by Endeavour10020 12
  • [Docs]: Add metrics and dataset_prepare in user_guides


    opened by ruoningYu 1
  • [release] update changelog


    opened by liuwenran 2
  • bump version to v1.0.0rc5


    opened by liuwenran 2
  • [Feature] Support DreamFusion



    Motivation

    Support DreamFusion, referring to https://github.com/ashawkey/stable-dreamfusion.

    opened by LeoXing1996 0
  • [Feature] Support IFRNet



    Motivation

    Support IFRNet model

    opened by VongolaWu 1
Releases(v1.0.0rc4)
  • v1.0.0rc4(Dec 6, 2022)


    Highlights

    We are excited to announce the release of MMEditing 1.0.0rc4. This release supports 45+ models, 176+ configs and 175+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:

    • Support High-level APIs.
    • Support diffusion models.
    • Support Text2Image Task.
    • Support 3D-Aware Generation.

    New Features & Improvements

    • Refactor high-level APIs. (#1410)
    • Support disco-diffusion text-2-image. (#1234, #1504)
    • Support EG3D. (#1482, #1493, #1494, #1499)
    • Support NAFNet model. (#1369)

    Bug Fixes

    • Fix srgan train config. (#1441)
    • Fix cain config. (#1404)
    • Fix rdn and srcnn train configs. (#1392)
    • Revise config and pretrain model loading in esrgan. (#1407)

    Contributors

    A total of 14 developers contributed to this release. Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @gaoyang07, @ChangjianZhao, @zxczrx123, @jackghosts, @liuwenran, @CCODING04, @RoseZhao929, @shaocongliu, @liangzelong.

    New Contributors

    • @gaoyang07 made their first contribution in https://github.com/open-mmlab/mmediting/pull/1372
    • @ChangjianZhao made their first contribution in https://github.com/open-mmlab/mmediting/pull/1461
    • @zxczrx123 made their first contribution in https://github.com/open-mmlab/mmediting/pull/1462
    • @jackghosts made their first contribution in https://github.com/open-mmlab/mmediting/pull/1463
    • @liuwenran made their first contribution in https://github.com/open-mmlab/mmediting/pull/1410
    • @CCODING04 made their first contribution in https://github.com/open-mmlab/mmediting/pull/783
    • @RoseZhao929 made their first contribution in https://github.com/open-mmlab/mmediting/pull/1474
    • @shaocongliu made their first contribution in https://github.com/open-mmlab/mmediting/pull/1470
    • @liangzelong made their first contribution in https://github.com/open-mmlab/mmediting/pull/1488
  • v1.0.0rc3(Nov 10, 2022)

    Highlights

    We are excited to announce the release of MMEditing 1.0.0rc3. This release supports 43+ models, 170+ configs and 169+ checkpoints in MMGeneration and MMEditing. We highlight the following new features:

    • Convert mmdet and clip to optional requirements.

    New Features & Improvements

    • Support try_import for mmdet. (#1408)
    • Support try_import for clip. (#1420)
    • Complete requirements (#1419)
    • Update .gitignore. (#1416)
    • Set real_feat to cpu in inception_utils. (#1415)
    • Modify README and configs of StyleGAN2 and PEGAN (#1418)
    • Improve the rendering of Docs-API (#1373)

    Bug Fixes

    • Revise config and pretrain model loading in ESRGAN (#1407)
    • Revise config of LSGAN (#1409)
    • Revise config of CAIN (#1404)

    Contributors

    A total of 5 developers contributed to this release. @Z-Fran, @zengyh1900, @plyfager, @LeoXing1996, @ruoningYu.

  • v1.0.0rc2(Nov 3, 2022)

    Highlights

    We are excited to announce the release of MMEditing 1.0.0rc2. This release supports 43+ models, 170+ configs and 169+ checkpoints in MMGeneration and MMEditing.

    We want to sincerely thank our community for continuously improving MMEditing. 🥰🥰🥰

    New Features & Improvements

    • Support qualitative comparison tools. (#1303)
    • Support instance aware colorization. (#1370)
    • Support multi-metrics with different sample-model. (#1171)
    • Improve the implementation
      • refactoring evaluation metrics. (#1161)
      • Save gt images in PGGAN's forward. (#1328)
      • Improve type and change default number of preprocess_div2k_dataset.py. (#1380)
      • Support pixel value clip in visualizer. (#1365)
      • Support SinGAN Dataset and SinGAN demo. (#1363)
      • Avoid cast int and float in GenDataPreprocessor. (#1385)
    • Improve the documentation
      • Update a menu switcher. (#1162)
      • Fix TTSR's README. (#1325)
      • Revise docs (change PackGenInputs and GenDataSample). (#1382)

    Bug Fixes

    • Fix PPL bug. (#1172)
    • Fix RDN number of channels. (#1332)
    • Fix types of exceptions in demos. (#1372)
    • Fix realesrgan ema. (#1341)
    • Improve the assertion to ensure GenerateFacialHeatmap is np.float32. (#1310)
    • Fix sampling behavior of unpaired_dataset.py and urls in cyclegan's README. (#1308)
    • Fix vsr models in pytorch2onnx. (#1300)
    • Fix incorrect settings in configs. (#1167,#1200,#1236,#1293,#1302,#1304,#1319,#1331,#1336,#1349,#1352,#1353,#1358,#1364,#1367,#1384,#1386,#1391,#1392,#1393)

    New Contributors

    • @gaoyang07 made their first contribution in https://github.com/open-mmlab/mmediting/pull/1372

    Contributors

    A total of 7 developers contributed to this release. Thanks @LeoXing1996, @Z-Fran, @zengyh1900, @plyfager, @ryanxingql, @ruoningYu, @gaoyang07.

    Full Changelog: https://github.com/open-mmlab/mmediting/compare/v1.0.0rc1...v1.0.0rc2

  • v0.16.0(Nov 1, 2022)

    Deprecations

    VisualizationHook is deprecated. Users should use MMEditVisualizationHook instead. (#1375)

    Old version:

    visual_config = dict(  # config to register visualization hook
      type='VisualizationHook',
      output_dir='visual',
      interval=1000,
      res_name_list=[
          'gt_img', 'masked_img', 'fake_res', 'fake_img', 'fake_gt_local'
      ],
    )

    Current version:

    visual_config = dict(  # config to register visualization hook
      type='MMEditVisualizationHook',
      output_dir='visual',
      interval=1000,
      res_name_list=[
          'gt_img', 'masked_img', 'fake_res', 'fake_img', 'fake_gt_local'
      ],
    )
    

    New Features & Improvements

    • Improve arguments type in preprocess_div2k_dataset.py. (#1381)
    • Update docstring of RDN. (#1326)
    • Update the introduction in readme. (#1387)

    Bug Fixes

    • Fix FLAVR register in mmedit/models/video_interpolators when importing FLAVR. (#1186)
    • Fix data path processing in restoration_video_inference.py. (#1262)
    • Fix the number of channels in RDB. (#1292, #1311)

    Contributors

    A total of 5 developers contributed to this release. Thanks @LeoXing1996, @Z-Fran, @zengyh1900, @ryanxingql, @ruoningYu.

    Full Changelog: https://github.com/open-mmlab/mmediting/compare/v0.15.2...v0.16.0

  • v1.0.0rc1(Sep 24, 2022)


    MMEditing 1.0.0rc1 has merged MMGeneration 1.x.

    • Support 42+ algorithms, 169+ configs and 168+ checkpoints.
    • Support 26+ loss functions, 20+ metrics.
    • Support tensorboard, wandb.
    • Support unconditional GANs, conditional GANs, image2image translation and internal learning.
  • v0.15.2(Sep 9, 2022)

    Improvements

    • [Docs] Fix typos in docs. by @Yulv-git in https://github.com/open-mmlab/mmediting/pull/1079
    • [Docs] fix model_zoo and datasets docs link by @Z-Fran in https://github.com/open-mmlab/mmediting/pull/1043
    • [Docs] fix typos in readme. by @arch-user-france1 in https://github.com/open-mmlab/mmediting/pull/1078
    • [Improve] FLAVR demo by @Yshuo-Li in https://github.com/open-mmlab/mmediting/pull/954
    • [Fix] Update MMCV_MAX to 1.7 by @wangruohui in https://github.com/open-mmlab/mmediting/pull/1001
    • [Improve] Fix niqe_pris_params.npz path when installed as package by @ychfan in https://github.com/open-mmlab/mmediting/pull/995
    • [CI] update github workflow, circleci and github templates by @zengyh1900 in https://github.com/open-mmlab/mmediting/pull/1087

    New Contributors

    • @ychfan made their first contribution in https://github.com/open-mmlab/mmediting/pull/995
    • @arch-user-france1 made their first contribution in https://github.com/open-mmlab/mmediting/pull/1078
    • @Yulv-git made their first contribution in https://github.com/open-mmlab/mmediting/pull/1079
  • v1.0.0rc0(Sep 1, 2022)

  • v0.15.1(Jul 4, 2022)


    Bug Fixes

    • [Fix] Update cain_b5_g1b32_vimeo90k_triplet.py (#929)
    • [Docs] Fix link to OST dataset (#933)

    Improvements

    • [Docs] Update instruction to OST dataset (#937)
    • [CI] No actual execution in CUDA envs (#921)
    • [Docs] Add watermark to demo video (#935)
    • [Tests] Add mim ci (#928)
    • [Docs] Update README.md of FLAVR (#919)
    • [Improve] Update md-format in .pre-commit-config.yaml (#917)
    • [Improve] Add miminstall.txt in setup.py (#916)
    • [Fix] Fix clutter in dim/README.md (#913)
    • [Improve] Skip problematic opencv-python versions (#833)

    Contributors

    @wangruohui @Yshuo-Li

  • v0.15.0(Jun 1, 2022)


    Highlights

    1. Support FLAVR
    2. Support AOT-GAN
    3. Support CAIN with ReduceLROnPlateau Scheduler

    New Features

    • Add configs for AOT-GAN (#681)
    • Support Vimeo90k-triplet dataset (#810)
    • Add default config for mm-assistant (#827)
    • Support CPU demo (#848)
    • Support use_cache and backend in LoadImageFromFileList (#857)
    • Support VFIVimeo90K7FramesDataset (#858)
    • Support ColorJitter for VFI (#859)
    • Support ReduceLrUpdaterHook (#860)
    • Support after_val_epoch in IterBaseRunner (#861)
    • Support FLAVR Net (#866, #867, #897)
    • Support MAE metric (#871)
    • Use mdformat (#888)
    • Support CAIN with ReduceLROnPlateau Scheduler (#906)

    Bug Fixes

    • Change - to _ for restoration_demo.py (#834)
    • Remove recommonmark in requirements/docs.txt (#844)
    • Move EDVR to VSR category in README.md (#849)
    • Remove , in multi-line F-string in crop.py (#855)
    • Modify double lq_path to gt_path in test_pipeline (#862)
    • Fix unittest of TOF-VFI (#873)
    • Fix wrong frames in VFI demo (#891)
    • Fix logo & contrib guideline on README (#898)
    • Normalizing trimap in indexnet_dimaug_mobv2_1x16_78k_comp1k.py (#901)

    Improvements

    • Add --cfg-options in train/test scripts (#826)
    • Update MMCV_MAX to 1.6 (#829)
    • Update TOFlow in README (#835)
    • Recover beirf installation steps & merge optional requirements (#836)
    • Use {MMEditing Contributors} in citation (#838)
    • Add tutorial for customizing losses (#839)
    • Add installation guide (wiki ver) in README (#845)
    • Add a 'need help to translate' note on Chinese documentation (#850)
    • Add wechat QR code in README_zh-CN.md (#851)
    • Support non-zero frame index for SRFolderVideoDataset & Fix Typos (#853)
    • Create README.md for docker (#856)
    • Optimize IO for flow_warp (#881)
    • Move wiki/installation to docs (#883)
    • Add myst_heading_anchors (#887)
    • Use checkpoint link in inpainting demo (#892)

    Contributors

    @wangruohui @quincylin1 @nijkah @jayagami @ckkelvinchan @ryanxingql @NK-CS-ZZL @Yshuo-Li

  • v0.14.0(Apr 1, 2022)


    Highlights

    1. Support TOFlow in video frame interpolation

    New Features

    • Support AOT-GAN (#677)
    • Use --diff-seed to set different torch seed on different rank (#781)
    • Support streaming reading of frames in video interpolation demo (#790)
    • Support dist_train without slurm (#791)
    • Put LQ into CPU for restoration_video_demo (#792)
    • Support gray normalization constant in EDSR (#793)
    • Support TOFlow in video frame interpolation (#806, #811)
    • Support seed in DistributedSampler and sync seed across ranks (#815)

    Bug Fixes

    • Update link in README files (#782, #786, #819, #820)
    • Fix matting tutorial, and fix links to colab (#795)
    • Invert flip_ratio in RandomAffine pipeline (#799)
    • Update preprocess_div2k_dataset.py (#801)
    • Update SR Colab Demo Installation Method and Set5 link (#807)
    • Fix Y/GRB mistake in EDSR README (#812)
    • Replace pytorch install command to conda in README(_zh-CN).md (#816)

    Improvements

    • Update CI (#650)
    • Update requirements.txt (#725, #817)
    • Add Tutorial of dataset (#758), pipeline (#779), model (#766)
    • Update index and TOC tree (#767)
    • Make update_model_index.py compatible on windows (#768)
    • Update doc build system (#769)
    • Update keyword and classifier for setuptools (#773)
    • Renovate installation (#776, #800)
    • Update BasicVSR++ and RealBasicVSR docs (#778)
    • Update citation (#785, #787)
    • Regroup docs (#788)
    • Use full name of config as 'Name' in metafile (#798)
    • Update figure and video demo in README (#802)
    • Add clamp(0, 1) in test of video frame interpolation (#805)
    • Use hyphen for command line args in demo & tools (#808), and keep underline for required arguments in python files (#822)
    • Make dataset.pipeline a dedicated section in doc (#813)
    • Update mmcv-full>=1.3.13 to support DCN on CPU (#823)

    Contributors

    @wangruohui @ckkelvinchan @Yshuo-Li @nijkah @wdmwhh @freepoet @quincylin1

  • v0.13.0(Mar 2, 2022)


    Highlights

    1. Support CAIN
    2. Support EDVR-L
    3. Support running in Windows

    New Features

    • Add test-time ensemble for images and videos and support ensemble in BasicVSR series (#585)
    • Support AOT-GAN (work in progress) (#674, #675, #676)
    • Support CAIN (#683, #691, #709, #713)
    • Add basic interpolater (#687)
    • Add BaseVFIDataset and VFIVimeo90KDataset (#695, #697)
    • Add video interpolation demo (#688, #717)
    • Support various scales in RRDBNet (#699)
    • Support Ref-SR inference (#716)
    • Support EDVR-L on REDS (#719)
    • Support CPU training (#720)
    • Support running in Windows (#732, #738)
    • Support DCN on CPU (#735)

    Bug Fixes

    • Fix link address in docs (#703, #704)
    • Fix ARG MMCV in Dockerfile (#708)
    • Fix file permission of non-executable files (#718)
    • Fix some deprecation warning related to numpy (#728)
    • Delete __init__ in TestVFIDataset (#731)
    • Fix data type in docstring of several Datasets (#739)
    • Fix math notation in docstring (#741)
    • Fix missing folders in copyright commit hook (#754)
    • Delete duplicate test in loading (#756)

    Improvements

    • Update Pillow from 6.2.2 to 8.4 in CI (#693)
    • Add argument 'repeat' to SRREDSMultipleGTDataset (#672)
    • Deprecate the support for "python setup.py test" (#701)
    • Add setup multi-processing both in train and test (#707)
    • Add OpenMMLab website and platform links (#710)
    • Refactor README files of all methods (#712)
    • Replace string version comparison with package.version.parse (#723)
    • Add docs of Ref-SR demo and video frame interpolation demo (#724)
    • Add interpolation and refactor README.md (#726)
    • Update isort version in pre-commit hook (#727)
    • Redesign CI for Linux (#734)
    • Update install.md (#763)
    • Reorganizing OpenMMLab projects in readme (#764)
    • Add deprecation message for deploy tools (#765)

    Contributors

    @wangruohui @ckkelvinchan @Yshuo-Li @quincylin1 @Juggernaut93 @anse3832 @nijkah

  • v0.12.0(Jan 4, 2022)

    Highlights

    1. Support RealBasicVSR
    2. Support Real-ESRGAN checkpoint

    New Features

    • Support video input and output in restoration demo (#622)
    • Support RealBasicVSR (#632, #633, #647, #680)
    • Support Real-ESRGAN checkpoint (#635)
    • Support conversion to y-channel when loading images (#643)
    • Support random video compression during training (#646)
    • Support crop sequence (#648)
    • Support pixel_unshuffle (#684)

    Bug Fixes

    • Change 'target_size' for RandomResize from list to tuple (#617)
    • Fix folder creation in preprocess_df2k_ost_dataset.py (#623)
    • Change TDAN config path in README (#625)
    • Change 'radius' to 'kernel_size' for UnsharpMasking in Real-ESRNet config (#626)
    • Fix bug in MATLABLikeResize (#630)
    • Fix 'flow_warp' comment (#655)
    • Fix the error of Model Zoo and Datasets in docs (#664)
    • Fix bug in 'random_degradations' (#673)
    • Limit opencv-python version (#689)

    Improvements

    • Translate docs to Chinese (#576, #577, #578, #579, #581, #582, #584, #587, #588, #589, #590, #591, #592, #593, #594, #595, #596, #641, #647, #656, #665, #666)
    • Add UNetDiscriminatorWithSpectralNorm (#605)
    • Use PyTorch sphinx theme (#607, #608)
    • Update mmcv (#609), mmflow (#621), mmfewshot (#634) and mmhuman3d (#649) in docs
    • Convert minimum GCC version to 5.4 (#612)
    • Add tiff in SRDataset IMG_EXTENSIONS (#614)
    • Update metafile and update_model_index.py (#615)
    • Update preprocess_df2k_ost_dataset.py (#624)
    • Add Abstract to README (#628, #636)
    • Align NIQE to MATLAB results (#631)
    • Add official markdown lint hook (#639)
    • Skip CI when some specific files were changed (#640)
    • Update docs/conf.py (#644, #651)
    • Try to create a symbolic link on windows (#645)
    • Cancel previous runs that are not completed (#650)
    • Update path of configs in demo.md and getting_started.md (#658, #659)
    • Use mmcv root model registry (#660)
    • Update README.md (#654, #663)
    • Refactor the structure of documentation (#668)
    • Add script to crop REDS images into sub-images for faster IO (#669)
    • Capitalize the first letter of the task name in the metafile (#678)
    • Update FixedCrop for cropping image sequence (#682)

    Contributors

    @wangruohui @nbei @ckkelvinchan @Yshuo-Li @LeoXing1996 @RangiLyu @matrixgame2018 @huoshuai-dot @innerlee @okotaku @Adenialzz @kai422

  • v0.11.0(Nov 3, 2021)

    Highlights

    • GLEAN for blind face image restoration #530
    • Real-ESRGAN model #546

    New Features

    • Exponential Moving Average Hook #542
    • Support DF2K_OST dataset #566

    Improvements

    • Add MATLAB-like bicubic interpolation #507
    • Support random degradations during training #504
    • Support torchserve #568

    Contributors

    @ckkelvinchan @Yshuo-Li @Adenialzz @kai422 @jiaqixuac @plyfager @Ha0Tang @innerlee

  • v0.10.0(Aug 20, 2021)

    Highlights

    1. Support LIIF-RDN (CVPR'2021)
    2. Support BasicVSR++ (NTIRE'2021)

    New Features

    • Support loading annotation from file for video SR datasets (#423)
    • Support persistent worker (#426)
    • Support LIIF-RDN (#428, #440)
    • Support BasicVSR++ (#451, #467)
    • Support mim (#455)

    Bug Fixes

    • Fix bug in stat.py (#420)
    • Fix astype error in function tensor2img (#429)
    • Fix device error caused by torch.new_tensor when pytorch >= 1.7 (#465)
    • Fix _non_dist_train in .mmedit/apis/train.py (#473)
    • Fix multi-node distributed test (#478)

    Breaking Changes

    • Refactor LIIF for pytorch2onnx (#425)

    Improvements

    • Update Chinese docs (#415, #416, #418, #421, #424, #431, #442)
    • Add CI of pytorch 1.9.0 (#444)
    • Refactor README.md of configs (#452)
    • Avoid loading pretrained VGG in unittest (#466)
    • Support specifying scales in preprocessing div2k dataset (#472)
    • Support all formats in readthedocs (#479)
    • Use version_info of mmcv (#480)
    • Remove unnecessary codes in restoration_video_demo.py (#484)
    • Change priority of DistEvalIterHook to 'LOW' (#489)
    • Reset resource limit (#491)
    • Update QQ QR code in README_CN.md (#494)
    • Add myst_parser (#495)
    • Add license header (#496)
    • Fix typo of StyleGAN modules (#427)
    • Fix typo in docs/demo.md (#453, #454)
    • Fix typo in tools/data/super-resolution/reds/README.md (#469)

    We thank all the contributors of this release:

    @610265158, @AlexZou14, @Ha0Tang, @LiUzHiAn, @Yshuo-Li, @ckkelvinchan, @innerlee, @nbei, @orangeccc, @wileechou, @yivan-WYYGDSG

    Thank you! ❤️

  • v0.9.0(Jul 6, 2021)

    Highlights

    1. Support DIC and DIC-GAN (CVPR'2020)
    2. Support GLEAN Cat 8x (CVPR'2021)
    3. Support TTSR-GAN (CVPR'2020)
    4. Add colab tutorial for super-resolution

    Bug Fixes

    • Fix bug in restoration_video_inference.py (#379)
    • Fix Config of LIIF (#368)
    • Change the path to pre-trained EDVR-M (#396)
    • Fix normalization in restoration_video_inference (#406)
    • Fix [brush_stroke_mask] error in unittest (#409)

    Breaking Changes

    • Change mmcv minimum version to v1.3 (#378)

    Improvements

    • Correct Typos in code (#371)
    • Add Custom_hooks (#362)
    • Refactor unittest folder structure (#386)
    • Add documents and download link for Vid4 (#399)
    • Update model zoo for documents (#400)
    • Update metafile (#407)
  • v0.8.0(Jun 2, 2021)

    Highlights

    1. Support GLEAN (CVPR'2021)
    2. Support TTSR (CVPR'2020)
    3. Support TDAN (CVPR'2020)

    Bug Fixes

    • Fix find_unused_parameters in PyTorch 1.8 for BasicVSR (#290)
    • Fix error in publish_model.py for pt>=1.6 (#291)
    • Fix PSNR when input is uint8 (#294)

    Improvements

    • Support backend in LoadImageFromFile (#293, #303)
    • Update metric_average_mode of video SR dataset (#319)
    • Add error message in restoration_demo.py (#324)
    • Minor correction in getting_started.md (#339)
    • Update description for Vimeo90K (#349)
    • Support start_index in GenerateSegmentIndices (#338)
    • Support different filename templates in GenerateSegmentIndices (#325)
    • Support resize by scale-factor (#295, #310)
  • v0.7.0(May 6, 2021)

    TL;DR

    1. Support BasicVSR (CVPR'2021)
    2. Support IconVSR (CVPR'2021)
    3. Support RDN (CVPR'2018)
    4. Add onnx evaluation tool

    Bug Fixes

    • Fix onnx conversion of maxunpool2d (#243)
    • Fix inpainting in demo.md (#248)
    • Tiny fix of config file of EDSR (#251)
    • Fix link in README (#256)
    • Fix restoration_inference key missing bug (#270)
    • Fix the usage of channel_order in loading.py (#271)
    • Fix the command of inpainting (#278)
    • Fix preprocess_vimeo90k_dataset.py args name (#281)

    Improvements

    • Support empty_cache option in test.py (#261)
    • Update projects in README (#249, #276)
    • Support Y-channel PSNR and SSIM (#250)
    • Add zh-CN README (#262)
    • Update pytorch2onnx doc (#265)
    • Remove extra quotation in English readme (#268)
    • Change tags to comment (#269)
    • List model zoo in README (#284, #285, #286)
  • v0.6.0(Apr 8, 2021)

    Highlights

    1. Support Local Implicit Image Function (LIIF)
    2. Support exporting DIM and GCA from Pytorch to ONNX

    New Features

    • Add readthedocs config files and fix docstring (#92)
    • Add github action file (#94)
    • Support exporting DIM and GCA from Pytorch to ONNX (#105)
    • Support concatenating datasets (#106)
    • Support non_dist_train validation (#110)
    • Add matting colab tutorial (#111)
    • Support niqe metric (#114)
    • Support PoolDataLoader for parrots (#134)
    • Support collect-env (#137, #143)
    • Support pt1.6 cpu/gpu in CI (#138)
    • Support fp16 (#139, #144)
    • Support publishing to pypi (#149)
    • Add modelzoo statistics (#171, #182, #186)
    • Add doc of datasets (#194)
    • Support extended foreground option. (#195, #199, #200, #210)
    • Support nn.MaxUnpool2d (#196)
    • Add some FBA components (#203, #209, #215, #220)
    • Support random down sampling in pipeline (#222)
    • Support SR folder GT Dataset (#223)
    • Support Local Implicit Image Function (LIIF) (#224, #226, #227, #234, #239)

    Bug Fixes

    • Fix _non_dist_train in train api (#104)
    • Fix setup and CI (#109)
    • Fix redundant loop bug in Normalize (#121)
    • Fix get_hash in setup.py (#124)
    • Fix tool/preprocess_reds_dataset.py (#148)
    • Fix slurm train tutorial in getting_started.md (#162)
    • Fix pip install bug (#173)
    • Fix bug in config file (#185)
    • Fix broken links of datasets (#236)
    • Fix broken links of model zoo (#242)

    Breaking Changes

    • Refactor data loader configs (#201)

    Improvements

    • Update requirements.txt (#95, #100)
    • Update teaser (#96)
    • Update README (#93, #97, #98, #152)
    • Update model_zoo (#101)
    • Fix typos (#102, #188, #191, #197, #208)
    • Adopt adjust_gamma from skimage and reduce dependencies (#112)
    • Remove .gitlab-ci.yml (#113)
    • Update import of first party (#115)
    • Remove citation and contact (#122)
    • Update version file (#136)
    • Update download url (#141)
    • Update setup.py (#150)
    • Update the highest version of supported mmcv (#153, #154)
    • Modify Crop to handle a sequence of video frames (#164)
    • Add links to other mm projects (#179, #180)
    • Add config type (#181)
    • Refactor docs (#184)
    • Add config link (#187)
    • Update file structure (#192)
    • Update config doc (#202)
    • Update slurm_train.md script (#204)
    • Improve code style (#206, #207)
    • Use file_client in CompositeFg (#212)
    • Replace random with numpy.random (#213)
    • Refactor loader_cfg (#214)
  • v0.5.0(Oct 11, 2020)

    New Features

    • NIQE metric (#114)
    • Support FP16 training (#139)

    Improvements

    • Update version file (#136)
    • Update collect env function (#137)
    • Update download urls (#141)
    • Update docker file with pt1.6 (#144)