Documentation: https://mmgeneration.readthedocs.io/
Introduction
English | 简体中文
MMGeneration is a powerful toolkit for generative models, with a current focus on GANs. It is built on PyTorch and MMCV. The master branch works with PyTorch 1.5+.
Major Features
- High-quality Training Performance: We currently support training of unconditional GANs, internal GANs, and image translation models. Support for conditional GANs will come soon.
- Powerful Application Toolkit: A rich toolkit of GAN applications is provided, including GAN interpolation, GAN projection, and GAN manipulation. It's time to play with your GANs! (Tutorial for applications)
- Efficient Distributed Training for Generative Models: For the highly dynamic training in generative models, we adopt a new way to train dynamic models with MMDDP. (Tutorial for DDP)
- New Modular Design for Flexible Combination: A new design for complex loss modules lets users customize the links between modules, enabling flexible combinations of different components; see the sketch after this list. (Tutorial for new modular design)
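The modular design is config-driven, following the usual OpenMMLab pattern of nested Python dicts whose `type` keys are resolved through registries. Below is a minimal, hypothetical sketch of how a loss module might be wired into a model config; the module names (`StaticUnconditionalGAN`, `DCGANGenerator`, `GANLoss`, ...) and fields are illustrative assumptions, so please consult the modular-design tutorial for the exact keys used by released configs.

```python
# Hypothetical sketch of MMGeneration's config-driven modular design.
# Module names and fields are illustrative assumptions; see the
# modular-design tutorial for the keys used by released configs.
from mmcv import Config

model = dict(
    type='StaticUnconditionalGAN',  # assumed model wrapper name
    generator=dict(type='DCGANGenerator', output_scale=64, base_channels=128),
    discriminator=dict(type='DCGANDiscriminator', input_scale=64, base_channels=128),
    # Losses are plain modules referenced by name, so they can be swapped
    # or combined without touching the training loop.
    gan_loss=dict(type='GANLoss', gan_type='vanilla', loss_weight=1.0),
)

cfg = Config(dict(model=model))
print(cfg.pretty_text)  # inspect the assembled config before training
```

Because every component is addressed by a registry key, replacing or combining losses is a one-line config change rather than a code change.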
Highlight
- Positional Encoding as Spatial Inductive Bias in GANs (CVPR 2021) has been released in MMGeneration. [Config], [Project Page]
Changelog
v0.1.0 was released on 20/04/2021. Please refer to changelog.md for details and release history.
ModelZoo
These methods have been carefully studied and supported in our framework:
Unconditional GANs (click to collapse)
- ✅ DCGAN (ICLR'2016)
- ✅ WGAN-GP (NIPS'2017)
- ✅ PGGAN (ICLR'2018)
- ✅ StyleGANV1 (CVPR'2019)
- ✅ StyleGANV2 (CVPR'2020)
- ✅ Positional Encoding in GANs (CVPR'2021)
Internal Learning (click to collapse)
- ✅ SinGAN (ICCV'2019)
License
This project is released under the Apache 2.0 license. Some operations in MMGeneration are covered by licenses other than Apache 2.0. Please check LICENSES.md carefully if you are using our code for commercial purposes.
Installation
Please refer to get_started.md for installation.
Getting Started
Please see get_started.md for the basic usage of MMGeneration. docs/quick_run.md offers a full guide to getting a model running quickly; a minimal sampling sketch is shown below. For other details and tutorials, please refer to our documentation.
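As a quick illustration of basic usage, the snippet below samples images from a pretrained unconditional GAN through the high-level Python API. It assumes `mmgen.apis` exposes `init_model` and `sample_unconditional_model` helpers (names and signatures may differ between versions) and uses placeholder config/checkpoint paths; follow get_started.md and docs/quick_run.md for the authoritative commands.

```python
# Minimal sketch: sample from a pretrained unconditional GAN.
# Helper names and file paths are assumptions; see get_started.md
# and docs/quick_run.md for the exact, version-specific API.
from torchvision import utils
from mmgen.apis import init_model, sample_unconditional_model  # assumed helpers

config_file = 'configs/styleganv2/your_config.py'   # placeholder path
checkpoint_file = 'checkpoints/your_checkpoint.pth' # placeholder path

model = init_model(config_file, checkpoint_file, device='cuda:0')
fake_imgs = sample_unconditional_model(model, num_samples=16)

# Generator outputs are typically in [-1, 1]; map to [0, 1] and save a grid.
utils.save_image((fake_imgs + 1) / 2, 'samples.png', nrow=4)
```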
Contributing
We appreciate all contributions to improve MMGeneration. Please refer to CONTRIBUTING.md in MMCV for more details about the contributing guideline.
Citation
If you find this project useful in your research, please consider citing:
@misc{2021mmgeneration,
title={{MMGeneration}: OpenMMLab Generative Model Toolbox and Benchmark},
author={MMGeneration Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmgeneration}},
year={2021}
}
Projects in OpenMMLab
- MMCV: OpenMMLab foundational library for computer vision.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMOCR: A comprehensive toolbox for text detection, recognition, and understanding.
- MMGeneration: OpenMMLab's next-generation toolbox for generative models.