StyleGanJittor (Tsinghua University computer graphics course)
Overview
This project is a 64×64 Jittor (计图) reimplementation of StyleGAN, built for the Tsinghua University computer graphics course on Python 3.8 and based on the open-source StyleGAN-PyTorch project. I trained the model on the color_symbol_7k dataset for 40,000 iterations; the trained model can generate 64×64 symbolic images.
StyleGAN is a generative adversarial network for image generation proposed by NVIDIA in 2018. According to the paper, the new generator improves the state of the art in terms of traditional distribution-quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. The main improvement over previous models is the structure of the generator: an eight-layer mapping network, the AdaIN module, and the injection of noise for image randomness. These structures let the generator decouple the overall features of the image from the local features and synthesize better images; the network also shows better latent-space interpolation behavior.
(Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 4401-4410.)
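As a rough sketch of the AdaIN idea (not the literal code in model.py; the class and argument names here are illustrative), each layer learns an affine map from the latent w to a per-channel scale and bias, which re-style the normalized feature maps:

    import jittor as jt
    from jittor import nn

    # Minimal AdaIN sketch; names are illustrative, not those of model.py.
    class AdaIN(nn.Module):
        def __init__(self, in_channel, style_dim):
            super().__init__()
            # Learned affine map: one (scale, bias) pair per channel.
            self.style = nn.Linear(style_dim, in_channel * 2)

        def execute(self, x, w):
            b, c, h, width = x.shape
            # Instance normalization, written out with basic primitives.
            flat = x.reshape((b, c, h * width))
            mean = flat.mean(2, keepdims=True)
            std = ((flat - mean).sqr().mean(2, keepdims=True) + 1e-8).sqrt()
            normed = ((flat - mean) / std).reshape((b, c, h, width))
            # Re-style each normalized feature map with its own scale and bias.
            style = self.style(w).reshape((b, 2, c, 1, 1))
            return (1 + style[:, 0]) * normed + style[:, 1]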
The training results are shown in Video1trainingResult.avi; Video2GenerationResult1.avi and Video3GenerationResult2.avi were generated by the trained model.
The Checkpoint folder holds the trained StyleGAN models; because they take up a lot of storage space, the model files have been deleted. The data folder contains the color_symbol_7k dataset. The dataset is processed by the prepare_data script into an LMDB database for accelerated training, and the database is stored in the mdb folder. The sample folder contains the images generated during training, which can be used to trace the training process. The generateSample folder contains the sample images produced by calling StyleGenerator after training is completed.
dataset.py defines the MultiResolutionDataset class for reading the LMDB database, model.py contains the StyleGAN model reimplemented in Jittor, train.py is the model training script, and VideoWrite.py converts the generated images into videos.
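For illustration, a minimal version of the LMDB-reading logic could look like the sketch below. The key layout (a b'length' entry plus '{resolution}-{index:05d}' image keys) follows the convention of the open-source StyleGAN-PyTorch project and is an assumption about what prepare_data writes:

    from io import BytesIO

    import lmdb
    from PIL import Image

    # Sketch of an LMDB-backed dataset in the spirit of MultiResolutionDataset.
    class MultiResolutionDataset:
        def __init__(self, path, resolution=64):
            # readonly + lock=False lets several training workers share the DB.
            self.env = lmdb.open(path, max_readers=32, readonly=True,
                                 lock=False, readahead=False, meminit=False)
            self.resolution = resolution
            with self.env.begin(write=False) as txn:
                self.length = int(txn.get(b'length').decode('utf-8'))

        def __len__(self):
            return self.length

        def __getitem__(self, index):
            key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8')
            with self.env.begin(write=False) as txn:
                img_bytes = txn.get(key)
            # Images are stored as encoded bytes; decode on access.
            return Image.open(BytesIO(img_bytes))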
Environment and execution instructions
Project environment dependencies include jittor, lmdb, PIL, argparse, tqdm, and some other common Python libraries.
First, unzip the dataset in the data folder. The model can then be trained from a terminal in the project environment with:

    python train.py --mixing "./mdb/color_symbol_7k_mdb"

Images can be generated from the trained model, and their differences compared, with:

    python generate.py --size 64 --n_row 3 --n_col 5 --path './checkpoint/040000.model'
You can adjust the training and generation parameters by referring to the args sections of train.py and generate.py; the sketch after this paragraph shows the general shape of such a section.
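For reference, such an args section is a plain argparse block along the following lines; the flags and defaults shown here are illustrative guesses (apart from --mixing, used above), so check the scripts themselves:

    import argparse

    # Hypothetical excerpt of the args section of train.py.
    parser = argparse.ArgumentParser(description='StyleGAN trainer')
    parser.add_argument('path', type=str, help='path to the LMDB dataset')
    parser.add_argument('--mixing', action='store_true',
                        help='use style mixing regularization')
    parser.add_argument('--max_size', type=int, default=64,
                        help='final image resolution')
    parser.add_argument('--lr', type=float, default=0.001,
                        help='learning rate')
    args = parser.parse_args()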
Details
The first step is dataset preparation, using an LMDB database to accelerate training. For model construction, I referred to the model structure shown in the first figure below from the original paper, and to the reproduction approach of the open-source PyTorch version. Following the module-dependency structure shown in the second figure below, the original model is split into sub-modules such as EqualConv2d, EqualLinear, StyleConvBlock, and ConvBlock, which are finally assembled into the complete StyleGenerator and Discriminator.
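To give a flavor of these sub-modules, the Equal* layers implement the equalized-learning-rate trick from the paper: weights are initialized from N(0, 1) and rescaled by the He-initialization constant on every forward pass. A minimal sketch, assuming Jittor's Module/execute convention (the actual model.py may differ in details):

    import math

    import jittor as jt
    from jittor import nn

    # Sketch of an equalized-learning-rate linear layer (EqualLinear).
    class EqualLinear(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            # Plain N(0, 1) init; Var attributes act as trainable parameters.
            self.weight = jt.randn((in_dim, out_dim))
            self.bias = jt.zeros((out_dim,))
            # He constant, applied at runtime rather than baked into the init.
            self.scale = math.sqrt(2.0 / in_dim)

        def execute(self, x):
            return jt.matmul(x, self.weight * self.scale) + self.bias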
For model building and training, I followed the tutorial provided by the teaching assistant on the official website to convert torch methods into their jittor counterparts, and explored other ways to implement the rest myself. Jittor's documentation is relatively incomplete, and some methods differ from PyTorch; in such cases I fall back on lower-level implementations.
For example, the PyTorch version uses torch.sqrt(out.var(0, unbiased=False) + 1e-8) in the Discriminator to compute the standard deviation over a given dimension of a tensor, but the Jittor framework has no corresponding var() method. Since the unbiased=False variance is simply the mean of the squared deviations, I implement the same function with lower-level primitives:

    ((out - out.mean(0)).sqr().mean(0) + 1e-8).sqrt()
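In context, this expression implements the minibatch standard-deviation trick in the Discriminator. A self-contained sketch of that step, under my reading of the reference code (names and placement are assumptions):

    import jittor as jt

    # Minibatch stddev: unbiased=False variance is the mean of squared
    # deviations, so it can be rebuilt from mean/sqr/sqrt primitives.
    def minibatch_stddev(out):
        # Standard deviation over the batch dim, one value per (c, y, x).
        out_std = ((out - out.mean(0)).sqr().mean(0) + 1e-8).sqrt()
        # Collapse to one scalar and append it as an extra feature map.
        b, _, h, w = out.shape
        stddev_map = out_std.mean() * jt.ones((b, 1, h, w))
        return jt.concat([out, stddev_map], dim=1)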
Results
Limited by the hardware, training takes a long time, and I did not have enough time to fine-tune the optimizers and the various hyperparameters, so the results obtained by training on Jittor are not as good as those I obtained with the same model framework on PyTorch. Still, the progressive training process can be seen clearly in the videos: the generated symbols become sharper and the results gradually improve.
The figures below are sample results obtained by training on Jittor and on PyTorch, respectively. For details, please refer to the video files in the folder. The training results of the same model and code on PyTorch can be found in the sample_torch folder.
To be continued