The Official PyTorch Implementation of "LSGM: Score-based Generative Modeling in Latent Space" (NeurIPS 2021)

Overview


Arash Vahdat* · Karsten Kreis* · Jan Kautz

(*equal contribution)

Project Page


LSGM trains a score-based generative model (a.k.a. a denoising diffusion model) in the latent space of a variational autoencoder. It currently achieves state-of-the-art generative performance on several image datasets.

Requirements

LSGM is built in Python 3.8 using PyTorch 1.8.0. Please use the following command to install the requirements:

pip install -r requirements.txt

Optionally, you can also install NVIDIA Apex. When Apex is installed, our training scripts use the Adam optimizer from this library, which is faster than PyTorch's native Adam.

Set up file paths and data

This work builds on top of our previous work NVAE. Please follow the instructions in the NVAE repository to prepare your data for training and evaluation. Small datasets such as CIFAR-10, MNIST, and OMNIGLOT do not require any data preparation, as they are downloaded automatically. The commands below use the following variables:

  • $DATA_DIR: the path to a data directory that will contain all the datasets.
  • $CHECKPOINT_DIR: a directory used for storing checkpoints.
  • $EXPR_ID: a unique ID for the experiment.
  • $IP_ADDR: the IP address of the machine that will host the process with rank 0 during training (see here).
  • $NODE_RANK: the index of each node among all the nodes that are running the job (setting $IP_ADDR and $NODE_RANK is only required for multi-node training).
  • $FID_STATS_DIR: a directory containing the FID statistics computed on each dataset (see below).
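For example, a single-node setup could be configured as follows; all paths and the experiment ID here are placeholders to adjust for your system:

```shell
# Placeholder values; adjust all paths for your system.
export DATA_DIR=/data/datasets
export CHECKPOINT_DIR=/data/checkpoints
export FID_STATS_DIR=/data/fid-stats
export EXPR_ID=lsgm-cifar10-run1
# Only required for multi-node training:
export IP_ADDR=10.0.0.1
export NODE_RANK=0
```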

Precomputing feature statistics on each dataset for FID evaluation

You can use the following command to compute FID statistics on the CIFAR-10 dataset as an example:

python scripts/precompute_fid_statistics.py --data $DATA_DIR/cifar10 --dataset cifar10 --fid_dir $FID_STATS_DIR

which will save the FID-related statistics in a directory under $FID_STATS_DIR. For other datasets, simply change --data and --dataset accordingly.

Training and evaluation

Training LSGM is often done in two stages. In the first stage, we train the VAE backbone, assuming that the prior is a standard Normal distribution. In the second stage, we replace the standard Normal prior with a score-based prior and jointly train both the VAE backbone and the score-based prior in an end-to-end fashion. Please check Appendix G in our paper for implementation details. Below, we provide the commands used for both stages. If your training is stopped for any reason, re-run the exact same command with --cont_training added to continue from the last saved checkpoint. Note that if you observe NaNs, continuing training with this flag will usually not fix the issue.
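As a sketch of the resume pattern (the command string below is a shortened, hypothetical stand-in for any of the full training commands in this section, and echo is used so nothing is actually launched):

```shell
# Abbreviated stand-in for an original training command; single quotes
# keep the $VARS literal for display purposes.
TRAIN_CMD='python train_vae.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/vae1 --dataset cifar10'

# To resume an interrupted run, re-issue the exact same command with
# --cont_training appended (echoed here instead of executed):
echo "$TRAIN_CMD --cont_training"
```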

CIFAR-10

We train 3 different VAEs with the following commands (see Table 7 in the paper).

  • 20 group NVAE with full KL annealing for the "balanced" model (using 8 16GB V100 GPUs):
python train_vae.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/vae1 --dataset cifar10 \
    --num_channels_enc 128 --num_channels_dec 128 --num_postprocess_cells 2 --num_preprocess_cells 2 \
    --num_latent_scales 1 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_preprocess_blocks 1 \
    --num_postprocess_blocks 1 --num_latent_per_group 9 --num_groups_per_scale 20 --epochs 600 --batch_size 32 \
    --weight_decay_norm 1e-2 --num_nf 0 --kl_anneal_portion 0.5 --kl_max_coeff 1.0 --channel_mult 1 2 --seed 1 \
    --arch_instance res_bnswish --num_process_per_node 8 --use_se
  • 20 group NVAE with partial KL annealing for the model with best FID (using 8 16GB V100 GPUs):
python train_vae.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/vae2 --dataset cifar10 \
    --num_channels_enc 128 --num_channels_dec 128 --num_postprocess_cells 2 --num_preprocess_cells 2 \
    --num_latent_scales 1 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_preprocess_blocks 1 \
    --num_postprocess_blocks 1 --num_latent_per_group 9 --num_groups_per_scale 20 --epochs 400 --batch_size 32 \
    --weight_decay_norm 1e-2 --num_nf 0 --kl_anneal_portion 1.0 --kl_max_coeff 0.7 --channel_mult 1 2 --seed 1 \
    --arch_instance res_bnswish --num_process_per_node 8 --use_se
  • 4 group NVAE with partial KL annealing for the model with best NLL (using 4 16GB V100 GPUs):
python train_vae.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/vae3 --dataset cifar10 \
    --num_channels_enc 256 --num_channels_dec 256 --num_postprocess_cells 3 --num_preprocess_cells 3 \
    --num_latent_scales 1 --num_cell_per_cond_enc 3 --num_cell_per_cond_dec 3 --num_preprocess_blocks 1 \
    --num_postprocess_blocks 1 --num_latent_per_group 45 --num_groups_per_scale 4 --epochs 400 --batch_size 64 \
    --weight_decay_norm 1e-2 --num_nf 2 --kl_anneal_portion 1.0 --kl_max_coeff 0.7 --channel_mult 1 2 --seed 1 \
    --arch_instance res_bnswish --num_process_per_node 4 --use_se

With the resulting VAE checkpoints, we can train the three different LSGMs. The models are trained with the following commands on 2 nodes with 8 32GB V100 GPUs each.

  • LSGM (balanced):
mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR \
    --save $EXPR_ID/lsgm1 --vae_checkpoint $EXPR_ID/vae1/checkpoint.pt --train_vae --custom_conv_dae --apply_sqrt2_res \
    --fir --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 \
    --learning_rate_min_dae 1e-4 --epochs 1875 --dropout 0.2 --batch_size 16 --num_channels_dae 512 --num_scales_dae 3 \
    --num_cell_per_scale_dae 8 --sde_type vpsde --beta_start 0.1 --beta_end 20.0 --sigma2_0 0.0 \
    --weight_decay_norm_dae 1e-2 --weight_decay_norm_vae 1e-2 --time_eps 0.01 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 \
    --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p drop_all_iw --iw_sample_q reweight_p_samples \
    --arch_instance_dae res_ho_attn --num_process_per_node 8 --use_se --node_rank $NODE_RANK --num_proc_node 2 \
    --master_address $IP_ADDR '
  • LSGM (best FID):
mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR \
    --save $EXPR_ID/lsgm2 --vae_checkpoint $EXPR_ID/vae2/checkpoint.pt --train_vae --custom_conv_dae --apply_sqrt2_res \
    --fir --cont_kl_anneal --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 \
    --learning_rate_min_dae 1e-4 --epochs 1875 --dropout 0.2 --batch_size 16 --num_channels_dae 512 --num_scales_dae 3 \
    --num_cell_per_scale_dae 8 --sde_type vpsde --beta_start 0.1 --beta_end 20.0 --sigma2_0 0.0 \
    --weight_decay_norm_dae 1e-2 --weight_decay_norm_vae 1e-2 --time_eps 0.01 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 \
    --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p drop_all_iw --iw_sample_q reweight_p_samples \
    --arch_instance_dae res_ho_attn --num_process_per_node 8 --use_se --node_rank $NODE_RANK --num_proc_node 2 \
    --master_address $IP_ADDR '
  • LSGM (best NLL):
mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR \
    --save $EXPR_ID/lsgm3 --vae_checkpoint $EXPR_ID/vae3/checkpoint.pt --train_vae --apply_sqrt2_res --fir \
    --cont_kl_anneal --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 \
    --learning_rate_min_dae 1e-4 --epochs 1875 --dropout 0.2 --batch_size 16 --num_channels_dae 512 --num_scales_dae 3 \
    --num_cell_per_scale_dae 8 --sde_type geometric_sde --sigma2_min 3e-5 --sigma2_max 0.999 --sigma2_0 3e-5 \
    --weight_decay_norm_dae 1e-2 --weight_decay_norm_vae 1e-2 --time_eps 0.0 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 \
    --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p ll_uniform --iw_sample_q reweight_p_samples \
    --arch_instance_dae res_ho_attn --num_process_per_node 8 --use_se --node_rank $NODE_RANK --num_proc_node 2 \
    --master_address $IP_ADDR '

The following command can be used to evaluate the negative variational bound on the data log-likelihood as well as the FID score for any of the LSGMs trained on CIFAR-10 (on 2 nodes with 8 32GB V100 GPUs each):

mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python evaluate_vada.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/eval --eval_mode evaluate \
    --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm/checkpoint.pt --fid_dir $FID_STATS_DIR --num_process_per_node 8 \
    --nll_ode_eval --fid_ode_eval --ode_eps 1e-6 --ode_solver_tol 1e-5 --batch_size 32 --node_rank $NODE_RANK \
    --num_proc_node 2 --master_address $IP_ADDR '
MNIST

We train the NVAE component using the following command on 2 16GB V100 GPUs:

python train_vae.py --data $DATA_DIR/mnist --root $CHECKPOINT_DIR --save $EXPR_ID/vae --dataset mnist \
      --batch_size 100 --epochs 200 --num_latent_scales 1 --num_groups_per_scale 2 --num_postprocess_cells 3 \
      --num_preprocess_cells 3 --num_cell_per_cond_enc 1 --num_cell_per_cond_dec 1 --num_latent_per_group 20 \
      --num_preprocess_blocks 2 --num_postprocess_blocks 2 --weight_decay_norm 1e-2 --num_channels_enc 64 \
      --num_channels_dec 64 --decoder_dist bin --kl_anneal_portion 1.0 --kl_max_coeff 0.7 --channel_mult 1 2 2 \
      --num_nf 0 --arch_instance res_mbconv --num_process_per_node 2 --use_se

We train LSGM using the following command on 4 16GB V100 GPUs:

python train_vada.py --data $DATA_DIR/mnist --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm --dataset mnist --epochs 800 \
        --dropout 0.2 --batch_size 32 --num_scales_dae 2 --weight_decay_norm_vae 1e-2 \
        --weight_decay_norm_dae 0. --num_channels_dae 256 --train_vae  --num_cell_per_scale_dae 8 \
        --learning_rate_dae 3e-4 --learning_rate_min_dae 3e-4 --train_ode_solver_tol 1e-5 --cont_kl_anneal  \
        --sde_type vpsde --iw_sample_p ll_iw --num_process_per_node 4 --use_se \
        --vae_checkpoint $CHECKPOINT_DIR/$EXPR_ID/vae/checkpoint.pt  --dae_arch ncsnpp --embedding_scale 1000 \
        --mixing_logit_init -6 --warmup_epochs 20 --drop_inactive_var --skip_final_eval --fid_dir $FID_STATS_DIR

To evaluate the negative variational bound on the data log-likelihood on 4 16GB V100 GPUs, the following command can be used:

python evaluate_vada.py --data $DATA_DIR/mnist --root $CHECKPOINT_DIR --save $EXPR_ID/eval --eval_mode evaluate \
        --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm/checkpoint.pt --num_process_per_node 4 --nll_ode_eval \
        --ode_eps 1e-5 --ode_solver_tol 1e-5 --batch_size 128
OMNIGLOT

We train the NVAE component using the following command on 2 16GB V100 GPUs:

python train_vae.py --data $DATA_DIR/omniglot --root $CHECKPOINT_DIR --save $EXPR_ID/vae --dataset omniglot \
      --batch_size 64 --epochs 200 --num_latent_scales 1 --num_groups_per_scale 3 --num_postprocess_cells 2 \
      --num_preprocess_cells 2 --num_cell_per_cond_enc 3 --num_cell_per_cond_dec 3 --num_latent_per_group 20 \
      --num_preprocess_blocks 1 --num_postprocess_blocks 1 --num_channels_enc 64 --num_channels_dec 64 \
      --weight_decay_norm 1e-2 --decoder_dist bin --kl_anneal_portion 1.0 --kl_max_coeff 1.0 --channel_mult 1 2 \
      --num_nf 0 --arch_instance res_mbconv --num_process_per_node 2 --use_se

We train LSGM using the following command on 4 16GB V100 GPUs:

python train_vada.py --data $DATA_DIR/omniglot --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm --dataset omniglot --epochs 1500 \
        --dropout 0.2 --batch_size 32 --num_channels_dae 256 --num_scales_dae 3 --weight_decay_norm_vae 1e-2 \
        --weight_decay_norm_dae 1e-3 --train_vae  --num_cell_per_scale_dae 8 --learning_rate_dae 3e-4 \
        --learning_rate_min_dae 3e-4 --train_ode_solver_tol 1e-5 --cont_kl_anneal  --sde_type vpsde \
        --iw_sample_p ll_iw --num_process_per_node 4 --use_se \
        --vae_checkpoint $EXPR_ID/vae/checkpoint.pt  --dae_arch ncsnpp --embedding_scale 1000 --mixing_logit_init -6 \
        --warmup_epochs 20 --drop_inactive_var --skip_final_eval --fid_dir $FID_STATS_DIR

To evaluate the negative variational bound on the data log-likelihood on 4 16GB V100 GPUs, the following command can be used:

python evaluate_vada.py --data $DATA_DIR/omniglot --root $CHECKPOINT_DIR --save $EXPR_ID/eval --eval_mode evaluate \
        --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm/checkpoint.pt --num_process_per_node 4 --nll_ode_eval \
        --ode_eps 1e-5 --ode_solver_tol 1e-5 --batch_size 128
CelebA-HQ-256 Quantitative Model

We train the NVAE component using the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root  -np 2 -npernode 1 bash -c \
    'python train_vae.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/vae --dataset celeba_256 \
    --num_channels_enc 64 --num_channels_dec 64 --epochs 200 --num_postprocess_cells 2 --num_preprocess_cells 2 \
    --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_preprocess_blocks 1 \
    --num_postprocess_blocks 1 --weight_decay_norm 3e-2 --num_latent_scales 3 --num_groups_per_scale 8 --num_nf 2 \
    --batch_size 4 --kl_anneal_portion 1. --kl_max_coeff 1. --channel_mult 1 1 2 4 --num_x_bits 5 --decoder_dist dml \
    --progressive_input_vae input_skip --arch_instance res_mbconv --num_process_per_node 8 --use_se \
    --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

We train the LSGM using the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root  -np 2 -npernode 1 bash -c \
    'python train_vada.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm --dataset celeba_256 \
    --epochs 1000 --dropout 0.2 --num_channels_dae 256 --num_scales_dae 4 --train_vae  --weight_decay_norm_vae 1e-1 \
    --weight_decay_norm_dae 1e-2 --fir  --num_cell_per_scale_dae 8 --learning_rate_dae 1e-4 --learning_rate_min_dae 1e-4 \
    --batch_size 4 --sde_type vpsde --iw_sample_p drop_sigma2t_iw --iw_sample_q ll_iw --disjoint_training \
    --num_process_per_node 8 --use_se --vae_checkpoint $EXPR_ID/vae/checkpoint.pt  --dae_arch ncsnpp \
    --embedding_scale 1000 --mixing_logit_init -6 --warmup_epochs 20 --drop_inactive_var --skip_final_eval \
    --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

To evaluate the negative variational bound on the data log-likelihood, we use the following command, similarly on 2 nodes:

mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python evaluate_vada.py  --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/eval_nll \
     --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm/checkpoint_fid.pt --num_process_per_node 8 --eval_mode evaluate \
     --nll_ode_eval --ode_eps 1e-5 --ode_solver_tol 1e-5 --batch_size 64 \
     --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

And to evaluate the FID score, we use the following command, similarly on 2 nodes:

mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python evaluate_vada.py  --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/eval_fid \
     --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm/checkpoint_fid.pt --num_process_per_node 8 --eval_mode evaluate \
     --fid_ode_eval --ode_eps 1e-5 --ode_solver_tol 1e-2 --batch_size 64 --vae_train_mode \
     --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

Note that for the FID evaluation on this model, we observed that --ode_solver_tol 1e-5 gives a slightly worse FID score with much slower sampling speed (see Fig. 4 in the paper).

CelebA-HQ-256 Qualitative Model

We trained the qualitative model on the CelebA-HQ-256 dataset in three stages. In the first stage, we trained only the NVAE component, and in the second stage, we trained an LSGM (i.e., both the NVAE backbone and the SGM prior jointly) with the geometric VPSDE for likelihood weighting, similar to our other models. In the third stage, however, we discarded the SGM prior and re-trained a new SGM prior with the reweighted objective. In this stage, we trained only the SGM prior and kept the NVAE component fixed from the second stage.

We train the NVAE component using the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root  -np 2 -npernode 1 bash -c \
    'python train_vae.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/vae --dataset celeba_256 \
    --num_channels_enc 64 --num_channels_dec 64 --epochs 200 --num_postprocess_cells 2 --num_preprocess_cells 2 \
    --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_preprocess_blocks 1 \
    --num_postprocess_blocks 1 --weight_decay_norm 3e-2 --num_latent_scales 2 --num_groups_per_scale 10 --num_nf 2 \
    --batch_size 4 --kl_anneal_portion 1. --kl_max_coeff 1. --channel_mult 1 1 2 --num_x_bits 5 --decoder_dist dml \
    --progressive_input_vae input_skip --arch_instance res_mbconv --num_process_per_node 8 --use_se \
    --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

We train LSGM (both VAE and SGM prior jointly) in the second stage using the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root  -np 2 -npernode 1 bash -c \
    'python train_vada.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm --dataset celeba_256 \
     --epochs 1000 --dropout 0.2 --num_channels_dae 256 --num_scales_dae 5 --train_vae  --weight_decay_norm_vae 1e-1 \
     --weight_decay_norm_dae 1e-2 --fir  --num_cell_per_scale_dae 8 --learning_rate_dae 1e-4 --learning_rate_min_dae 1e-4 \
     --learning_rate_vae 8e-5 --batch_size 4 --sde_type geometric_sde --time_eps 0. --sigma2_0 3e-5 --sigma2_min 3e-5 \
     --sigma2_max 0.999 --iw_sample_p drop_sigma2t_iw --iw_sample_q ll_iw --disjoint_training  --update_q_ema  \
     --cont_kl_anneal --num_process_per_node 8 --use_se --vae_checkpoint $EXPR_ID/vae/checkpoint.pt --dae_arch ncsnpp \
     --embedding_scale 1000 --mixing_logit_init -6 --warmup_epochs 20 --drop_inactive_var --skip_final_eval \
     --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

We re-train the SGM prior in the final stage using the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root  -np 2 -npernode 1 bash -c \
    'python train_vada.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm2 --dataset celeba_256 \
    --epochs 1500 --dropout 0.2 --num_channels_dae 320 --num_scales_dae 5 --weight_decay_norm_vae 1e-1 \
    --weight_decay_norm_dae 1e-2 --fir  --num_cell_per_scale_dae 8 --learning_rate_dae 6e-5 --learning_rate_min_dae 6e-5 \
    --batch_size 6 --sde_type vpsde --iw_sample_p drop_sigma2t_iw --num_process_per_node 8 \
    --use_se --vae_checkpoint $EXPR_ID/lsgm/checkpoint.pt  --dae_arch ncsnpp --embedding_scale 1000 \
    --mixing_logit_init -6 --warmup_epochs 20 --drop_inactive_var --skip_final_eval  \
    --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

For evaluating the final model, we used the following command on 2 nodes, each with 8 32GB V100 GPUs:

mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
    'python evaluate_vada.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/eval \
    --checkpoint $CHECKPOINT_DIR/$EXPR_ID/lsgm2/checkpoint_fid.pt --num_process_per_node 8 --eval_mode evaluate \
    --fid_ode_eval --ode_eps 1e-5 --ode_solver_tol 1e-5 --batch_size 64 --vae_train_mode \
    --fid_dir $FID_STATS_DIR --node_rank $NODE_RANK --num_proc_node 2 --master_address $IP_ADDR '

Note that in the commands above, --num_process_per_node sets the number of GPUs used for training. Set this argument according to the GPUs available on your system.

Evaluating NVAE models

Additionally, if you'd like to evaluate an NVAE trained in the first stage, you can use evaluate_vae.py using a command like:

python evaluate_vae.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID/eval_vae --eval_mode evaluate \
        --checkpoint $CHECKPOINT_DIR/$EXPR_ID/vae/checkpoint.pt --num_process_per_node 8 --fid_dir $FID_STATS_DIR \
        --fid_eval --nll_eval

However, please note that the NVAE models trained in the first stage with our commands are not always fully trained to convergence: the KL warmup is often only partially performed during the first stage (and completed during the second, end-to-end LSGM training stage), and the number of epochs is set to a small value. If you would like to fully train an NVAE model in the first stage, use --kl_anneal_portion 0.3 --kl_max_coeff 1.0, and set the number of epochs such that you run about 100k to 400k training iterations.
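As a back-of-the-envelope sketch for choosing the epoch count (the numbers are illustrative, and we assume --batch_size is the per-process batch size, which you should verify against the code):

```shell
# CIFAR-10 has 50,000 training images; the CIFAR-10 first-stage command,
# for instance, runs 8 processes with batch size 32 each.
DATASET_SIZE=50000
GLOBAL_BATCH=$((8 * 32))                          # 256 images per optimizer step
TARGET_ITERS=200000                               # inside the 100k-400k range
ITERS_PER_EPOCH=$((DATASET_SIZE / GLOBAL_BATCH))  # ~195 steps per epoch
echo "epochs needed: $((TARGET_ITERS / ITERS_PER_EPOCH))"
```

Under these assumptions the estimate comes out to roughly 1000 epochs.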

Monitoring the training progress

We use TensorBoard to monitor training progress with a command like:

tensorboard --logdir $CHECKPOINT_DIR/$EXPR_ID/lsgm

Checkpoints

We provide pre-trained LSGM checkpoints for the MNIST, CIFAR-10, and CelebA-HQ-256 datasets at this location. In addition to LSGM models, each directory also contains the pre-trained NVAE checkpoint obtained at the end of the first VAE pre-training stage.

Common issues

Getting NaN in training

One of the main challenges in training very deep hierarchical VAEs is the training instabilities that we discussed in the NVAE paper. The training commands provided above train LSGM models similar to the ones reported in the paper. However, if you encounter NaNs during training, you can use these tricks to stabilize it: (i) increase the spectral regularization coefficients --weight_decay_norm_vae and --weight_decay_norm_dae; (ii) decrease the learning rate; (iii) disable training of the VAE component when training LSGM in the second stage by removing the --train_vae argument.

Note that some of our commands above include the --custom_conv_dae flag. This flag tells our spectral regularization (SR) class to look for particular convolution layer classes when applying this regularization to the SGM prior. Since these classes are not present in the NCSN++ architecture, the flag effectively disables SR on the conv layers of the SGM prior. In our experiments, we accidentally observed that providing this flag (i.e., disabling SR on the SGM prior) sometimes yields better generative performance, as SR can otherwise over-regularize the model. However, it can also cause instabilities at times. If you observe instability while using the --custom_conv_dae flag, we recommend removing it so that SR is applied to the conv layers in the SGM prior as well.

Requirements installation

Installing ninja, PyTorch, and (optionally) Apex often requires variants of these libraries that are compiled against the same CUDA version. We installed them on an Ubuntu system with Python 3.8 and CUDA 10.1 using the following commands:

export CUDA_HOME=/usr/local/cuda-10.1/
pip3 install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip3 install ninja
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

Alternatively, if you have difficulties installing these libraries, we recommend running our code in Docker containers. You can build a Docker image on top of NVIDIA images in which these libraries are properly compiled. You can find our Dockerfile at scripts/Dockerfile.

License

Please check the LICENSE file. LSGM may be used with NVIDIA Processors non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact [email protected].

Bibtex

Cite our paper using the following BibTeX entry:

@inproceedings{vahdat2021score,
  title={Score-based Generative Modeling in Latent Space},
  author={Vahdat, Arash and Kreis, Karsten and Kautz, Jan},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2021}
}
Comments
  • reconstruct image through diffusion


    To reconstruct image, there are two ways:

    1. origin_img -> [encoder] -> latent  -> [decoder] -> recon_img
    2. origin_img -> [encoder] -> latent  -> [diffuse] -> noise -> [reverse diffuse]-> latent -> [decoder] -> recon_img
    

    Apparently, the first way yields a reconstructed image that is the same as the original image.

    But in the second way, will origin_img and recon_img be the same? I'm not sure whether there is a mistake in my code or whether this approach simply cannot do it.

    Thanks!

    opened by wangherr 5
  • error: unrecognized arguments: --arch_instance_dae res_ho_attn


    When training cifar10:

        CUDA_VISIBLE_DEVICES=***** python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR \
        --save $EXPR_ID/lsgm3 --vae_checkpoint $EXPR_ID/vae3/checkpoint.pt --train_vae --apply_sqrt2_res --fir \
        --cont_kl_anneal --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 \
        --learning_rate_min_dae 1e-4 --epochs 1875 --dropout 0.2 --batch_size 16 --num_channels_dae 512 --num_scales_dae 3 \
        --num_cell_per_scale_dae 8 --sde_type geometric_sde --sigma2_min 3e-5 --sigma2_max 0.999 --sigma2_0 3e-5 \
        --weight_decay_norm_dae 1e-2 --weight_decay_norm_vae 1e-2 --time_eps 0.0 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 \
        --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p ll_uniform --iw_sample_q reweight_p_samples \
        --arch_instance_dae res_ho_attn --num_process_per_node 8 --use_se
    

    I met the error:

    error: unrecognized arguments: --arch_instance_dae res_ho_attn
    

    Using PyCharm's "Find in Files", I only found this flag in readme.md.

    But when I use the pretrained weight CIFAR-10 NLL/checkpoint_nll.p to run evaluate_vada.py, there is arch_instance_dae=res_ho_attn in the Namespace.


    To train cifar10, could I just delete --arch_instance_dae res_ho_attn from the command, or should I wait?

    opened by wangherr 2
  • RuntimeError: Address already in use


    I am trying to run train_vada.py in colab, but got error in title.

    $ python train_vada.py

    the full error message looks like this:

    No Apex Available. Using PyTorch's native Adam. Install Apex for faster training.
    Experiment dir : /tmp/nvae-diff/expr/exp
    starting in debug mode
    Traceback (most recent call last):
      File "train_vada.py", line 512, in <module>
        utils.init_processes(0, size, main, args)
      File "/content/util/utils.py", line 689, in init_processes
        dist.init_process_group(backend='nccl', init_method='env://', rank=rank, world_size=size)
      File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
        store, rank, world_size = next(rendezvous_iterator)
      File "/usr/local/lib/python3.7/dist-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
        store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
    RuntimeError: Address already in use

    I have checked some common issues and found that this error often comes from misconfigured torch.distributed settings. How can I fix it? Thanks!

    opened by pkulwj1994 1
  • reconstruction loss in training_obj_disjoint.py


    Hi, I have a question about the reconstruction term in lines 43-45. I didn't see where the reparameterization is applied to the encoder distribution to obtain a sample of z_0. Am I missing something? Great work and awesome code, btw!

    opened by Xiaohui9607 1
  • Train on afhq dataset


    In the readme, train vae on 'celeba'

    celeba
    size: 3*256*256
    len: 27000
    gpus: 16
    batch: 4
    epochs: 200
    

    if I want to train other dataset, such as afhq:

    afhq
    size: 3*256*256
    len: 5153
    gpus: 16
    batch: 4
    epochs: 200
    

    The lengths of the two datasets are not equal, so it is not suitable to simply increase the number of epochs because of the scheduler.

    could you please give some suggestions?

    opened by wangherr 0
  • CUDA out of memory


    Hi, I am using 32x32 images. With 12GB GPUs, I reduced the batch size from 16 to 2 or 1 (per GPU).

    However, I still get a CUDA out of memory error. Could you explain how to deal with it?

    I used the same hyperparameters that you suggested for CIFAR-10.

    opened by jeeyung 0
  • Questions about the latent dimension


    Can someone explain the approximate dimensions of the latent embedding for different datasets? For the following procedure:

    origin_img -> [encoder] -> latent -> [diffuse] -> noise -> [reverse diffuse]-> latent -> [decoder] -> recon_img

    ps: do you think it still makes sense to use diffusion models for a low-dimensional latent space, e.g. 10 or 20?

    opened by benjamin3344 0
  • can't run 'train_vada.py' for best FID on cifar10

    command:

    # cifar10
    # - LSGM (best FID):
    mpirun --allow-run-as-root -np 2 -npernode 1 bash -c \
        'python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR \
        --save $EXPR_ID/lsgm2 --vae_checkpoint $EXPR_ID/vae2/checkpoint.pt --train_vae --custom_conv_dae --apply_sqrt2_res \
        --fir --cont_kl_anneal --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 \
        --learning_rate_min_dae 1e-4 --epochs 1875 --dropout 0.2 --batch_size 16 --num_channels_dae 512 --num_scales_dae 3 \
        --num_cell_per_scale_dae 8 --sde_type vpsde --beta_start 0.1 --beta_end 20.0 --sigma2_0 0.0 \
        --weight_decay_norm_dae 1e-2 --weight_decay_norm_vae 1e-2 --time_eps 0.01 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 \
        --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p drop_all_iw --iw_sample_q reweight_p_samples \
        --arch_instance_dae res_ho_attn --num_process_per_node 8 --use_se --node_rank $NODE_RANK --num_proc_node 2 \
        --master_address $IP_ADDR '
    

    error message:

     File "/**/miniconda3/envs/lsgm/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/**/miniconda3/envs/lsgm/lib/python3.8/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/**/lsgm/util/utils.py", line 690, in init_processes
        fn(args)
      File "train_vada.py", line 178, in main
        train_obj, global_step = train_vada_joint(train_queue, diffusion_cont, dae, dae_optimizer, vae, vae_optimizer,
      File "/**/lsgm/training_obj_joint.py", line 135, in train_vada_joint
        grad_scalar.scale(p_loss).backward()
      File "/**/miniconda3/envs/lsgm/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/**/miniconda3/envs/lsgm/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
        Variable._execution_engine.run_backward(
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [18, 256, 3, 3]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
    

    My environment:

    python                    3.8.13 
    pytorch                   1.8.0           py3.8_cuda11.1_cudnn8.0.5_0
    torchvision               0.9.0                py38_cu111
    
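    The RuntimeError's own hint is the fastest way to localize this: enabling anomaly detection makes backward() also report the forward-pass trace of the operation whose saved tensor was modified in place. Below is a minimal, self-contained reproduction of the same class of error (illustrative only, not the LSGM training code):

    ```python
    import torch

    # Enable anomaly detection, as the RuntimeError hint suggests; with it,
    # the backward error also prints the forward trace of the failing op.
    torch.autograd.set_detect_anomaly(True)

    x = torch.randn(3, requires_grad=True)
    y = x.sigmoid()   # sigmoid saves its output tensor for the backward pass
    y.mul_(2)         # in-place edit bumps y's version counter, invalidating it

    try:
        y.sum().backward()
    except RuntimeError as e:
        # "one of the variables needed for gradient computation has been
        # modified by an inplace operation" -- the same error as above
        print(type(e).__name__)
    ```

    In a joint VAE/score-model objective like this one, the usual suspects are in-place ops (`add_`, `mul_`, or `nn.ReLU(inplace=True)`) applied to an activation that both loss terms depend on; replacing them with out-of-place versions, or cloning the shared tensor first, typically resolves it.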
    opened by wangherr 0
  • Number of GPUs not set correctly during sampling

    Hi,

    I believe that in this line, `args` should be replaced by `eval_args` so that `num_gpus` is computed from the arguments passed at evaluation time.

    https://github.com/NVlabs/LSGM/blob/5eae2f385c014f2250c3130152b6be711f6a3a5a/evaluate_vada.py#L201

    Thanks for sharing your work!
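    For context, the total GPU count in these scripts follows from the number of nodes times the processes per node (the `--num_proc_node` and `--num_process_per_node` flags shown in the training command above). A hypothetical sketch of why the distinction matters (names here are illustrative, not the repository's exact code):

    ```python
    from argparse import Namespace

    def total_num_gpus(a: Namespace) -> int:
        # One process per GPU, so: nodes * processes-per-node.
        return a.num_proc_node * a.num_process_per_node

    # Arguments restored from a checkpoint reflect the *training* setup,
    # while eval_args reflects the command line used for sampling.
    checkpoint_args = Namespace(num_proc_node=2, num_process_per_node=8)
    eval_args = Namespace(num_proc_node=1, num_process_per_node=1)

    # Using eval_args gives the GPU count actually available at sampling
    # time; using checkpoint_args would wrongly report 16.
    print(total_num_gpus(eval_args))  # 1
    ```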

    opened by jonasricker 0
Owner
NVIDIA Research Projects
Official Pytorch implementation for Deep Contextual Video Compression, NeurIPS 2021

Introduction Official Pytorch implementation for Deep Contextual Video Compression, NeurIPS 2021 Prerequisites Python 3.8 and conda, get Conda CUDA 11

null 51 Dec 3, 2022
Official implementation of NeurIPS 2021 paper "One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective"

Official implementation of NeurIPS 2021 paper "One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective"

Ng Kam Woh 71 Dec 22, 2022
Official implementation of NeurIPS 2021 paper "Contextual Similarity Aggregation with Self-attention for Visual Re-ranking"

CSA: Contextual Similarity Aggregation with Self-attention for Visual Re-ranking PyTorch training code for CSA (Contextual Similarity Aggregation). We

Hui Wu 19 Oct 21, 2022
Official implementation of "Open-set Label Noise Can Improve Robustness Against Inherent Label Noise" (NeurIPS 2021)

Open-set Label Noise Can Improve Robustness Against Inherent Label Noise NeurIPS 2021: This repository is the official implementation of ODNL. Require

Hongxin Wei 12 Dec 7, 2022
Official implementation of Generalized Data Weighting via Class-level Gradient Manipulation (NeurIPS 2021).

Generalized Data Weighting via Class-level Gradient Manipulation This repository is the official implementation of Generalized Data Weighting via Clas

null 9 Nov 3, 2021
The official implementation of NeurIPS 2021 paper: Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

The official implementation of NeurIPS 2021 paper: Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

machen 11 Nov 27, 2022
Official implementation of Neural Bellman-Ford Networks (NeurIPS 2021)

NBFNet: Neural Bellman-Ford Networks This is the official codebase of the paper Neural Bellman-Ford Networks: A General Graph Neural Network Framework

MilaGraph 136 Dec 21, 2022
Official implementation of NeurIPS'2021 paper TransformerFusion

TransformerFusion: Monocular RGB Scene Reconstruction using Transformers Project Page | Paper | Video TransformerFusion: Monocular RGB Scene Reconstru

Aljaz Bozic 118 Dec 25, 2022
Official Pytorch implementation of 'GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network' (NeurIPS 2020)

Official implementation of GOCor This is the official implementation of our paper : GOCor: Bringing Globally Optimized Correspondence Volumes into You

Prune Truong 71 Nov 18, 2022
Pytorch implementation of RED-SDS (NeurIPS 2021).

Recurrent Explicit Duration Switching Dynamical Systems (RED-SDS) This repository contains a reference implementation of RED-SDS, a non-linear state s

Abdul Fatir 10 Dec 2, 2022
The PyTorch implementation of Directed Graph Contrastive Learning (DiGCL), NeurIPS-2021

Directed Graph Contrastive Learning The PyTorch implementation of Directed Graph Contrastive Learning (DiGCL). In this paper, we present the first con

Tong Zekun 28 Jan 8, 2023
PyTorch implementation of NeurIPS 2021 paper: "CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration"

PyTorch implementation of NeurIPS 2021 paper: "CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration"

null 76 Jan 3, 2023
PyTorch implementation for our NeurIPS 2021 Spotlight paper "Long Short-Term Transformer for Online Action Detection".

Long Short-Term Transformer for Online Action Detection Introduction This is a PyTorch implementation for our NeurIPS 2021 Spotlight paper "Long Short

null 77 Dec 16, 2022
Official code for On Path Integration of Grid Cells: Group Representation and Isotropic Scaling (NeurIPS 2021)

On Path Integration of Grid Cells: Group Representation and Isotropic Scaling This repo contains the official implementation for the paper On Path Int

Ruiqi Gao 39 Nov 10, 2022
Official implementation of "GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators" (NeurIPS 2020)

GS-WGAN This repository contains the implementation for GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators (NeurIPS

null 46 Nov 9, 2022
Official implementation for Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder at NeurIPS 2020

Likelihood-Regret Official implementation of Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder at NeurIPS 2020. T

Xavier 33 Oct 12, 2022
Official Implementation of Swapping Autoencoder for Deep Image Manipulation (NeurIPS 2020)

Swapping Autoencoder for Deep Image Manipulation Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang UC

null 449 Dec 27, 2022
A tensorflow=1.13 implementation of Deconvolutional Networks on Graph Data (NeurIPS 2021)

GDN A tensorflow=1.13 implementation of Deconvolutional Networks on Graph Data (NeurIPS 2021) Abstract In this paper, we consider an inverse problem i

null 4 Sep 13, 2022
This is a pytorch implementation of the NeurIPS paper GAN Memory with No Forgetting.

GAN Memory for Lifelong learning This is a pytorch implementation of the NeurIPS paper GAN Memory with No Forgetting. Please consider citing our paper

Miaoyun Zhao 43 Dec 27, 2022