FID calculation with proper image resizing and quantization steps

Overview

clean-fid: Fixing Inconsistencies in FID


Project | Paper

The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.

We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.

[Figure: steps in the FID calculation pipeline]


On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
arXiv 2104.11222, 2021
CMU and Adobe



Buggy Resizing Operations

The definition of a resizing function is mathematical and should never depend on the library being used. Unfortunately, implementations differ across commonly used libraries, and several popular ones implement resizing incorrectly.


The inconsistencies among implementations can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementations from other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in a relatively small FID score (≤ 0.75).
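
As an illustration, here is a minimal sketch (not part of the library) that compares PIL and OpenCV bicubic downsampling on a synthetic image; the thin ring is a worst case for aliasing:

    import numpy as np
    from PIL import Image
    import cv2

    # a thin bright ring is a worst case for aliasing when downsampling
    img = np.zeros((128, 128), dtype=np.uint8)
    yy, xx = np.mgrid[:128, :128]
    img[np.abs(np.hypot(yy - 64, xx - 64) - 45) < 1] = 255

    size = (16, 16)
    pil_out = np.asarray(Image.fromarray(img).resize(size, resample=Image.BICUBIC))
    cv_out = cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)

    # PIL filters (antialiases) before subsampling; OpenCV's bicubic does not,
    # so the two outputs differ substantially on high-frequency content
    print("max abs difference:", np.abs(pil_out.astype(int) - cv_out.astype(int)).max())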

JPEG Image Compression

Image compression can have a surprisingly large effect on FID. Images that are perceptually indistinguishable from each other can still have a large FID score. The FID scores shown under the images are calculated between all FFHQ images saved in the corresponding JPEG format and in PNG format.

Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN outdoor Church dataset (right). Note that the LSUN dataset images were collected with JPEG compression (quality 75), whereas the FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
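
A minimal sketch of this kind of measurement (the folder names are placeholders): re-encode a folder of PNG images at a chosen JPEG quality, then compare the two folders with clean-fid:

    import os
    from PIL import Image
    from cleanfid import fid

    src, dst = "ffhq_png", "ffhq_jpeg_q75"  # placeholder folder names
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        if name.endswith(".png"):
            img = Image.open(os.path.join(src, name)).convert("RGB")
            img.save(os.path.join(dst, name[:-4] + ".jpg"), quality=75)

    # FID between the original PNG images and their JPEG re-encodings
    print(fid.compute_fid(src, dst))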


Quick Start

  • install requirements

    pip install -r requirements.txt
    
  • install the library

    pip install clean-fid
    
  • Compute FID between two image folders

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, fdir2)
    
  • Compute FID between one folder of images and pre-computed dataset statistics (e.g., FFHQ)

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=1024)
    
    
  • Compute FID using a generative model and pre-computed dataset statistics:

    from cleanfid import fid
    
    # function that accepts a latent and returns an image in range [0, 255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)
    
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
            dataset_res=256, num_gen=50_000)
    
    
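    For a quick self-contained test, here is a hedged sketch with a dummy generator standing in for a real GAN; it assumes the callable maps a latent batch to an (N, 3, H, W) tensor valued in [0, 255]:

    import torch
    from cleanfid import fid

    z_dim = 512

    # dummy stand-in for a trained GAN; replace with your model's forward pass
    def gen(z):
        n = z.shape[0]
        return (torch.rand(n, 3, 256, 256) * 255).to(torch.uint8)

    score = fid.compute_fid(gen=gen, dataset_name="FFHQ", dataset_res=256,
                            z_dim=z_dim, num_gen=50_000)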

Supported Precomputed Datasets

We provide precomputed statistics for the following configurations:

Task             | Dataset               | Resolution | Split                   | Mode
Image Generation | FFHQ                  | 256, 1024  | train+val               | clean, legacy_pytorch, legacy_tensorflow
Image Generation | LSUN Outdoor Churches | 256        | train                   | clean, legacy_pytorch, legacy_tensorflow
Image to Image   | horse2zebra           | 128, 256   | train, test, train+test | clean, legacy_pytorch, legacy_tensorflow

Using precomputed statistics

To compute the FID score against precomputed dataset statistics, use the corresponding options. For instance, to compute the clean-fid score on generated 256x256 FFHQ images, use:

fid_score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=256, mode="clean")

Create Custom Dataset Statistics

  • dataset_path: folder where the dataset images are stored
  • Generate and save the inception statistics
    import numpy as np
    from cleanfid import fid
    dataset_path = ...
    feats = fid.get_folder_features(dataset_path, num=50_000)
    mu = np.mean(feats, axis=0)
    sigma = np.cov(feats, rowvar=False)
    np.savez_compressed("stats.npz", mu=mu, sigma=sigma)
    
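Alternatively, clean-fid exposes fid.make_custom_stats (referenced in the issues below); a hedged sketch, assuming a name/path call signature and the "custom" split convention:

    from cleanfid import fid

    # register statistics for a custom dataset under a chosen name
    # (dataset_path and fdir1 as defined above)
    fid.make_custom_stats("my_dataset", dataset_path, mode="clean")

    # later, score a folder of generated images against those statistics
    score = fid.compute_fid(fdir1, dataset_name="my_dataset",
                            mode="clean", dataset_split="custom")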

Backwards Compatibility

We provide two flags to reproduce the legacy FID score.

  • mode="legacy_pytorch"
    This flag is equivalent to using the popular PyTorch FID implementation provided here.
    The difference between clean-fid with this option and that code is ~1.9e-06.
    See the documentation for how the two methods are compared.

  • mode="legacy_tensorflow"
    This flag is equivalent to using the official FID implementation released by the original authors. To use this flag, you need to additionally install TensorFlow. The TensorFlow CUDA version may conflict with the PyTorch code; we have tested this with tensorflow-cpu 2.2 (`pip install tensorflow-cpu==2.2`).
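
Since all three modes share the same entry point, reproducing a legacy number is a one-argument change; a minimal sketch (the folder names are placeholders):

    from cleanfid import fid

    fdir1, fdir2 = "generated", "real"  # placeholder folders
    for mode in ["clean", "legacy_pytorch", "legacy_tensorflow"]:
        print(mode, fid.compute_fid(fdir1, fdir2, mode=mode))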


CleanFID Leaderboard for common tasks


FFHQ @ 1024x1024

Model     | Legacy-FID  | Clean-FID
StyleGAN2 | 2.85 ± 0.05 | 3.08 ± 0.05
StyleGAN  | 4.44 ± 0.04 | 4.82 ± 0.04
MSG-GAN   | 6.09 ± 0.04 | 6.58 ± 0.06

Image-to-Image (horse->zebra @ 256x256)

Computed using test images.

Model    | Legacy-FID | Clean-FID
CycleGAN | 77.20      | 75.17
CUT      | 45.51      | 43.71

Building from source

python setup.py bdist_wheel
pip install dist/*

Citation

If you find this repository useful for your research, please cite the following work.

@article{parmar2021cleanfid,
  title={On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation},
  author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
  journal={arXiv preprint arXiv:2104.11222},
  year={2021}
}

Credits

PyTorch-StyleGAN2: code | License

PyTorch-FID: code | License

StyleGAN2: code | LICENSE

converted FFHQ weights: code | License

Comments
  • Resolution in image folders

    Hi,

    I am a little confused: should the resolutions of the images in the two folders be the same, or can they differ (folder1: 256x256, folder2: 1024x1024)? If they must match, can we use PIL to resize, or a torch transform resize?

    Many thanks.

    opened by duongquangvinh 11
  • Different statistic of FFHQ256 with precompute statistic.

    We followed StyleGAN-ADA to extract FFHQ256 from the tfrecord file. However, when I computed the statistics, they differ from your precomputed trainval70K statistics. I want to know whether the calculation steps have changed.

    opened by guyuchao 3
  • open-cv is missing from the dependencies

    However, it is required when using fid.make_custom_stats(). I used pip.

    Also, the README instructs one to pip install -r requirements.txt, while no such file is present.

    opened by felixdivo 3
  • Feature request for centercrop

    My dataset has nonsquare samples, so during training I do random cropping, and for validation I calculate FID against a center-cropped copy of the dataset. It would be cool to have an option for passing my own transform, for example.

    enhancement 
    opened by hadaev8 3
  • RuntimeError: MALFORMED INPUT: lanes don't match

    Hello, I tried using the package, but it's throwing this runtime error. I checked for image size mismatches and whether the images are corrupted, but found no leads as to what is causing this.

    opened by hardik-uppal 3
  • About the resize function used by different libraries

    Recently, I came across a post on LinkedIn describing how we should carefully choose the right resize function, stressing that using different libraries/frameworks leads to different results. So, I decided to test it myself. Click here to find the post that I took inspiration from.

    The following is the code snippet that I edited (using this colab notebook) to give the correct way of using the resize methods in the different frameworks.

    import numpy as np
    import torch
    import torchvision.transforms.functional as F
    from torchvision import transforms
    from torchvision.transforms.functional import InterpolationMode
    from PIL import Image
    import tensorflow as tf
    import cv2
    import matplotlib.pyplot as plt
    from skimage import draw
    
    image = np.ones((128, 128), dtype=np.float64)
    rr, cc = draw.circle_perimeter(64, 64, radius=45, shape=image.shape)
    image[rr, cc] = 0
    plt.imshow(image, cmap='gray')
    print(f"Unique values of image: {np.unique(arr)}")
    print(image.dtype)
    output_size = 17
    def inspect_img(*, img):
        plt.imshow(img, cmap='gray')
        print(f"Value of pixel with coordinates (14,9): {img[14, 9]}")
    
    def resize_PIL(*, img, output_size):
        img = Image.fromarray(img)
        img = img.resize((output_size, output_size), resample=Image.BICUBIC)
        img = np.asarray(img, dtype=np.float64)
        inspect_img(img=img)
        return img
    def resize_pytorch(*, img, output_size):
        img = F.resize(Image.fromarray(np.float64(img)), # Provide a PIL image rather than a Tensor.
                       size=output_size, 
                       interpolation=InterpolationMode.BICUBIC)
        img = np.asarray(img, dtype=np.float64) 
        inspect_img(img=img)
        return img
    def resize_tensorflow(*, img, output_size):
        img = img[tf.newaxis, ..., tf.newaxis]
        img = tf.image.resize(img, size = [output_size] * 2, method="bicubic", antialias=True)
        img = img[0, ..., 0].numpy()
        inspect_img(img=img)
        return img
    image_PIL = resize_PIL(img=image, output_size=output_size)
    image_pytorch = resize_pytorch(img=image, output_size=output_size)
    image_tensorflow = resize_tensorflow(img=image, output_size=output_size)
    assert np.array_equal(image_PIL, image_pytorch), 'Not Identical!'
    # assert np.array_equal(image_PIL, image_tensorflow), 'Not Identical!'  --> fails
    assert np.allclose(image_PIL, image_tensorflow), 'Not Close!'
    # assert np.array_equal(image_tensorflow, image_pytorch), 'Not Identical!'  --> fails
    assert np.allclose(image_tensorflow, image_pytorch), 'Not Close!'
    # tensorflow gives a slightly different values than pytorch and PIL.
    

    which gives us the following results:

    [Image: resized outputs from PIL, PyTorch, and TensorFlow]

    Therefore, TensorFlow, PyTorch, and PIL give similar results if the resize method is used properly, as in the snippet above.

    You can read my comments on LinkedIn to find out how I came to this solution.

    The only remaining library is OpenCV which I'll test in the future.

    Have a great day/night!

    opened by wiseaidev 3
  • clean-fid build_resizer Import error

    I am working on the vid2vid GAN model from the Nvidia Imaginaire library, which uses the clean-fid library. While running the model on Google Colab, I encounter this error related to clean-fid.

    from imaginaire.evaluation import compute_fid
      File "/content/imaginaire/imaginaire/evaluation/init.py", line 5, in <module>
        from .fid import compute_fid, compute_fid_data
      File "/content/imaginaire/imaginaire/evaluation/fid.py", line 10, in <module>
        from imaginaire.evaluation.common import load_or_compute_activations
      File "/content/imaginaire/imaginaire/evaluation/common.py", line 14, in <module>
        from cleanfid.resize import build_resizer
      File "/usr/local/lib/python3.7/dist-packages/cleanfid/resize.py", line 10, in <module>
        from cleanfid.utils import *
      File "/usr/local/lib/python3.7/dist-packages/cleanfid/utils.py", line 5, in <module>
        from cleanfid.resize import build_resizer
    ImportError: cannot import name 'build_resizer' from 'cleanfid.resize' (/usr/local/lib/python3.7/dist-packages/cleanfid/resize.py)

    The issue is that it cannot import build_resizer from cleanfid.resize. I didn't have this issue two days ago, but I do now. Is it due to the new version?

    Thanks in advance

    opened by moulimatsa 2
  • Enabling usage on windows

    Running the code on my Windows machine gave me the following two errors: /tmp is different on Windows, and DataLoader crashes when using num_workers > 0. I made the /tmp folder general and added a condition for the num_workers argument.

    opened by dome272 2
  • A possible bug

    Hi Gaurav, thanks for sharing this amazing tool! I spotted a block of suspicious lines that might be worth your attention. Specifically, this resizing function of the default "clean" resizer: https://github.com/GaParmar/clean-fid/blob/d2a10b1f4f44e79ea08717a10702fb1c674b1830/cleanfid/resize.py#L43-L52

    It seems to me that the output_size in L47 (s1, s2) is supposed to be a (w, h) tuple, while L48 expects it as (h, w). What do you think? This might mean that the default resizer only works for square output resolutions.

    It shouldn't be a big problem since it does not affect the default behavior.

    opened by hangg7 2
  • FID clip crashes when evaluated on `cpu` device

    Passing device="cpu" and model_name="clip_vit_b_32" crashes with the following error:

    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper___slow_conv2d_forward)
    
    opened by Sumith1896 1
  • Compute FID from generator doesn't fully respect num_gen

    Looks like the actual number of generations is rounded up to the nearest multiple of batch size:

    https://github.com/GaParmar/clean-fid/blob/fca67180659eae81d2ea207b11e12acd84735171/cleanfid/fid.py#L199
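
    A possible workaround until this is fixed (a sketch, not from the library docs): pick a batch_size that divides num_gen exactly, so no rounding occurs:

    # gen: your generator callable, as in the Quick Start examples
    from cleanfid import fid

    num_gen, batch_size = 50_000, 50  # 50 divides 50_000 exactly
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ", dataset_res=256,
                            num_gen=num_gen, batch_size=batch_size)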

    opened by justinpinkney 1
  • Typo in the compute_kid()

    For lines 391 and 396 in compute_kid(), I think the 'None' should actually be 'feat_model': https://github.com/GaParmar/clean-fid/blob/55ec1683ce3b2615bdbee12cb611f6ea0dc6457f/cleanfid/fid.py#L391

    opened by chacorp 1
  • Uppercase JPEG extension ignored by get_folder_features

    Hi! I've noticed this issue while making custom statistics from a folder of .JPEG files. It seems it has been accounted for here when processing .zip archives: https://github.com/GaParmar/clean-fid/blob/b1d8934d7ebb7e0c471f7bdb4c12872fe62f6cc4/cleanfid/fid.py#L138 but not here when processing folders: https://github.com/GaParmar/clean-fid/blob/b1d8934d7ebb7e0c471f7bdb4c12872fe62f6cc4/cleanfid/fid.py#L140-L141

    Probably the easiest fix is to expand the EXTENSIONS with the upper-case versions
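
    A sketch of that fix (this EXTENSIONS set is illustrative, not copied from fid.py; lower-casing the suffix avoids listing both cases):

    from pathlib import Path

    EXTENSIONS = {"bmp", "jpg", "jpeg", "pgm", "png", "ppm", "tif", "tiff", "webp"}

    def list_images(folder):
        # compare lower-cased suffixes so .JPEG, .PNG, etc. also match
        return sorted(p for p in Path(folder).rglob("*")
                      if p.suffix.lower().lstrip(".") in EXTENSIONS)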

    opened by Reason239 2
  • Can KID be negative number

    I am trying to compute KID, but it is generating negative values. Can KID be a negative number?

    Here is the code that I used:

    from cleanfid import fid
    fdir1 = my_folder1_path
    fdir2 = my_folder2_path
    kid_score = fid.compute_kid(fdir1, fdir2)

    Each folder has only 6 images. My kid_score is -0.0406.

    Could someone please help me understand why the KID is less than zero?

    Thank you, Chandrakanth

    opened by chandrakanth-gudavalli 1
  • Cannot compute fid for a generator, using images in fdir2.

    This code is from fid.py, lines 459-470.

    elif gen is not None:
        if not verbose:
            print(f"compute FID of a model with {dataset_name}-{dataset_res} statistics")
        score = fid_model(gen, dataset_name, dataset_res, dataset_split,
                model=feat_model, z_dim=z_dim, num_gen=num_gen,
                mode=mode, num_workers=num_workers, batch_size=batch_size,
                device=device, verbose=verbose)
        return score
    
    # compute fid for a generator, using images in fdir2
    elif gen is not None and fdir2 is not None:
    

    There is no way we can enter the last elif, so I can't compare my generator with images in fdir2. Is this intentional?
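
    A sketch of one possible reordering (illustrative control flow only, not the actual fid.py patch): the more specific gen-plus-fdir2 case has to be checked first, because the gen-only condition already matches whenever both are set:

    def dispatch(gen=None, fdir2=None):
        # check the (gen, fdir2) case before the gen-only case,
        # otherwise the second branch is unreachable
        if gen is not None and fdir2 is not None:
            return "fid(gen, fdir2)"
        elif gen is not None:
            return "fid(gen, dataset statistics)"
        raise ValueError("need a generator")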

    opened by visittor 1
  • How to speed up the fid?

    Hi, thanks for sharing your work. In my case, I have a reference folder (fdir1) and about 50 target folders (fdir2), each containing hundreds of images. It takes a long time to calculate the scores with the default fid.compute_fid(fdir1, fdir2), and it seems to be choked by the CPU. Is there any way to speed it up?

    opened by wingvortex 2