Unofficial implementation of MUSIQ (Multi-Scale Image Quality Transformer)

Overview

MUSIQ: Multi-Scale Image Quality Transformer

Unofficial PyTorch implementation of the paper "MUSIQ: Multi-Scale Image Quality Transformer" (paper link: https://arxiv.org/abs/2108.05997)

This code does not exactly match what the paper describes.

  • It only works on the KonIQ-10k dataset, or more generally on databases whose images have a resolution of 1024 (width) x 768 (height).
  • Instead of using the 5-layer ResNet of the paper as the backbone network, we use a ResNet50 pretrained on the ImageNet database.
  • The Earth Mover's Distance (EMD) loss still needs to be implemented to train on other databases (a minimal sketch is given after this list).
  • We additionally use a ranking loss to improve performance (the training code including the ranking loss will be uploaded later).
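For reference, below is a minimal sketch of the normalized EMD loss commonly used for quality-score distributions in this line of work (e.g. NIMA); the function name, tensor shapes, and the r=2 exponent are assumptions, not code from this repository.

    import torch

    def emd_loss(p, q, r=2):
        # p, q: (batch, num_bins) tensors, each row a probability distribution
        # over quality-score bins. For 1-D distributions, the EMD reduces to a
        # norm of the difference between the two cumulative distributions.
        cdf_p = torch.cumsum(p, dim=-1)
        cdf_q = torch.cumsum(q, dim=-1)
        per_sample = torch.mean(torch.abs(cdf_p - cdf_q) ** r, dim=-1) ** (1.0 / r)
        return per_sample.mean()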

The environment settings are described below (I cannot guarantee that it works in other environments; an example install command is given after the list).

  • PyTorch=1.7.1 (with CUDA 11.0)
  • einops=0.3.0
  • numpy=1.18.3
  • cv2=4.2.0
  • scipy=1.4.1
  • json=2.0.9
  • tqdm=4.45.0
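If you are setting up a fresh environment, something like the following should cover the third-party packages (cv2 is provided by the opencv-python package, and json ships with the Python standard library; for a CUDA 11.0 build of PyTorch, follow the install selector on pytorch.org):

    pip install torch==1.7.1 einops==0.3.0 numpy==1.18.3 opencv-python scipy==1.4.1 tqdm==4.45.0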

Train & Validation

First, you need to download the weights of a ResNet50 pretrained on the ImageNet database.
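One way to obtain such weights is through torchvision, as sketched below; whether the repository expects a plain state_dict file, and the file name "resnet50.pth", are assumptions.

    import torch
    import torchvision

    # Download ImageNet-pretrained ResNet50 weights and save them to disk
    # (the expected file name "resnet50.pth" is an assumption).
    model = torchvision.models.resnet50(pretrained=True)
    torch.save(model.state_dict(), 'resnet50.pth')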

Second, you need to download the KonIQ-10k dataset.

  • Download the database from this website: http://database.mmsp-kn.de/koniq-10k-database.html
  • Set the database path in "train.py" (it is represented as "db_path" in "train.py").
  • Check that "koniq-10k.txt" is in the "IQA_list" folder.
  • Each line of "koniq-10k.txt" contains [scene number / image name / ground truth score] information (a parsing sketch is given after this list).

After those settings, you can run the train & validation code by running "train.py".

  • python3 train.py (execution code)
  • This code works on a single GPU. If you want to train on multiple GPUs, you need to modify the code.
  • The options are all included in "train.py", so change the variable "config" in "train.py" (a hypothetical example follows this list).
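As a purely hypothetical illustration of what editing "config" might look like: only "db_path" is confirmed by this README, and the other field names are assumptions, so check "train.py" for the real options.

    # Hypothetical sketch only: the real options live in the "config" variable
    # inside train.py; only "db_path" is mentioned in this README.
    config = {
        'db_path': '/path/to/koniq-10k',  # root folder of the KonIQ-10k images
        'batch_size': 8,                  # assumed field name
        'epochs': 120,                    # assumed field name
    }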

Below are the validation performances on the KonIQ-10k database (I'm still training the model, so the results will be updated later).

  • SRCC: 0.9023 / PLCC: 0.9232 (after training for 105 epochs)
  • If the code were implemented exactly as in the paper, the performance could likely be improved further.
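For reference, SRCC and PLCC can be computed with scipy (already in the environment list above); a minimal sketch with illustrative numbers:

    from scipy.stats import pearsonr, spearmanr

    predicted_scores = [3.1, 4.2, 2.5, 3.8]     # model outputs (illustrative)
    ground_truth_scores = [3.0, 4.5, 2.2, 3.9]  # MOS labels (illustrative)

    srcc, _ = spearmanr(predicted_scores, ground_truth_scores)
    plcc, _ = pearsonr(predicted_scores, ground_truth_scores)
    print(f'SRCC: {srcc:.4f} / PLCC: {plcc:.4f}')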

Inference

First, you need to specify the following variables in "inference.py" (example values follow the list):

  • dirname: root folder of the test images
  • checkpoint: checkpoint file (trained on the KonIQ-10k dataset)
  • result_score_txt: text file where the inference scores will be saved
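For example (all paths below are placeholders, not files shipped with the repository):

    # Placeholder values; adjust the paths to your setup.
    dirname = './test_images'                 # root folder of test images
    checkpoint = './checkpoints/model.pth'    # checkpoint trained on KonIQ-10k
    result_score_txt = './result_scores.txt'  # where inference scores are saved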

After those settings, you can run the inference code by running "inference.py".

  • python3 inference.py (execution code)

Acknowledgements

We referred to the following website when implementing the transformer: https://paul-hyun.github.io/transformer-01/


Comments
  • MSU Video Quality Metrics Benchmark Invitation

    Hello! We kindly invite you to participate in our video quality metrics benchmark. You can submit MUSIQ to the benchmark, following the submission steps described here. The dataset distortions refer to compression artifacts on professional and user-generated content. The full dataset is used to measure methods' overall performance, so we do not share it to avoid overfitting. Nevertheless, we provided the open part of it (around 1,000 videos) with our paper "Video compression dataset and benchmark of learning-based video-quality metrics", accepted to NeurIPS 2022.

    opened by msm1rnov
  • Training on custom datasets

    Hello,

    Thank you very much for this nice implementation.

    Can you please explain what the limits are for training on custom datasets?

    Thank you very much.

    Best,

    Nicolas

    opened by nicolasch96
  • resnet50.pth & pre-trained weight

    Hello, author! Thank you so much for sharing the code. Is "resnet50.pth" a pre-trained weight for the entire network, similar to the pre-training weights provided in the TensorFlow code?

    opened by cherryolg
  • Patch Embedding Module gradient update

    https://github.com/anse3832/MUSIQ/blob/f7e45268da2af7c883f310fa48dd7180ad4dc39e/trainer.py#L47

    I was wondering why the patch embedding module's backbone does not get updated, although its parameters are passed to the optimizer.

    opened by Abdurrahheem