Implementation for "Exploiting Aliasing for Manga Restoration" (CVPR 2021)

Overview

[CVPR Paper](To appear) | [Project Website](To appear) | BibTeX

Introduction

As a popular entertainment art form, manga enriches its line drawings with bitonal screentones. However, manga resources on the Internet often show screentone artifacts caused by scanning or rescaling at inappropriate resolutions. In this paper, we propose a two-stage method to restore high-quality bitonal manga from degraded copies. Our key observation is that the aliasing induced by downsampling bitonal screentones can serve as an informative clue for inferring the original resolution and screentones. First, we predict the target resolution from the degraded manga via the Scale Estimation Network (SE-Net) with a spatial voting scheme. Then, at the target resolution, we restore the region-wise bitonal screentones via the Manga Restoration Network (MR-Net) discriminatively, depending on the degradation degree: the original screentones are directly restored in pattern-identifiable regions, while visually plausible screentones are synthesized in pattern-agnostic regions. Quantitative evaluation on synthetic data and visual assessment on real-world cases demonstrate the effectiveness of our method.

Example Results

Below is an example of a manga image restored by our method. The image comes from the Manga109 dataset.

Degraded (input) | Restored (output)

Pretrained models

Download the models below and put them under release_model/.

MangaRestoration

Run

  1. Requirements:
    • Python 3.6
    • PyTorch (tested with release 1.1.0)
  2. Testing:
    • Place your test images under datazip/manga1/test.
    • Prepare the image filelist using flist.py.
    • Modify configs/manga.json to set the data paths.
    • Run python testreal.py -c [config_file] -n [model_name] -s [image_size].
    • For example: python testreal.py -c configs/manga.json -n resattencv -s 256
    • You can also run python testreal.py -c [config_file] -n [model_name] -s [image_size] -sl [scale] to specify the scale factor.
    • Note that the convex interpolation refinement requires large GPU memory; you can enable it by setting bilinear=False in MangaRestorator. By default, bilinear=True.
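The filelist step above can be sketched as follows. This is a guess at what flist.py does, based on its --path/--output arguments and its np.savetxt call; write_flist is an illustrative name, not the script's actual API:

```python
import os
import numpy as np

# Sketch of the filelist step (assumption: flist.py walks --path and
# writes one image path per line to --output via np.savetxt).
def write_flist(path, output):
    images = sorted(
        os.path.join(root, name)
        for root, _, files in os.walk(path)
        for name in files
        if name.lower().endswith(('.png', '.jpg', '.jpeg'))
    )
    out_dir = os.path.dirname(output)
    if out_dir:
        os.makedirs(out_dir, exist_ok=True)  # e.g. create flist/manga1/
    np.savetxt(output, images, fmt='%s')    # one path per line
    return images
```

For example, write_flist('datazip/manga1/test', 'flist/manga1/test.flist') would produce a filelist at the location the config is expected to point at.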

Citation

If any part of our paper or code is helpful to your work, please cite:

@inproceedings{xie2021exploiting,
  author = {Minshan Xie and Menghan Xia and Tien-Tsin Wong},
  title = {Exploiting Aliasing for Manga Restoration},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2021}
}

Comments
  • Testing with 1 image leads to error


    File "/content/MangaRestoration/core/datasetreal.py", line 25, in __init__
        self.data = [i for i in np.genfromtxt(os.path.join(data_args['flist_root'], data_args['name2'], split+'.flist'), dtype=np.str, encoding='utf-8',delimiter="\n")]
    TypeError: iteration over a 0-d array
    

    With an flist made with !python scripts/flist.py --path datazip/manga1/test --output flist/manga1/test.flist and one grayscale PNG in the test path.
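    For what it's worth, this looks like a known np.genfromtxt quirk: for a one-line file it returns a 0-d array, which is not iterable. A minimal sketch of a possible fix, where read_flist is a hypothetical stand-in for the list comprehension in datasetreal.py:

    ```python
    import numpy as np

    # Hypothetical stand-in for the flist-reading line in core/datasetreal.py.
    # np.genfromtxt returns a 0-d array for a one-line file; np.atleast_1d
    # promotes it to 1-d so iteration works with a single image as well.
    def read_flist(path):
        entries = np.genfromtxt(path, dtype=str, encoding='utf-8', delimiter="\n")
        return [i for i in np.atleast_1d(entries)]
    ```

    (dtype=str also avoids the deprecated np.str alias used in the original line.)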

    opened by torridgristle 0
  • Pretrained model host


    Since the pretrained model is only about 10MB, I believe you can upload it to the GitHub repo. It'd be a shame if Google suddenly revoked access to it.

    opened by torridgristle 0
  • Incomplete guide, led to faulty installation


    Hello, I tried this project on a "clean" machine with freshly installed Python and Torch (Python 3.9.5, torch 1.8.5, CUDA toolkit 11.3). I noticed there are more dependencies than the README specifies, namely matplotlib, numpy, and opencv-python. It would be nice if you added them to the README.

    Moving on to the instructions, I believe they are really unclear. Here are some issues I came across while reading them:

    1. The 'Prepare images filelist using flist.py' instruction suggests running flist.py with --path and --output arguments, the first being datazip/manga1/test (if we follow the instructions word for word) and the second being...... unknown. Looking at the code, this argument is used in the np.savetxt(args.output, images, fmt='%s') call, which requires a file name (and optionally an extension). Since this file is used internally by the program, it is unclear what name and/or extension it must have in order to be found. Personally, I would suggest having the program create the file with a hardcoded name, so the user never has to meddle with it.
    2. In configs/manga.json, the "data_loader" value contains a key "flist_root" with the value "./flist". However, no such directory exists when one clones this repo. Without knowing if I'm right, I assumed this is the folder where the file created by flist.py goes, so the program throws an error when run because the folder does not exist (or do we have to create it before running flist.py? Or before running testreal.py?). I know the README says to modify this file, but it is unclear what exactly we are supposed to do. Guess: is the "flist_root" value supposed to point at the output of flist.py? If so, as line 26 of core/datasetreal.py makes obvious, that file is expected to have an .flist extension; why not predefine that extension directly in flist.py?
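    Piecing the two points above together with the traceback in the first comment (the loader joins flist_root, name2, and split + '.flist'), the config fragment presumably looks something like this hypothetical sketch; the values are guesses, not the repo's actual config:

    ```json
    {
      "data_loader": {
        "flist_root": "./flist",
        "name2": "manga1"
      }
    }
    ```

    With split set to "test", the loader would then open ./flist/manga1/test.flist.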

    I tried to fix these uncertainties by guessing and inevitably manipulating parts of the code. For example, line 26 of core/datasetreal.py expects a file named "train.flist", but no such file exists in the initial clone, so the program returned an error about being unable to locate it.

    I don't know how useful the following traceback will be, but I'll mention what I manipulated to get through several errors while attempting to run the program.

    I first renamed the folder "scripts" to "flist", re-ran flist.py to recreate the filelist, this time with the name "test.flist", and replaced "train" with "test" in line 19 of core/datasetreal.py. With these changes it successfully located the file and got past the file-not-found errors, but then failed with:

    Traceback (most recent call last):
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 986, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\queue.py", line 179, in get
        raise Empty
    _queue.Empty
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\multiprocessing\spawn.py", line 59, in _wrap
        fn(i, *args)
      File "E:\NEURALSTUFF\MangaRestoration-main\testreal.py", line 87, in main_worker
        for idx, (images, names) in enumerate(dataloader):
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
        data = self._next_data()
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1182, in _next_data
        idx, data = self._get_data()
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1138, in _get_data
        success, data = self._try_get_data()
      File "C:\Users\aquap\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 999, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
    RuntimeError: DataLoader worker (pid(s) 8424, 2476, 10100) exited unexpectedly
    opened by aquapaulo 2