Overview

A Benchmark for Rough Sketch Cleanup

This is the code repository associated with the paper A Benchmark for Rough Sketch Cleanup by Chuan Yan, David Vanderhaeghe, and Yotam Gingold from SIGGRAPH Asia 2020.

This code computes the metrics described in the paper and generates the benchmark website to compare the output of various sketch cleanup algorithms.

The Directory Structure

Data directories are defined in the file cfg.yaml:

  • dataset_dir: User puts the dataset here. Needed by the website.
  • alg_dir: User puts automatic results here. Needed by the website.
  • web_dir: We generate the website here. Image paths look like ../{alg_dir}/rest/of/path.svg
  • table_dir: We generate the metrics computed by the benchmark here. Needed to generate the website, but not needed when hosting the website. (A precomputed version for algorithms we tested is provided below.)
  • test_dir: We generate resized image files for testing algorithms here. Needed also when computing metrics. Not needed by the website. (A precomputed version is provided below.)

The default values are:

dataset_dir: './data/Benchmark_Dataset'
alg_dir: './data/Automatic_Results'
web_dir: './data/web'
table_dir: './data/Evaluation_Data'
test_dir: './data/Benchmark_Testset'

If you are generating your own test_dir data, you need Inkscape and ImageMagick. run_benchmark.py tries to find them automatically based on your OS. You can also set the paths directly in cfg.yaml by changing inkscape_path and magick_path to point to the Inkscape executable and ImageMagick's convert executable, respectively.
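
For reference, here is a minimal Python sketch of reading these settings with PyYAML (one of the dependencies listed below). The key names mirror cfg.yaml; the actual loading logic lives in run_benchmark.py, so treat this only as an illustration:

import yaml  # PyYAML

with open('cfg.yaml') as f:
    cfg = yaml.safe_load(f)

dataset_dir = cfg['dataset_dir']   # e.g. './data/Benchmark_Dataset'
alg_dir = cfg['alg_dir']           # e.g. './data/Automatic_Results'
test_dir = cfg['test_dir']         # e.g. './data/Benchmark_Testset'

# Optional explicit tool paths, useful if automatic detection fails.
inkscape_path = cfg.get('inkscape_path')
magick_path = cfg.get('magick_path')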

Installing Code Dependencies

Clone or download this repository. The code is written in Python. It depends on the following modules: aabbtree, CairoSVG, cssutils, matplotlib, numpy, opencv-python, pandas, Pillow, PyYAML, scipy, svglib, svgpathtools, tqdm

You can install these modules with:

pip3 install -r requirements.txt

or, for a more reproducible environment, use Poetry (brew install poetry or pip install poetry):

poetry install --no-root
poetry shell

or Pipenv (pip install pipenv):

pipenv install
pipenv shell

The shell command activates the virtual environment. Run it once before running the scripts.

If you are not downloading the precomputed test images, make sure the following external software is installed on your system:

  1. Inkscape 1.x. Please install an up-to-date Inkscape; versions prior to 1.0 have incompatible command-line parameters. brew install --cask inkscape or apt-get install inkscape.
  2. ImageMagick. brew install imagemagick or apt-get install imagemagick.

The Dataset and Precomputed Output

You can download the sketch dataset, precomputed algorithmic output, and computed metrics here: Benchmark_Dataset.zip (900 MB), Automatic_Results.zip (440 MB), Evaluation_Data.zip (20 MB). Unzip them in ./data/ (unless you changed the paths in cfg.yaml):

unzip Benchmark_Dataset.zip
unzip Automatic_Results.zip
unzip Evaluation_Data.zip

Note that the vectorized data has been normalized to have uniform line width. It was too tedious for artists to match line widths with the underlying image, so we did not require them to do so; instead, we normalized the data afterwards.

Running

Generating or Downloading the Testset

(If you are regenerating the website from the paper using the precomputed output and already computed metrics, you do not need the Testset. You do need it if you want to change anything other than the website itself.)

The Testset consists of files derived from the dataset: rasterized versions of vector images and downsized images. You can regenerate it (see below) or download Benchmark_Testset.zip (780 MB) and extract it into ./data/ (unless you changed the paths in cfg.yaml):

unzip Benchmark_Testset.zip

You can regenerate the Testset (necessary if you change the dataset itself) by running the following commands:

python3 run_benchmark.py --normalize   # generate normalized versions of SVGs
python3 run_benchmark.py --generate-test # generate rasterized versions of Dataset, at different resolutions

This will scan dataset_dir and test_dir and generate missing normalized and rasterized images as needed. It takes approximately 20 to 30 minutes to generate the entire Testset.

Adding Algorithms to the Benchmark

Run your algorithm on all images in the Testset. If your algorithm takes raster input, run on all images in ./data/Benchmark_Testset/rough/pixel. If your algorithm takes vector input, run on all images in ./data/Benchmark_Testset/rough/vector. For each input, save the corresponding output image as a file with the same name in the directory: ./data/Automatic_Results/{name_of_your_method}{input_type}/{parameter}/

The algorithm folder name must contain two parts: name_of_your_method followed by an input_type suffix. The input_type suffix must be either -png or -svg. The parameter subdirectory can be any string; the string none is replaced with the empty string when generating the website. Folders beginning with a . are ignored. For examples, see the precomputed algorithmic output in ./data/Automatic_Results and the evaluation results in ./data/Evaluation_Data.
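
For instance, output for a hypothetical raster-input method named MyAlgorithm, run once with default settings and once with a tuned parameter, would be laid out as follows (the method, parameter, and file names are purely illustrative):

./data/Automatic_Results/
    MyAlgorithm-png/
        none/
            sketch_a.svg
            sketch_b.svg
        threshold=0.5/
            sketch_a.svg
            sketch_b.svg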

If your algorithm runs via alg path/to/input.png path/to/output.svg, here are two example commands to run it in batch on the entire benchmark. Via find and parallel:

find ./data/Benchmark_Testset/rough/pixel -name '*.png' -print0 | parallel -0 alg '{}' './data/Automatic_Results/MyAlgorithm-png/none/{/.}.svg'

Via fd:

fd -e png . ./data/Benchmark_Testset/rough/pixel -x alg '{}' './data/Automatic_Results/MyAlgorithm-png/none/{/.}.svg'
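
If you prefer not to use find/parallel or fd, the same batch run can be scripted in Python. A minimal sketch, assuming the same placeholder executable alg from the examples above is on your PATH:

import subprocess
from pathlib import Path

in_dir = Path('./data/Benchmark_Testset/rough/pixel')
out_dir = Path('./data/Automatic_Results/MyAlgorithm-png/none')
out_dir.mkdir(parents=True, exist_ok=True)

# Mirror the find command above: process every PNG under the input directory.
for png in sorted(in_dir.rglob('*.png')):
    out = out_dir / (png.stem + '.svg')
    subprocess.run(['alg', str(png), str(out)], check=True)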

Computing the Metrics

Run the evaluation with the command:

python3 run_benchmark.py --evaluation

This command creates CSV files in ./data/Evaluation_Data. It will not overwrite existing CSV files. If you downloaded the precomputed data, remove a file to regenerate it.
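
The generated CSV files can be inspected directly with pandas (already a dependency). A minimal sketch that lists the tables and peeks at the first one:

import pandas as pd
from pathlib import Path

# List the metric tables produced by the evaluation step.
csvs = sorted(Path('./data/Evaluation_Data').glob('*.csv'))
for csv in csvs:
    print(csv.name)

# Load one table and preview its rows.
df = pd.read_csv(csvs[0])
print(df.head())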

Generating the Website to View Evaluation Results

After you have called the evaluation step above to compute the metrics, generate the website with the command:

python3 run_benchmark.py --website

You must also generate thumbnails once with the command:

python3 run_benchmark.py --thumbs

Internally, the --thumbs command runs a shell command that calls find, convert, and parallel.

To view the website, open help.html or index.html inside web_dir manually, or run:

python3 run_benchmark.py --show

The website visualizes all algorithms' output and plots the metrics.

Putting It All Together

If you don't want to call each step separately, simply call:

python3 run_benchmark.py --all

Computing Metrics on a Single Sketch

Similarity Metrics

To run the similarity metrics manually, use tools/metric_multiple.py. To get help, run:

python3 tools/metric_multiple.py --help

To compare two files:

python3 tools/metric_multiple.py -gt "example/simple-single-dot.png" -i "example/simple-single-dot-horizontal1.png" -d 0 --f-measure --chamfer --hausdorff

Vector Metrics

To evaluate junction quality:

python3 tools/junction_quality.py --help

To compute arc length statistics:

python3 tools/svg_arclengths_statistics.py --help

Rasterization

If you need to convert a file from an SVG to a PNG, you can do so by specifying the output filename:

inkscape my_file.svg --export-filename="output-WIDTH.png" --export-width=WIDTH --export-height=HEIGHT

or by specifying the output type (the input filename's extension is replaced):

inkscape my_file.svg --export-type=png --export-width=WIDTH --export-height=HEIGHT

The shorthand versions of the above rasterization commands are:

inkscape -o output-WIDTH.png -w WIDTH -h HEIGHT my_file.svg

or

inkscape --export-type=png -w WIDTH -h HEIGHT my_file.svg

If you pass only one of width or height, the other is chosen automatically in a manner preserving the aspect ratio.
