General Virtual Sketching Framework for Vector Line Art (SIGGRAPH 2021)

Overview

Paper | Project Page

Outline

  • Dependencies
  • Testing with Trained Weights
  • Additional Tools
  • Training
  • Citation

Model Preparation

Download the models here:

  • pretrain_clean_line_drawings (105 MB): for vectorization
  • pretrain_rough_sketches (105 MB): for rough sketch simplification
  • pretrain_faces (105 MB): for photograph to line drawing

Then, place them in this file structure:

outputs/
    snapshot/
        pretrain_clean_line_drawings/
        pretrain_rough_sketches/
        pretrain_faces/
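
If you want a quick sanity check that the weights are in place before running the test scripts, a minimal sketch like the following (not part of the official repo) verifies the expected directories:

# Minimal layout check (assumed helper, not part of the official code).
import os

expected = [
    "outputs/snapshot/pretrain_clean_line_drawings",
    "outputs/snapshot/pretrain_rough_sketches",
    "outputs/snapshot/pretrain_faces",
]
for path in expected:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(status, path)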

Usage

Choose an image from the sample_inputs/ directory and run one of the following commands for each task. The results will be saved under outputs/sampling/.

python3 test_vectorization.py --input muten.png

python3 test_rough_sketch_simplification.py --input rocket.png

python3 test_photograph_to_line.py --input 1390.png

Note: our approach starts drawing from a randomly selected initial position, so it produces different results on every run (some may look good and others may not). We recommend running several trials and choosing the visually best result. The number of outputs can be set with the --sample argument:

python3 test_vectorization.py --input muten.png --sample 10

python3 test_rough_sketch_simplification.py --input rocket.png --sample 10

python3 test_photograph_to_line.py --input 1390.png --sample 10
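
To run many trials over several inputs in one go, a small driver script along these lines can help; it is a hypothetical convenience wrapper that only uses the --input and --sample flags documented above:

# Hypothetical batch driver around the documented CLI (not part of the repo).
import glob
import os
import subprocess

for image_path in sorted(glob.glob("sample_inputs/*.png")):
    name = os.path.basename(image_path)
    subprocess.run(
        ["python3", "test_vectorization.py", "--input", name, "--sample", "10"],
        check=True,
    )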

Reproducing Paper Figures: our results (download from here) were selected from a number of trials. Note that reproducing them exactly requires using the same initial drawing positions.

Additional Tools

a) Visualization

Our vector output is stored in an npz file. Run the following command to obtain the rendered output and the drawing order. Results will be saved in the same directory as the npz file.

python3 tools/visualize_drawing.py --file path/to/the/result.npz 
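
If you want to inspect the raw contents of a result before visualizing it, a minimal sketch like the one below lists whatever arrays the npz file contains (no assumptions are made about the key names):

# Peek inside a result npz (assumed helper; prints whatever keys exist).
import sys
import numpy as np

data = np.load(sys.argv[1], allow_pickle=True)
for key in data.files:
    value = data[key]
    shape = getattr(value, "shape", None)
    print(key, shape)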

b) GIF Making

To see the dynamic drawing process, run the following command to generate a GIF. The result will be saved in the same directory as the npz file.

python3 tools/gif_making.py --file path/to/the/result.npz 
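
If you prefer to assemble the animation yourself, a hedged alternative is to render per-step frames first (a hypothetical frames/ directory of numbered PNGs) and combine them with imageio:

# Assumed setup: per-step renderings already saved as frames/000.png, 001.png, ...
import glob
import imageio

frames = [imageio.imread(p) for p in sorted(glob.glob("frames/*.png"))]
imageio.mimsave("drawing_order.gif", frames, duration=0.05)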

c) Conversion to SVG

Our vector output in the npz file follows the stroke format of Eq. (1) in the main paper. Run the following command to convert it to the SVG format. The result will be saved in the same directory as the npz file.

python3 tools/svg_conversion.py --file path/to/the/result.npz 
  • The conversion is implemented in two modes, selected with the --svg_type argument (see the example after this list):
    • single (default): each stroke (a single segment) becomes a path in the SVG file
    • cluster: each continuous curve (consisting of multiple strokes) becomes a path in the SVG file
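
For example, to export one path per continuous curve, the cluster mode can be selected like this:

python3 tools/svg_conversion.py --file path/to/the/result.npz --svg_type cluster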

Important Notes

In the SVG format, all segments on a path share the same stroke-width, whereas in our stroke design, strokes on the same curve have different widths. Within a single stroke (one segment), the thickness also changes linearly from one endpoint to the other. Therefore, neither of the two conversion modes above produces results visually identical to those in our paper. (Please mention this issue in your paper if you make qualitative comparisons against our results in SVG format.)
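
If you need a closer visual approximation in SVG despite this limitation, one workaround (a hedged sketch, not part of the official tools) is to emit each tapering segment as a filled quadrilateral instead of a stroked line. The helper below is hypothetical and assumes the two endpoints and their widths have already been extracted from the npz output:

# Hypothetical helper: approximate a segment whose width changes linearly
# from w0 at (x0, y0) to w1 at (x1, y1) by a filled quadrilateral SVG path.
import math

def tapered_segment_path(x0, y0, x1, y1, w0, w1):
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    # Unit normal to the segment direction.
    nx, ny = -dy / length, dx / length
    # Offset each endpoint by half its own width on both sides.
    corners = [
        (x0 + nx * w0 / 2, y0 + ny * w0 / 2),
        (x1 + nx * w1 / 2, y1 + ny * w1 / 2),
        (x1 - nx * w1 / 2, y1 - ny * w1 / 2),
        (x0 - nx * w0 / 2, y0 - ny * w0 / 2),
    ]
    points = " L ".join(f"{x:.2f},{y:.2f}" for x, y in corners)
    return f'<path d="M {points} Z" fill="black" stroke="none"/>'

# Example: a horizontal segment widening from 2 to 8 pixels.
print(tapered_segment_path(0, 0, 100, 0, 2, 8))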


Training

Preparations

Download the models here:

  • pretrain_neural_renderer (40 MB): the pre-trained neural renderer
  • pretrain_perceptual_model (691 MB): the pre-trained perceptual model for raster loss

Download the datasets here:

  • QuickDraw-clean (14 MB): for clean line drawing vectorization. Taken from the QuickDraw dataset.
  • QuickDraw-rough (361 MB): for rough sketch simplification. Synthesized by the pencil drawing generation method from Sketch Simplification.
  • CelebAMask-faces (370 MB): for photograph to line drawing. Processed from the CelebAMask-HQ dataset.

Then, place them in this file structure:

datasets/
    QuickDraw-clean/
    QuickDraw-rough/
    CelebAMask-faces/
outputs/
    snapshot/
        pretrain_neural_renderer/
        pretrain_perceptual_model/

Running

We recommend training with multiple GPUs. We trained each task with two GPUs (11 GB each).

python3 train_vectorization.py

python3 train_rough_photograph.py --data rough

python3 train_rough_photograph.py --data face
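
The commands above do not show a GPU-selection flag; if you need to pin training to particular GPUs, the standard CUDA_VISIBLE_DEVICES environment variable is one common option (an assumption about your environment, not something from the original instructions), e.g.:

CUDA_VISIBLE_DEVICES=0,1 python3 train_vectorization.py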

Citation

If you use the code and models, please cite:

@article{mo2021virtualsketching,
  title   = {General Virtual Sketching Framework for Vector Line Art},
  author  = {Mo, Haoran and Simo-Serra, Edgar and Gao, Chengying and Zou, Changqing and Wang, Ruomei},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2021)},
  year    = {2021},
  volume  = {40},
  number  = {4},
  pages   = {51:1--51:14}
}
Comments
  • The paper's code seems not to be here

    Hello,

    I have just read your paper and am very interested in this project, but when I followed the link to this GitHub page, it seems there is no implementation code here. May I ask whether the code will be uploaded in the future, and when?

    Thank you very much! Tan

    opened by AriaTZY 3
  • Setting the same initial cursor positions for all test images

    Hey, can you please tell me how I should set the same initial cursor position for all the images I vectorize? And does setting the same initial cursor produce the same stroke data every time?

    Thank you.

    opened by Hrishikesh2798 2
  • Train the neural renderer

    Hi Mark,

    Thank you for your excellent and impressive work. I would like to train the neural renderer myself. However, if I generate a random stroke (i.e. a single fixed-width line, say width=3) and use an L2 loss, the trained neural renderer outputs a solid dark image. I think this is due to the small difference between the dark image and the one with a single line. I cannot find the code for training the neural renderer. Could you please share some ideas or the code you used to train it?

    Best regards, Rongkai

    opened by RongkaiZhang 2
  • Fine-tuning on other data

    Hello! Great work! I was wondering whether there is a way to fine-tune your pretrained vectorization model. If I'm not mistaken, train_vectorization.py only supports a cold-start variant of training (i.e. the pretrained clean_vectorization module is not loaded). Training from scratch works fine, but when I tried to load the pretrained clean_vectorization model (by adding load_checkpoint(sess, model_dir, gen_model_pretrain=True)) I encountered the following error about architecture mismatching:

    2 root error(s) found.

    (0) Invalid argument: tensor_name = Variable_1; expected dtype int32 does not equal original dtype float
        tensor_name = Variable_3; expected dtype float does not equal original dtype int32
        [[node save_1/RestoreV2 (defined at /opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
        [[save_1/RestoreV2/_271]]
    (1) Invalid argument: tensor_name = Variable_1; expected dtype int32 does not equal original dtype float
        tensor_name = Variable_3; expected dtype float does not equal original dtype int32
        [[node save_1/RestoreV2 (defined at /opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
    0 successful operations. 0 derived errors ignored.

    Original stack trace for 'save_1/RestoreV2':
      File "train_vectorization.py", line 319, in <module>
        main()
      File "train_vectorization.py", line 315, in main
        trainer(model_params)
      File "train_vectorization.py", line 296, in trainer
        load_checkpoint(sess, '/logs/snapshot/pretrain_clean_line_drawings', gen_model_pretrain=True)
      File "/code/virtual_sketching/utils.py", line 51, in load_checkpoint
        restorer = tf.train.Saver(load_var)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 828, in __init__
        self.build()
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
        build_restore=build_restore)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
        restore_sequentially, reshape)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 328, in _AddRestoreOps
        restore_sequentially)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 575, in bulk_restore
        return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_io_ops.py", line 1696, in restore_v2
        name=name)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/opt/anaconda3/envs/virtual_sketching/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()

    I figured out that the problem is likely that the Virtual_Sketching classes differ between training and testing. From the error logs above it is hard to tell exactly where the problem is. Maybe you have already encountered this kind of error. I would be very grateful if you could suggest some solutions, point me to where to look, or explain the right way to do fine-tuning.

    opened by Vahe1994 2
  • A question about the code in model_common_train

    Hello author, regarding line 931 of model_common_train.py: filter_curr_stroke_image_soft = tf.multiply(tf.subtract(1.0, curr_state_soft), curr_stroke_image_large). Should this be changed to filter_curr_stroke_image_soft = tf.multiply(curr_state_soft, curr_stroke_image_large), i.e. without the tf.subtract(1.0, curr_state_soft) step?

    opened by DanielCho-HK 2