Multi-Content GAN for Few-Shot Font Style Transfer at CVPR 2018


MC-GAN in PyTorch

This is the implementation of the Multi-Content GAN for Few-Shot Font Style Transfer. The code was written by Samaneh Azadi. If you use this code or our collected font dataset for your research, please cite:

Multi-Content GAN for Few-Shot Font Style Transfer; Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell; in CVPR, 2018.

Prerequisites:

  • Linux or macOS
  • Python 2.7
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Install PyTorch and dependencies from http://pytorch.org
  • Install torchvision from source:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
pip install visdom
pip install dominate
pip install scikit-image
  • Clone this repo:
mkdir FontTransfer
cd FontTransfer
git clone https://github.com/azadis/MC-GAN
cd MC-GAN

MC-GAN train/test

  • Download our gray-scale 10K font data set:

./datasets/download_font_dataset.sh Capitals64

../datasets/Capitals64/test_dict/dict.pkl ensures that the randomly chosen observed glyphs are the same across different test runs on the Capitals64 dataset. It is a dictionary with font names as keys and arrays of random glyph indices in [0, 26) as values; the length of each array equals the number of non-observed glyphs in that font.
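For reference, here is a minimal sketch of inspecting this file (the path is as above; the font name printed is whatever happens to be the first key):

import pickle

with open('../datasets/Capitals64/test_dict/dict.pkl', 'rb') as f:
    test_dict = pickle.load(f)

# keys are font names; values are arrays of random glyph indices in [0, 26)
font_name = list(test_dict.keys())[0]
print(font_name, test_dict[font_name])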

../datasets/Capitals64/BASE/Code New Roman.0.0.png is a fixed simple font used for training the conditional GAN in the End-to-End model.

  • Download our collected real-world fonts, each with only a few observed letters:

./datasets/download_font_dataset.sh public_web_fonts

Given a few letters of a font ${DATA}, for example the 5 letters {T,O,W,E,R}, the training directory ${DATA}/A should contain 5 images, each of dimension 64x(64x26)x3, in which 5 - 1 = 4 of the observed letters are given and the rest are zeroed out. Each image should be saved as ${DATA}_${IND}.png, where ${IND} is the index (in [0,26)) of the letter omitted from the observed set. The training directory ${DATA}/B contains images of dimension 64x64x3 in which only the omitted letter is given; image names follow the same convention as in ${DATA}/A. ${DATA}/A/test/${DATA}.png contains all 5 given letters as one 64x(64x26)x3-dimensional image. The structure of the directories for these real-world fonts is shown below, followed by a sketch of how one such training image could be assembled; refer to the examples in ../datasets/public_web_fonts for more information.

../datasets/public_web_fonts
                      └── ${DATA}/
                          ├── A/
                          │  ├──train/${DATA}_${IND}.png
                          │  └──test/${DATA}.png
                          └── B/
                             ├──train/${DATA}_${IND}.png
                             └──test/${DATA}.png
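
As an illustration of this layout, the sketch below assembles one ${DATA}/A training image for the {T,O,W,E,R} example with 'E' omitted. The 64x64 glyph crop file names are hypothetical and assumed to be RGB; numpy and scikit-image were installed above.

import numpy as np
from skimage import io

# observed letters and their indices in [0, 26); 'E' (index 4) is omitted
observed = {19: 'T.png', 14: 'O.png', 22: 'W.png', 17: 'R.png'}
canvas = np.zeros((64, 64 * 26, 3), dtype=np.uint8)  # 64x(64x26)x3, zeroed out
for ind, path in observed.items():
    # place each observed 64x64 RGB crop in its slot; the rest stays zero
    canvas[:, 64 * ind:64 * (ind + 1), :] = io.imread(path)[:, :, :3]
# save as ${DATA}_${IND}.png with IND=4, the index of the omitted letter 'E'
io.imsave('TOWER_4.png', canvas)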
  • (Optional) Download our synthetic color gradient font data set:

./datasets/download_font_dataset.sh Capitals_colorGrad64
  • Train Glyph Network:
./scripts/train_cGAN.sh Capitals64

Model parameters will be saved under ./checkpoints/GlyphNet_pretrain.

  • Test Glyph Network after a specific number of epochs (e.g. 400, by setting EPOCH=400 in ./scripts/test_cGAN.sh):
./scripts/test_cGAN.sh Capitals64
  • (Optional) View the generated images (e.g. after 400 epochs):
cd ./results/GlyphNet_pretrain/test_400/

If you are running the code on your local machine, open index.html. If you are running remotely via ssh, run the following on your remote machine (under Python 3, the equivalent is python -m http.server 8881):

python -m SimpleHTTPServer 8881

Then on your local machine, start an SSH tunnel:

ssh -N -f -L localhost:8881:localhost:8881 remote_user@remote_host

Now open your browser on the local machine and type in the address bar:

localhost:8881
  • (Optional) Plot loss functions values during training, from MC-GAN directory:
python util/plot_loss.py --logRoot ./checkpoints/GlyphNet_pretrain/
  • Train the End-to-End network (e.g. on DATA=ft37_1): you can train the Glyph Network following the instructions above, or download our pre-trained model by running:
./pretrained_models/download_cGAN_models.sh

Now, you can train the full model:

./scripts/train_StackGAN.sh ${DATA}
  • Test End-to-End network:
./scripts/test_StackGAN.sh ${DATA}

Results will be saved under ./results/${DATA}_MCGAN_train.

  • (Optional) Make a video from your results in different training epochs:

First, train your model and save the model weights at every epoch by setting opt.save_epoch_freq=1 in scripts/train_StackGAN.sh. Then test at different epochs and make the video with:

./scripts/make_video.sh ${DATA}

Follow the previous steps to visualize the generated images and training curves, replacing GlyphNet_pretrain with ${DATA}_StackGAN_train.

Training/test Details

  • Flags: see options/train_options.py, options/base_options.py and options/test_options.py for explanations on each flag.

  • Baselines: to reproduce the Image Translation baseline, or to try tiling glyphs rather than stacking them, refer to the end of scripts/train_cGAN.sh. If you only want to train OrnaNet on top of clean glyphs, refer to the end of scripts/train_StackGAN.sh.

  • Image Dimension: we have tried this network only on 64x64 images of letters. We do not scale or crop the images, since both opt.fineSize and opt.loadSize are set to 64. A quick way to sanity-check these dimensions is shown below.
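
As a quick sanity check of the expected dimensions, one could verify a stacked test image with scikit-image (installed earlier); the path below is illustrative, reusing the ft37_1 example:

from skimage import io

# expected shape: 64 rows, 64*26 columns, 3 channels
img = io.imread('../datasets/public_web_fonts/ft37_1/A/test/ft37_1.png')
print(img.shape)
assert img.shape[0] == 64 and img.shape[1] == 64 * 26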

Citation

If you use this code or the provided dataset for your research, please cite our paper:

@inproceedings{azadi2018multi,
  title={Multi-content gan for few-shot font style transfer},
  author={Azadi, Samaneh and Fisher, Matthew and Kim, Vladimir and Wang, Zhaowen and Shechtman, Eli and Darrell, Trevor},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  volume={11},
  pages={13},
  year={2018}
}

Acknowledgements

We thank Elena Sizikova for downloading all fonts used in the 10K font data set.

Code is inspired by pytorch-CycleGAN-and-pix2pix.

Comments
  • invalid index of a 0-dim tensor

    When I run the model on CPU, I get this error:

    Traceback (most recent call last):
      File "train.py", line 38, in <module>
        errors = model.get_current_errors()
      File "/Users/XXX/Desktop/XXX/MC-GAN/models/cGAN_model.py", line 250, in get_current_errors
        return OrderedDict([('G_GAN', self.loss_G_GAN.data[0]),
    IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

    Please change the code return OrderedDict([('G_GAN', self.loss_G_GAN.data[0]) on line 250 of cGAN_model.py to return OrderedDict([('G_GAN', self.loss_G_GAN.data)
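
    As the error message itself suggests, tensor.item() is the forward-compatible alternative on PyTorch >= 0.4. A minimal sketch of the corrected method (only the first loss entry is shown; the remaining entries would need the same change):

    from collections import OrderedDict

    def get_current_errors(self):
        # sketch of the fix in models/cGAN_model.py:
        # .item() converts the 0-dim loss tensor to a Python number
        return OrderedDict([('G_GAN', self.loss_G_GAN.item())])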

    opened by Jakexxh 3
  • RuntimeError: The size of tensor a (3) must match the size of tensor b (7) at non-singleton dimension 0

    I get this error when running ./scripts/train_StackGAN.sh ${DATA}

    model [StackGANModel] was created
    create web directory ./checkpoints/BRAVE_MCGAN_train/web...
    saving the model at the end of epoch 0, iters 0
    Traceback (most recent call last):
      File "train_Stack.py", line 49, in <module>
        model.optimize_parameters_Stacked(epoch)
      File "/home/abc/FontTransfer/MC-GAN/models/StackGAN_model.py", line 538, in optimize_parameters_Stacked
        self.backward_G(fake_B0_grad, iter)
      File "/home/abc/FontTransfer/MC-GAN/models/StackGAN_model.py", line 408, in backward_G
        self.loss_G_L1 = self.criterionL1(weights * self.fake_B0, weights * self.fake_B0_init.detach()) * self.opt.lambda_C
    RuntimeError: The size of tensor a (3) must match the size of tensor b (7) at non-singleton dimension 0
    

    Which versions of pytorch and torchvision is this repo based on? Mine are torch 0.3.1 and torchvision 0.2.0.

    opened by LiberiFatali 3
  • Error when running sh train_cGAN.sh

    Hi, thank you for sharing the code. When I run sh train_cGAN.sh, I get this error:

    RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/torch/lib/THC/THCGeneral.c:70

    My system is Ubuntu 16.04, CUDA 8.0, pytorch 0.3.0, torchvision 0.2.0, python 2.7.12. Thanks.

    opened by clscy 2
  • AssertionError: not a valid directory

    Hi, so I only modified the cGAN script to run on a CPU rather than a GPU, and I'm consistently running into this AssertionError. I have followed the path manually to ensure it points to the right location (it does), but this error is thrown even though the directory does exist. Could you take a peek at my script and the referenced error? I could use the help, because I have been unable to fix this on my own using web references.

    #!/bin/bash -f
    #=====================================
    # MC-GAN
    # Train and Test conditional GAN Glyph network
    # By Samaneh Azadi
    #=====================================

    #=====================================
    # Set Parameters
    #=====================================
    DATA=$1
    DATASET="../datasets/${DATA}/"
    experiment_dir="GlyphNet_pretrain"
    MODEL=cGAN
    MODEL_G=resnet_6blocks
    MODEL_D=n_layers
    n_layers_D=1
    NORM=batch
    IN_NC=26
    O_NC=26
    GRP=26
    PRENET=2_layers
    FINESIZE=64
    LOADSIZE=64
    LAM_A=100
    NITER=500
    NITERD=100
    BATCHSIZE=150
    CUDA_ID=-1

    if [ ! -d "./checkpoints/${experiment_dir}" ]; then
        mkdir "./checkpoints/${experiment_dir}"
    fi
    LOG="./checkpoints/${experiment_dir}/output.txt"
    if [ -f $LOG ]; then
        rm $LOG
    fi

    exec &> >(tee -a "$LOG")

    #=======================================
    # Train Glyph Network on font dataset
    #=======================================
    python train.py --dataroot ../datasets --name "${experiment_dir}" \
        --model ${MODEL} --which_model_netG ${MODEL_G} --which_model_netD ${MODEL_D} --n_layers_D ${n_layers_D} --which_model_preNet ${PRENET} \
        --norm ${NORM} --input_nc ${IN_NC} --output_nc ${O_NC} --grps ${GRP} --fineSize ${FINESIZE} --loadSize ${LOADSIZE} --lambda_A ${LAM_A} --align_data --use_dropout \
        --display_id 0 --niter ${NITER} --niter_decay ${NITERD} --batchSize ${BATCHSIZE} --conditional --save_epoch_freq 100 --print_freq 100 --conv3d --gpu_ids ' '

    #=======================================
    # Train on RGB inputs to generate RGB outputs; Image Translation in the paper
    #=======================================
    # CUDA_VISIBLE_DEVICES=2 python ~/AdobeFontDropper/train.py --dataroot ../datasets/Capitals_colorGrad64/ --name "${experiment_dir}" \
    #     --model cGAN --which_model_netG resnet_6blocks --which_model_netD n_layers --n_layers_D 1 --which_model_preNet 2_layers \
    #     --norm batch --input_nc 78 --output_nc 78 --fineSize 64 --loadSize 64 --lambda_A 100 --align_data --use_dropout \
    #     --display_id 0 --niter 500 --niter_decay 1000 --batchSize 100 --conditional --save_epoch_freq 20 --display_freq 2 --rgb

    #=======================================
    # Consider input as tiling of input glyphs rather than a stack
    #=======================================
    # CUDA_VISIBLE_DEVICES=2 python ~/AdobeFontDropper/train.py --dataroot ../datasets/Capitals64/ --name "${experiment_dir}" \
    #     --model cGAN --which_model_netG resnet_6blocks --which_model_netD n_layers --n_layers_D 1 --which_model_preNet 2_layers \
    #     --norm batch --input_nc 1 --output_nc 1 --fineSize 64 --loadSize 64 --lambda_A 100 --align_data --use_dropout \
    #     --display_id 0 --niter 500 --niter_decay 2000 --batchSize 5 --conditional --save_epoch_freq 10 --display_freq 5 --print_freq 100 --flat

    opened by DanoDataScientist 1
  • Quick fix - setup error on 1 GPU machine

    If you only have one GPU, the setup steps give the following error:

    cuda runtime error (38) : no CUDA-capable device is detected

    Please change CUDA_ID=1 to CUDA_ID=0 in scripts/train_cGAN.sh

    (I can make a pull request for this if the author prefers)

    opened by mrmartin 1
  • Usecase question

    Hi. I don't have a deep understanding of all this, so please bear with me.

    I am working on a hobby project involving the ornate handwriting of a medieval manuscript. The manuscript is in Latin. There are no letter "j"s (i is used), no "k"s (it didn't exist), no "v"s (u is used), no "w"s (it didn't exist), and very few "y"s.

    Would MC-GAN be capable of doing either of the following tasks?

    1. Produce the letters that don't exist based on the letters that do.

    2. Produce multiple, unique instances of letters that are few in number. (The letters that exist in abundance slightly vary from one to the next, because this is handwriting. So an "e", for example, looks slightly different every time. I'm asking if MC-GAN could create more "y"s, for example, with each one slightly varying from the others, yet plausibly the product of the original scribe.)

    Thank you!

    opened by SB2020-eye 0
  • RuntimeError: DataLoader worker (pid(s) 18360) exited unexpectedly

    While running train_Stack.py I get the error below:

    Traceback (most recent call last):
      File "D:\Innovation day 2019\python\lib\site-packages\torch\utils\data\dataloader.py", line 724, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "D:\Innovation day 2019\python\lib\multiprocessing\queues.py", line 105, in get
        raise Empty
    _queue.Empty

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:/Users/kiku/FontTransfer/MC-GAN/train_Stack.py", line 44, in <module>
        for i, data in enumerate(dataset):
      File "C:\Users\kiku\FontTransfer\MC-GAN\data\data_loader.py", line 211, in __next__
        A, A_paths = next(self.data_loader_iter_A)
      File "D:\Innovation day 2019\python\lib\site-packages\torch\utils\data\dataloader.py", line 804, in __next__
        idx, data = self._get_data()
      File "D:\Innovation day 2019\python\lib\site-packages\torch\utils\data\dataloader.py", line 771, in _get_data
        success, data = self._try_get_data()
      File "D:\Innovation day 2019\python\lib\site-packages\torch\utils\data\dataloader.py", line 737, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
    RuntimeError: DataLoader worker (pid(s) 18360) exited unexpectedly

    Process finished with exit code 1

    opened by krypton404 0
  • KeyError: 'anie_i.0.0.png' when using my own test picture

    When I use my own test picture, I get this error:

    Traceback (most recent call last):
      File "test.py", line 34, in <module>
        for i, data in enumerate(dataset):
      File "/home/zhaojing/.conda/envs/py2.7/lib/python2.7/site-packages/future/types/newobject.py", line 53, in next
        return type(self).__next__(self)
      File "/home/zhaojing/ZT/MC-GAN-master1/data/data_loader.py", line 164, in next
        blank_ind = self.random_dict[file_name][0:int(self.blanks*A.size(1)/n_rgb)]
    KeyError: 'anie_i.0.0.png'

    But when I change the name to match yours, it runs.
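
    A possible workaround, assuming the loader looks the test file name up in the dict.pkl described earlier: add your own file name as a key. This is only a hypothetical sketch; the path is the Capitals64 one from the instructions, and the value should contain glyph indices in [0, 26), of which the loader slices a prefix.

    import pickle
    import numpy as np

    path = '../datasets/Capitals64/test_dict/dict.pkl'
    with open(path, 'rb') as f:
        d = pickle.load(f)
    # hypothetical entry for the new test image: a random permutation of [0, 26)
    d['anie_i.0.0.png'] = np.random.permutation(26)
    with open(path, 'wb') as f:
        pickle.dump(d, f)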

    opened by zhaojingzj 0
  • Error training with pretrained models.

    After running ./scripts/train_StackGAN.sh ft37_1

    RuntimeError: The size of tensor a (26) must match the size of tensor b (64) at non-singleton dimension 0

    ... I hackily fixed that by changing FINESIZE=64 LOADSIZE=64 to FINESIZE=26 LOADSIZE=26 in train_StackGAN.sh.

    ... Then the next error was that it couldn't find the files in /A/train/ and /B/train/: it was looking for .ft6_14.png when they were all named ft6_14.png, etc. (without the '.' prepended). So I re-uploaded them, because I couldn't find where that path was being set.

    Now I get a similar error to the one at the start:

    RuntimeError: The size of tensor a (3) must match the size of tensor b (26) at non-singleton dimension 0

    I've given up trying to do this, as it's taken my whole day with no results :(

    Here's where I've stopped https://colab.research.google.com/gist/Abul22/cf9a67e393118a1c30add68c38ac65c9/untitled0.ipynb

    If anyone else more capable than I could make a working colab (or help me find where I've gone wrong) -- That would be so amazingly great.

    Cheers

    opened by Abul22 4