# AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer
[Paper] [PyTorch Implementation] [Paddle Implementation]
## Overview
This repository contains the official PyTorch implementation of the paper:
AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer,
Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, Errui Ding
ICCV 2021
## Prerequisites
- Linux or macOS
- Python 3
- PyTorch 1.7+ and other dependencies (torchvision, visdom, dominate, and other common Python libraries)
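A minimal environment setup sketch (the exact package set is an assumption based on the list above; choose the torch build matching your CUDA version):

```shell
# Assumed dependency set; adjust versions and the CUDA build as needed.
pip install "torch>=1.7" torchvision visdom dominate
```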
## Getting Started
- Clone this repository:

  ```shell
  git clone https://github.com/Huage001/AdaAttN
  cd AdaAttN
  ```
- Inference:

  - Make a directory for checkpoints if it does not already exist:

    ```shell
    mkdir checkpoints
    ```

  - Download the pretrained model from Google Drive, move it to the checkpoints directory, and unzip:

    ```shell
    mv [Download Directory]/AdaAttN_model.zip checkpoints/
    unzip checkpoints/AdaAttN_model.zip
    rm checkpoints/AdaAttN_model.zip
    ```
  - Configure `content_path` and `style_path` in test_adaattn.sh first, pointing them to the folders of test content images and test style images respectively (a sketch of the resulting invocation follows these steps).
  - Then, simply run:

    ```shell
    bash test_adaattn.sh
    ```
  - Check the results under the results/AdaAttN folder.
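  For reference, here is a minimal sketch of what the test invocation configured by test_adaattn.sh might look like. The test.py entry point and flag names are assumptions borrowed from the pytorch-CycleGAN-and-pix2pix layout this repository builds on (see Acknowledgments); verify them against the actual script:

  ```shell
  # Hypothetical invocation; replace the placeholder paths with your own folders
  # and confirm the flag names against test_adaattn.sh before use.
  python test.py \
    --content_path /path/to/test/content/images \
    --style_path /path/to/test/style/images \
    --name AdaAttN
  ```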
- Train:
  - Download the COCO dataset and the WikiArt dataset, then extract them (see the download sketch after these steps).
  - Configure `content_path` and `style_path` in train_adaattn.sh, pointing them to the folders of training content images and training style images respectively (an example invocation is sketched after these steps).
  - Before training, start the visdom server:

    ```shell
    python -m visdom.server
    ```
  - Then, simply run:

    ```shell
    bash train_adaattn.sh
    ```
  - You can monitor the training status at http://localhost:8097/; models are saved under the checkpoints/AdaAttN folder.
  - Feel free to try the other training options provided in train_adaattn.sh.
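  As a dataset download sketch: the COCO 2014 train split URL below is the standard one from cocodataset.org, while WikiArt has no single official mirror, so the WikiArt step and all target folder names here are placeholders:

  ```shell
  # COCO 2014 training images (~13 GB) serve as content images.
  wget http://images.cocodataset.org/zips/train2014.zip
  unzip train2014.zip -d datasets/coco

  # WikiArt (style images) must be obtained manually, e.g. from a
  # wikiart.org-based release, then extracted to a folder such as:
  mkdir -p datasets/wikiart
  ```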
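  And a minimal sketch of how the paths and a few common options might be wired up inside train_adaattn.sh. The flag names follow the pytorch-CycleGAN-and-pix2pix option convention (see Acknowledgments) and are assumptions to be checked against the actual script:

  ```shell
  # Hypothetical invocation; confirm flag names against train_adaattn.sh.
  python train.py \
    --content_path datasets/coco/train2014 \
    --style_path datasets/wikiart \
    --name AdaAttN \
    --batch_size 8 \
    --gpu_ids 0
  ```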
## Citation

If you find the ideas or code useful for your research, please cite:

```bibtex
@inproceedings{liu2021adaattn,
  title={AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer},
  author={Liu, Songhua and Lin, Tianwei and He, Dongliang and Li, Fu and Wang, Meiling and Li, Xin and Sun, Zhengxing and Li, Qian and Ding, Errui},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}
```
## Acknowledgments
- This implementation is built on the code framework of pytorch-CycleGAN-and-pix2pix by Jun-Yan Zhu et al.