Translation-equivariant Image Quantizer for Bi-directional Image-Text Generation
Woncheol Shin1, Gyubok Lee1, Jiyoung Lee1, Joonseok Lee2,3, Edward Choi1 | Paper
1KAIST, 2Google Research, 3Seoul National University
Abstract
Recently, vector-quantized image modeling has demonstrated impressive performance on generation tasks such as text-to-image generation. However, we discover that current image quantizers do not satisfy translation equivariance in the quantized space due to aliasing, degrading performance in downstream text-to-image and image-to-text generation, even in simple experimental setups. Instead of focusing on anti-aliasing, we take a direct approach to encourage translation equivariance in the quantized space. In particular, we explore a desirable property of image quantizers, called 'Translation Equivariance in the Quantized Space', and propose a simple but effective way to achieve translation equivariance by regularizing orthogonality in the codebook embedding vectors. Using this method, we improve accuracy by +22% in text-to-image generation and +26% in image-to-text generation, outperforming the VQGAN baseline.
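To make the core idea concrete, below is a minimal PyTorch sketch of what an orthogonality regularizer on the codebook embedding vectors could look like. This is not the repository's actual code (the training code is still TBU); the function name, codebook size, and regularization weight are illustrative assumptions only.

```python
# Minimal sketch (not the repo's actual implementation): an orthogonality
# penalty on the codebook embedding vectors, following the idea described
# in the abstract. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def orthogonality_loss(codebook: torch.Tensor) -> torch.Tensor:
    """Encourage mutual orthogonality among codebook vectors.

    codebook: (K, D) tensor holding K embedding vectors of dimension D.
    Returns a scalar penalty that vanishes when the Gram matrix of the
    L2-normalized vectors equals the identity.
    """
    e = F.normalize(codebook, dim=1)                    # (K, D), unit-norm rows
    gram = e @ e.t()                                    # (K, K) cosine similarities
    identity = torch.eye(codebook.size(0), device=codebook.device)
    return ((gram - identity) ** 2).sum() / codebook.size(0) ** 2


# Example usage: add the penalty to the usual VQ training objective.
codebook = torch.nn.Embedding(1024, 256)                # K=1024 codes, D=256 (illustrative)
reg_weight = 10.0                                       # hypothetical weight
loss_reg = reg_weight * orthogonality_loss(codebook.weight)
```

In this sketch, `loss_reg` would simply be added to the standard VQGAN losses during Stage 1 training; the exact weighting and normalization used in the paper may differ.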
Requirements
TBU
Download Dataset
TBU
Training TE-VQGAN (Stage 1)
TBU
Training Bi-directional Image-Text Generator (Stage 2)
TBU
Thanks to
The implementations of 'TE-VQGAN' and the 'Bi-directional Image-Text Generator' are based on VQGAN and DALLE-pytorch. Thanks to the authors of these works!
Citation
@misc{shin2021translationequivariant,
title={Translation-equivariant Image Quantizer for Bi-directional Image-Text Generation},
author={Woncheol Shin and Gyubok Lee and Jiyoung Lee and Joonseok Lee and Edward Choi},
year={2021},
eprint={2112.00384},
archivePrefix={arXiv},
primaryClass={cs.CV}
}