VQGAN-CLIP-Docker

About

Zero-Shot Text-to-Image Generation VQGAN+CLIP Dockerized

This is a stripped-down, minimal-dependency repository for running VQGAN+CLIP locally or in production.

For a Google Colab notebook, see the original repository.

Setup

Clone this repository and cd into it.

git clone https://github.com/kcosta42/VQGAN-CLIP-Docker.git
cd VQGAN-CLIP-Docker

Download a VQGAN model and put it in the ./models folder.

Dataset                   Link
ImageNet (f=16), 16384    vqgan_imagenet_f16_16384
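
Once downloaded, place the checkpoint (and its config file) in ./models. A minimal sketch; the file name below is an assumption based on the model name above, so match whatever you actually downloaded:

mkdir -p models
# move the downloaded checkpoint and its config file here, e.g.:
mv ~/Downloads/vqgan_imagenet_f16_16384.ckpt models/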

To run on a GPU, make sure you have CUDA installed on your system (tested with CUDA 11.1+).

  • 6 GB of VRAM is required to generate 256x256 images.
  • 11 GB of VRAM is required to generate 512x512 images.
  • 24 GB of VRAM is required to generate 1024x1024 images. (Untested)
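
To see how much VRAM your card has, a quick check via PyTorch's device properties (assumes the Python requirements below are already installed):

python3 -c "import torch; p = torch.cuda.get_device_properties(0); print(p.name, p.total_memory // 2**20, 'MiB')"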

Local

Install the Python requirements:

python3 -m pip install -r requirements.txt

To check whether you can run this on your GPU, the following command must print True.

python3 -c "import torch; print(torch.cuda.is_available());"

Docker

Make sure you have docker and docker-compose installed. nvidia-docker is needed if you want to run this on your GPU through Docker.
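
To verify that Docker can access your GPU before building, a common sanity check is to run nvidia-smi inside a CUDA base image (the image tag below is illustrative; pick one that matches your driver):

docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu20.04 nvidia-smi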

A Makefile is provided for ease of use.

make build  # Build the docker image

Usage

Two configuration files are provided: ./configs/local.json and ./configs/docker.json. They are ready to use, but you may want to edit them to fit your needs. Check the Configuration section to understand each field.

The resulting generations can be found in the ./outputs folder.

GPU

To run locally:

python3 -m scripts.generate -c ./configs/local.json

To run with Docker:

make generate

CPU

To run locally:

DEVICE=cpu python3 -m scripts.generate -c ./configs/local.json

To run with Docker:

make generate-cpu

Configuration

Argument              Type           Description
prompts               List[str]      Text prompts
image_prompts         List[FilePath] Image prompts / target image path
max_iterations        int            Number of iterations
save_freq             int            Save an image every save_freq iterations
size                  [int, int]     Image size (width, height)
init_image            FilePath       Initial image
init_noise            str            Initial noise image ['gradient', 'pixels']
init_weight           float          Initial weight
output_dir            FilePath       Path to output directory
models_dir            FilePath       Path to models cache directory
clip_model            FilePath       CLIP model path or name
vqgan_checkpoint      FilePath       VQGAN checkpoint path
vqgan_config          FilePath       VQGAN config path
noise_prompt_seeds    List[int]      Noise prompt seeds
noise_prompt_weights  List[float]    Noise prompt weights
step_size             float          Learning rate
cutn                  int            Number of cuts
cut_pow               float          Cut power
seed                  int            Seed (-1 for a random seed)
optimizer             str            Optimizer ['Adam', 'AdamW', 'Adagrad', 'Adamax', 'DiffGrad', 'AdamP', 'RAdam']
augments              List[str]      Enabled augments ['Ji', 'Sh', 'Gn', 'Pe', 'Ro', 'Af', 'Et', 'Ts', 'Cr', 'Er', 'Re']
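
For illustration, a complete configuration following this schema might look like the sketch below. Every value is an example rather than a shipped default, and the model paths assume the files placed in ./models during setup (extensions may differ on your machine):

{
    "prompts": ["a watercolor painting of a lighthouse at dusk"],
    "image_prompts": [],
    "max_iterations": 250,
    "save_freq": 50,
    "size": [256, 256],
    "init_image": "",
    "init_noise": "gradient",
    "init_weight": 0.0,
    "output_dir": "./outputs",
    "models_dir": "./models",
    "clip_model": "ViT-B/32",
    "vqgan_checkpoint": "./models/vqgan_imagenet_f16_16384.ckpt",
    "vqgan_config": "./models/vqgan_imagenet_f16_16384.yaml",
    "noise_prompt_seeds": [],
    "noise_prompt_weights": [],
    "step_size": 0.1,
    "cutn": 32,
    "cut_pow": 1.0,
    "seed": -1,
    "optimizer": "Adam",
    "augments": ["Af", "Pe", "Ji", "Er"]
}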

Acknowledgments

VQGAN+CLIP

Taming Transformers

CLIP

DALLE-PyTorch

Citations

@misc{unpublished2021clip,
    title  = {CLIP: Connecting Text and Images},
    author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
    year   = {2021}
}
@misc{esser2020taming,
    title  = {Taming Transformers for High-Resolution Image Synthesis},
    author = {Patrick Esser and Robin Rombach and Björn Ommer},
    year   = {2020},
    eprint = {2012.09841},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV}
}
@misc{ramesh2021zeroshot,
    title   = {Zero-Shot Text-to-Image Generation},
    author  = {Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
    year    = {2021},
    eprint  = {2102.12092},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
Comments
  • magma_queue_create_from_cuda_internal: Assertion `queue->dBarray__ != __null' failed

    There was a follow-on problem after solving #1, but I wanted to create a separate issue for this because it's likely unrelated. After running the below commands (as described in that previous issue):

    docker build -t kcosta42-vqgan-clip .
    docker run --gpus all -v $PWD/models:/models -v $PWD/configs:/configs -v $PWD/core:/app/core -v $PWD/scripts:/app/scripts -v $PWD/outputs:/outputs -e DEVICE=cuda -it kcosta42-vqgan-clip python -m scripts.generate -c /configs/docker.json
    

    it gets to iteration 2 and then crashes:

      1%|█                                                                                                                                      | 2/250 [00:01<04:08,  1.00s/it]
    python: /opt/conda/conda-bld/magma-cuda111_1605822518874/work/interface_cuda/interface.cpp:899: void magma_queue_create_from_cuda_internal(magma_device_t, cudaStream_t, cublasHandle_t, cusparseHandle_t, magma_queue**, const char*, const char*, int): Assertion `queue->dBarray__ != __null' failed.
    

My GPU is an RTX 2070 with 8 GB of memory. I haven't changed any config, so it's still set to the default 256x256 size. I'm still investigating this and will post an update here if I solve it.

    opened by josephrocca 1
  • Bump pillow from 8.2.0 to 8.3.2

    Bumps pillow from 8.2.0 to 8.3.2.

    Release notes

    Sourced from pillow's releases.

    8.3.2

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.2.html

    Security

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    Python 3.10 wheels

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    Fixed regressions

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.1.html

    Changes

    8.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    8.3.2 (2021-09-02)

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1 (2021-07-06)

    • Catch OSError when checking if fp is sys.stdout #5585 [radarhere]

    • Handle removing orientation from alternate types of EXIF data #5584 [radarhere]

    • Make Image.array take optional dtype argument #5572 [t-vi, radarhere]

    8.3.0 (2021-07-01)

    • Use snprintf instead of sprintf. CVE-2021-34552 #5567 [radarhere]

    • Limit TIFF strip size when saving with LibTIFF #5514 [kmilos]

    • Allow ICNS save on all operating systems #4526 [baletu, radarhere, newpanjing, hugovk]

    • De-zigzag JPEG's DQT when loading; deprecate convert_dict_qtables #4989 [gofr, radarhere]

    • Replaced xml.etree.ElementTree #5565 [radarhere]

    ... (truncated)

    Commits
    • 8013f13 8.3.2 version bump
    • 23c7ca8 Update CHANGES.rst
    • 8450366 Update release notes
    • a0afe89 Update test case
    • 9e08eb8 Raise ValueError if color specifier is too long
    • bd5cf7d FLI tests for Oss-fuzz crash.
    • 94a0cf1 Fix 6-byte OOB read in FliDecode
    • cece64f Add 8.3.2 (2021-09-02) [CI skip]
    • e422386 Add release notes for Pillow 8.3.2
    • 08dcbb8 Pillow 8.3.2 supports Python 3.10 [ci skip]
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • RuntimeError: Found no NVIDIA driver on your system.

    Running make generate gives:

    Creating vqgan-clip-docker_generate_run ... done
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/app/scripts/generate.py", line 28, in <module>
        std=[0.26862954, 0.26130258, 0.27577711], device=DEVICE)
      File "/app/core/utils/normalize.py", line 25, in __init__
        self.mean = torch.as_tensor(mean, dtype=dtype, device=device)[None, :, None, None]
      File "/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
    ERROR: 1
    

    I haven't really used docker-compose much, but it seems like the --gpus option needs to be passed?

    I solved it by manually creating the docker run command based on the compose file, and adding --gpus all:

    docker build -t kcosta42-vqgan-clip .
    docker run --gpus all -v $PWD/models:/models -v $PWD/configs:/configs -v $PWD/core:/app/core -v $PWD/scripts:/app/scripts -v $PWD/outputs:/outputs -e DEVICE=cuda -it kcosta42-vqgan-clip python -m scripts.generate -c /configs/docker.json
    
    opened by josephrocca 0
  • GPU

Try as I might, I keep getting False for

    python3 -c "import torch; print(torch.cuda.is_available());"

    I have re-installed CUDA and all my drivers, I have a compatible card. Can't seem to figure out what is wrong.

    python -c "import torch; print(torch.cuda.is_available());"

results in True

I am fairly new to all of this, so sorry if it's a simple solution.

    Cheers.

    opened by IHateMax 0
  • Multiple graphics cards

I have two NVIDIA GeForce RTX 2080 Ti graphics cards with 11 GB of VRAM each, but I can only render images at 490x490. Is there a way to utilize the VRAM of both cards to produce a bigger image?

    opened by ollietb 1
Owner
Kevin Costa
Machine Learning Engineer. Previously Student @ 42 Paris
FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

FuseDream This repo contains code for our paper (paper link): FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimizat

XCL 191 Dec 31, 2022
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

Nerdy Rodent 2.3k Jan 4, 2023
CLIP + VQGAN / PixelDraw

clipit Yet Another VQGAN-CLIP Codebase This started as a fork of @nerdyrodent's VQGAN-CLIP code which was based on the notebooks of @RiversHaveWings a

dribnet 276 Dec 12, 2022
Streamlit Tutorial (ex: stock price dashboard, cartoon-stylegan, vqgan-clip, stylemixing, styleclip, sefa)

Streamlit Tutorials Install pip install streamlit Run cd [directory] streamlit run app.py --server.address 0.0.0.0 --server.port [your port] # http:/

Jihye Back 30 Jan 6, 2023
Text2Art is an AI art generator powered with VQGAN + CLIP and CLIPDrawer models

Text2Art is an AI art generator powered with VQGAN + CLIP and CLIPDrawer models. You can easily generate all kind of art from drawing, painting, sketch, or even a specific artist style just using a text input. You can also specify the dimensions of the image. The process can take 3-20 mins and the results will be emailed to you.

Muhammad Fathy Rashad 643 Dec 30, 2022
An architecture that makes any doodle realistic, in any specified style, using VQGAN, CLIP and some basic embedding arithmetics.

Sketch Simulator An architecture that makes any doodle realistic, in any specified style, using VQGAN, CLIP and some basic embedding arithmetics. See

null 12 Dec 18, 2022
Making a music video with Wav2CLIP and VQGAN-CLIP

music2video Overview A repo for making a music video with Wav2CLIP and VQGAN-CLIP. The base code was derived from VQGAN-CLIP The CLIP embedding for au

Joel Jang | 장요엘 163 Dec 26, 2022
Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab

VQGAN-CLIP-Video Sample videos: cat.mp4, policeman.mp4, schoolboy.mp4, forsenBOG.mp4

null 23 Oct 26, 2022
Pytorch Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic

Pytorch Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic [Paper] [Colab is coming soon] Approach Example Usage To r

null 6 Dec 1, 2021
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP

CLIP-GEN [Simplified Chinese][English] This project implements the paper "CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP" in PyTorch on the Fire-Flyer II (萤火二号) cluster. CLIP-GEN is a Language-F

null 75 Dec 29, 2022
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars Fangzhou Hong*, Mingyuan Zhang*, Liang Pan, Zhongang Cai, Lei Yang

Fangzhou Hong 749 Jan 4, 2023
Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network)

Deep Daze Sample prompts: "mist over green hills", "shattered plates on the grass", "cosmic love and attention", "a time traveler in the crowd", "life during the plague", "meditative"

Phil Wang 4.4k Jan 3, 2023
A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN.

Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN! This repository wraps up his work so it is easily accessible to anyone who owns a GPU.

Phil Wang 2.3k Jan 9, 2023
A Jupyter notebook to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for a text-based guided image generation.

Eugenio Herrera 175 Dec 29, 2022
Codes for ACL-IJCNLP 2021 Paper "Zero-shot Fact Verification by Claim Generation"

Zero-shot-Fact-Verification-by-Claim-Generation This repository contains code and models for the paper: Zero-shot Fact Verification by Claim Generatio

Liangming Pan 47 Jan 1, 2023
ZeroGen: Efficient Zero-shot Learning via Dataset Generation

ZEROGEN This repository contains the code for our paper “ZeroGen: Efficient Zero

Jiacheng Ye 31 Dec 30, 2022
Image-generation-baseline - MUGE Text To Image Generation Baseline

MUGE Text To Image Generation Baseline Requirements and Installation More detail

null 23 Oct 17, 2022
SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model

SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model Edresson Casanova, Christopher Shulby, Eren Gölge, Nicolas Michael Müller, Frede

Edresson Casanova 92 Dec 9, 2022
Code repo for EMNLP21 paper "Zero-Shot Information Extraction as a Unified Text-to-Triple Translation"

Zero-Shot Information Extraction as a Unified Text-to-Triple Translation Source code repo for paper Zero-Shot Information Extraction as a Unified Text

cgraywang 88 Dec 31, 2022