Language Models Can See: Plugging Visual Controls in Text Generation

Overview


Authors: Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier

This repository contains code, models, and other related resources of our paper [Language Models Can See: Plugging Visual Controls in Text Generation].

If you are also interested in open-ended text generation and would like to see more details of our contrastive search decoding method, please refer to our SimCTG [paper] and [repo].

Replicate has provided a great web [demo] of MAGIC that is super easy to use and interact with. Check it out!


MAGIC


Catalogue:

  • 1. Introduction
  • 2. News
  • 3. Citation
  • 4. Environment Setup
  • 5. Zero-Shot Image Captioning
  • 6. Visually Grounded Story Generation
  • 7. Contact
  • 8. MAGIC Elsewhere


1. Introduction:

Generative language models (LMs) such as GPT-2/3 can be prompted to generate text with remarkable quality. While they are designed for text-prompted generation, it remains an open question how the generation process can be guided by modalities beyond text, such as images. In this work, we propose a training-free framework, called MAGIC (iMAge-Guided text generatIon with CLIP), for plugging visual controls into the generation process and enabling LMs to perform multimodal tasks (e.g., image captioning) in a zero-shot manner. MAGIC is a simple yet efficient plug-and-play framework that directly combines an off-the-shelf LM (i.e., GPT-2) and an image-text matching model (i.e., CLIP) for image-grounded text generation. During decoding, MAGIC influences the generation of the LM by introducing a CLIP-induced score, called the magic score, which regularizes the generated result to be semantically related to a given image while remaining coherent with the previously generated context. Notably, the proposed decoding scheme does not involve any gradient updates and is therefore computationally efficient. On the challenging task of zero-shot image captioning, MAGIC outperforms the state-of-the-art method by notable margins with a nearly 27 times decoding speedup. MAGIC is a flexible framework and is theoretically compatible with any text generation task that incorporates image grounding. In the experiments, we showcase that it is also capable of performing visually grounded story generation given both an image and a text prompt.
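
To make the decoding objective concrete, here is a minimal, self-contained sketch of a single magic search decoding step. This is our illustration rather than the repository's implementation: lm, tokenizer, and clip_sim are assumed stand-ins for a Hugging Face causal LM, its tokenizer, and a callable that returns CLIP image-text similarity scores for a list of candidate texts.

import torch
import torch.nn.functional as F

def magic_search_step(lm, tokenizer, clip_sim, input_ids, image,
                      k=45, alpha=0.1, beta=2.0):
    # 1) top-k next-token candidates from the language model
    out = lm(input_ids, output_hidden_states=True)
    probs = F.softmax(out.logits[0, -1], dim=-1)
    top_probs, top_ids = probs.topk(k)

    # normalized hidden states of the context, used for the degeneration penalty
    ctx_h = F.normalize(out.hidden_states[-1][0], dim=-1)      # [seq_len, dim]

    penalties, candidate_texts = [], []
    for cand in top_ids:
        cand_ids = torch.cat([input_ids, cand.view(1, 1)], dim=-1)
        cand_out = lm(cand_ids, output_hidden_states=True)
        cand_h = F.normalize(cand_out.hidden_states[-1][0, -1], dim=-1)
        penalties.append((ctx_h @ cand_h).max())               # max cosine similarity
        candidate_texts.append(tokenizer.decode(cand_ids[0]))

    # 2) magic score: softmax over the candidates' CLIP image-text similarities
    magic = F.softmax(clip_sim(image, candidate_texts), dim=-1)

    # 3) combine model confidence, degeneration penalty, and image grounding
    scores = (1 - alpha) * top_probs - alpha * torch.stack(penalties) + beta * magic
    next_token = top_ids[scores.argmax()]
    return torch.cat([input_ids, next_token.view(1, 1)], dim=-1)

The actual implementation in this repo batches these computations for speed; the sketch only mirrors the scoring rule.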


2. News:

  • [2022/05/06] MAGIC is publicly released!

3. Citation:

If you find our paper and resources useful, please kindly leave a star and cite our papers. Thanks!

@article{DBLP:journals/corr/abs-2205-02655,
  author    = {Yixuan Su and
               Tian Lan and
               Yahui Liu and
               Fangyu Liu and
               Dani Yogatama and
               Yan Wang and
               Lingpeng Kong and
               Nigel Collier},
  title     = {Language Models Can See: Plugging Visual Controls in Text Generation},
  journal   = {CoRR},
  volume    = {abs/2205.02655},
  year      = {2022},
  url       = {https://doi.org/10.48550/arXiv.2205.02655},
  doi       = {10.48550/arXiv.2205.02655},
  eprinttype = {arXiv},
  eprint    = {2205.02655},
  timestamp = {Wed, 11 May 2022 17:29:40 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2205-02655.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

@article{DBLP:journals/corr/abs-2202-06417,
  author    = {Yixuan Su and
               Tian Lan and
               Yan Wang and
               Dani Yogatama and
               Lingpeng Kong and
               Nigel Collier},
  title     = {A Contrastive Framework for Neural Text Generation},
  journal   = {CoRR},
  volume    = {abs/2202.06417},
  year      = {2022},
  url       = {https://arxiv.org/abs/2202.06417},
  eprinttype = {arXiv},
  eprint    = {2202.06417},
  timestamp = {Fri, 18 Feb 2022 12:23:53 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2202-06417.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

4. Environment Setup:

python version: 3.8
pip3 install -r requirements.txt
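
After installation, a quick sanity check can confirm the core dependencies are importable (torch and transformers are both used throughout the examples below):

import torch, transformers
print(torch.__version__, transformers.__version__)
print('CUDA available:', torch.cuda.is_available())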

5. Zero-Shot Image Captioning:

5.1. Implementation of Experiments:

To ensure the reproducibility of our work, we provide all related resources for implementing our experiments on the task of zero-shot image captioning. Please see [here] for more details.

5.2. Example Usage of Magic Search:

In the following, we illustrate how to perform zero-shot image captioning with magic search. Specifically, we show how to generate the results as shown in our case study in the paper.

Open In Colab

5.2.1. Load Language Model:

We first load the language model as:

import sys
sys.path.append(r'./image_captioning/language_model/')
from simctg import SimCTG
language_model_name = r'cambridgeltl/magic_mscoco'
sos_token, pad_token = r'<-start_of_text->', r'<-pad->'
generation_model = SimCTG(language_model_name, sos_token, pad_token)
generation_model.eval()

5.2.2. Load CLIP:

Then, we load the CLIP model as:

import sys
sys.path.append(r'./image_captioning/clip/')
from clip import CLIP
model_name = "openai/clip-vit-base-patch32"
clip = CLIP(model_name)
clip.eval()

5.2.3. Prepare Start Token:

Note that the language model always starts generation with a start-of-sentence token. Here, we prepare the input ids of that token.

import torch
sos_token = r'<-start_of_text->'
start_token = generation_model.tokenizer.tokenize(sos_token)
start_token_id = generation_model.tokenizer.convert_tokens_to_ids(start_token)
input_ids = torch.LongTensor(start_token_id).view(1,-1)

5.2.4. Load Image:

To generate the caption of an image, we first load the image as:

from PIL import Image             # to load images
from IPython.display import display # to display images
image_name_list = ['COCO_val2014_000000336777.jpg', 'COCO_val2014_000000182784.jpg', 'COCO_val2014_000000299319.jpg', 'COCO_val2014_000000516750.jpg',
                   'COCO_val2014_000000207151.jpg', 'COCO_val2014_000000078707.jpg', 'COCO_val2014_000000027440.jpg', 'COCO_val2014_000000033645.jpg',
                   'COCO_val2014_000000348905.jpg', 'COCO_val2014_000000545385.jpg', 'COCO_val2014_000000210032.jpg', 'COCO_val2014_000000577526.jpg']
index = 1 
'''
   you can easily reproduce all results shown in our case study (index from 0 to 3) 
   and the results in the appendix (index from 4 to 11).
'''

image_path = r'./image_captioning/example_images/' + image_name_list[index]
image_instance = Image.open(image_path)
display(image_instance)
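
As an aside, if you would like to inspect the raw image-text matching signal that magic search builds on, here is a small self-contained sketch that scores a few made-up candidate captions against the loaded image using the Hugging Face transformers CLIP classes directly (independent of the CLIP wrapper loaded above):

from transformers import CLIPModel, CLIPProcessor
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
candidate_captions = ["a cow standing in a street", "a plate of food on a table"]
inputs = clip_processor(text=candidate_captions, images=image_instance,
                        return_tensors="pt", padding=True)
logits = clip_model(**inputs).logits_per_image   # shape: [1, num_captions]
print(logits.softmax(dim=-1))                    # higher = better image-text match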

5.2.5. Zero-Shot Image Captioning with Magic Search:

Now, let's generate the image caption with magic search!

'''
   setup the configurations of magic search
      k: the number of top-k candidates considered at each decoding step
      alpha: the weight of the degeneration penalty
      beta: the weight of the CLIP-induced magic score
      decoding_len: the number of tokens to generate
'''
k, alpha, beta, decoding_len = 45, 0.1, 2.0, 16
eos_token = '<|endoftext|>'
# the last argument (60) is clip_text_max_len, the maximum text length scored by CLIP
output = generation_model.magic_search(input_ids, k,
        alpha, decoding_len, beta, image_instance, clip, 60)
print (output)
'''
   A large cow standing in a street stall.
'''
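
To caption all of the example images in one go, a simple loop over image_name_list works (our addition, reusing the same hyperparameters and call as above):

for name in image_name_list:
    image_instance = Image.open(r'./image_captioning/example_images/' + name)
    output = generation_model.magic_search(input_ids, k,
            alpha, decoding_len, beta, image_instance, clip, 60)
    print(name, '->', output)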

5.2.6. Reproduce Our Results in the Paper:

If you would like to reproduce all the results shown in the case study and appendix of our paper, you can run this demo file as

python image_caption_demo.py

6. Visually Grounded Story Generation:

6.1. Implementation of Experiments:

To ensure the reproducibility of our work, we provide all related resources for implementing our experiments on the task of visually grounded story generation. Please see [here] for more details.

6.2. Example Usage of Magic Search:

In the following, we illustrate how to perform visually grounded story generation with magic search. Specifically, we show how to generate the results as shown in our case study in the paper.

Open In Colab

6.2.1. Load Language Model:

We first load the language model and prepare the story title as:

import sys
sys.path.append(r'./story_generation/language_model')
from transformers import AutoTokenizer
from simctg import SimCTG
language_model_name = r'cambridgeltl/simctg_rocstories'
tokenizer = AutoTokenizer.from_pretrained(language_model_name)
generation_model = SimCTG(language_model_name, tokenizer.pad_token_id)
generation_model.eval()

import torch
title = 'Ice Cream Tasting <|endoftext|>'
title_tokens = tokenizer.tokenize(title)
title_id_list = tokenizer.convert_tokens_to_ids(title_tokens)
title_ids = torch.LongTensor(title_id_list).view(1,-1)

6.2.2. Load CLIP:

Then, we load the CLIP model as:

import sys
sys.path.append(r'./story_generation/clip')
from clip import CLIP
model_name = "openai/clip-vit-base-patch32"
clip = CLIP(model_name)
clip.eval()

6.2.3. Get the Related Image:

Next, let's get the images that are related to the story title. We provide two ways of doing so, as shown below:

6.2.3.1. Retrieve from Image Index:

The first way is to retrieve the images from a constructed image index. Before running the following commands, please make sure you have built the image index from scratch as described [here] or downloaded our provided image index as described [here].

After the image index is ready, we can load it as

# load the pre-built image index
import sys
sys.path.append(r'./story_generation/image_index')
from imageindex import ImageIndex
index_path = r'./story_generation/data/image_index/images_index_data/index_matrix.txt'
mapping_dict_path = r'./story_generation/data/image_index/images_index_data/mapping_dict.json'
image_folder_prefix_path = r'./story_generation/data/image_index/images/'
index = ImageIndex(index_path, mapping_dict_path, image_folder_prefix_path, clip)

Then, we can retrieve the top-1 image as

image_name_list, image_instance_list = index.search_image(title, top_k=1)
'''
   image_name_list: the list of names of the retrieved images
   image_instance_list: the list of images that we retrieve
'''
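
Under the hood, retrieval of this kind boils down to encoding the title with CLIP's text encoder and taking the most similar precomputed image embeddings from the index. A minimal sketch of the idea, using plain tensors rather than the ImageIndex class:

import torch.nn.functional as F

def retrieve_top_k(text_emb, image_embs, image_names, top_k=1):
    # cosine similarity between the query text and every indexed image
    text_emb = F.normalize(text_emb, dim=-1)        # [dim]
    image_embs = F.normalize(image_embs, dim=-1)    # [num_images, dim]
    sims = image_embs @ text_emb                    # [num_images]
    best = sims.topk(top_k).indices
    return [image_names[i] for i in best.tolist()]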

Let's take a look at the image we retrieved:

from IPython.display import display
# display the top-1 image
display(image_instance_list[0])

6.2.3.2. Directly Load Image:

Alternatively, if you have not prepared the image index, we have provided the example image in the repo. You can directly load it as

from PIL import Image
image_name_list = ['avopix-284658167.jpg']
image_instance_list = []
for name in image_name_list:
    image_path = r'./story_generation/example_images/' + name
    image_instance = Image.open(image_path)
    image_instance_list.append(image_instance)

6.2.4. Visually Grounded Story Generation with Magic Search:

[Note] Recall that, in this example, our story title is 'Ice Cream Tasting <|endoftext|>'.

Now, let's generate the story conditioned on the retrieved image

from IPython.display import display
k, alpha, beta, decoding_len = 5, 0.6, 0.15, 100
'''
   k: the number of top-k candidates considered at each decoding step
   alpha: the weight of the degeneration penalty
   beta: the weight of the CLIP-induced magic score
'''
image_instance = image_instance_list[0]
eos_token = r'<|endoftext|>'
# the 60 is clip_text_max_len, the maximum text length scored by CLIP
output, _ = generation_model.magic_search(title_ids, k, alpha, decoding_len, beta, image_instance,
        clip, 60, eos_token)
_, generated_story = generation_model.parse_generated_result(output, num_of_sentences_to_keep=5)
print (generated_story)
display(image_instance)
'''
   My family went to a ice cream shop. They ordered three flavors of ice cream. The first one was 
   strawberry, the second was chocolate, and the third was orange. I was excited to try all three 
   flavors. It was very good and I had a great time at the ice cream shop.
'''

Then, let's see what we can get using the vanilla contrastive search without the image grounding.

k, alpha, decoding_len = 5, 0.6, 100
'''
   k: the number of top-k candidates considered at each decoding step
   alpha: the weight of the degeneration penalty
'''
eos_token = r'<|endoftext|>'
output, _ = generation_model.fast_contrastive_search(title_ids, k, alpha, decoding_len, eos_token)
_, generated_story = generation_model.parse_generated_result(output, num_of_sentences_to_keep=5)
print (generated_story)
'''
   My family went to a ice cream shop. We ordered the Ice Cream Truck. It was delicious. The customer 
   service was terrible. We had to leave for another day.
'''
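
Note that contrastive search is the special case of magic search with beta = 0: dropping the CLIP term leaves only the model confidence and the degeneration penalty. You can check this directly (our illustration; the result should closely match the contrastive search output above):

# beta = 0.0 removes the image-grounding term from the decoding objective
output, _ = generation_model.magic_search(title_ids, k, alpha, decoding_len,
        0.0, image_instance, clip, 60, eos_token)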

6.2.5. Reproduce Our Results in the Paper:

If you would like to reproduce all the results shown in the case study and appendix of our paper, you can run this demo file as

python story_generation_demo.py

7. Contact

If you have any questions, feel free to contact me via (ys484 at cam.ac.uk).


8. MAGIC Elsewhere

We thank the community for its efforts in extending MAGIC!

  • Replicate has provided a great [demo] of MAGIC that is super easy to use. Thanks for the effort!