[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias

Overview

Counterfactual VQA (CF-VQA)

This repository is the PyTorch implementation of our CVPR 2021 paper "Counterfactual VQA: A Cause-Effect Look at Language Bias". The code is implemented as a fork of RUBi.

CF-VQA is proposed to capture and mitigate language bias in VQA from the view of causality. CF-VQA (1) captures the language bias as the direct causal effect of questions on answers, and (2) reduces the language bias by subtracting the direct language effect from the total causal effect.
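
For intuition, here is a minimal sketch of this counterfactual inference rule; it is not the repository's exact code, and the function and argument names are purely illustrative:

    import torch

    def counterfactual_inference(z_fused: torch.Tensor, z_question: torch.Tensor) -> torch.Tensor:
        """Subtract the language-only (direct) effect from the total effect.

        z_fused:    answer logits from the full vision+question model (total effect)
        z_question: answer logits from the question-only branch (the language prior)
        Returns debiased scores used to rank answers at test time.
        """
        return z_fused - z_question

    # Usage sketch, matching the "logits_cfvqa = z_qkv - z_q" pattern
    # quoted in the issues further below:
    # scores = counterfactual_inference(z_qkv, z_q)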

If you find this paper helpful for your research, please consider citing it in your publications.

@inproceedings{niu2020counterfactual,
  title={Counterfactual VQA: A Cause-Effect Look at Language Bias},
  author={Niu, Yulei and Tang, Kaihua and Zhang, Hanwang and Lu, Zhiwu and Hua, Xian-Sheng and Wen, Ji-Rong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Summary

  • Installation
  • Quick start
  • Useful commands
  • Acknowledgment

Installation

1. Setup and dependencies

Install the Anaconda or Miniconda distribution (Python 3+) from their downloads site.

conda create --name cfvqa python=3.7
source activate cfvqa
pip install -r requirements.txt
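
Optionally, you can sanity-check that PyTorch installed correctly and can see your GPU before moving on (a quick check, not required by the repository):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"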

2. Download datasets

Download annotations, images and features for VQA experiments:

bash cfvqa/datasets/scripts/download_vqa2.sh
bash cfvqa/datasets/scripts/download_vqacp2.sh

Quick start

Train a model

The bootstrap/run.py file loads the options contained in a yaml file, creates the corresponding experiment directory, and starts the training procedure. For instance, you can train our best model on VQA-CP v2 (CFVQA+SUM+SMRL) by running:

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml

Then, several files will be created in logs/vqacp2/smrl_cfvqa_sum/:

  • options.yaml (copy of the options)
  • logs.txt (history of printed logs)
  • logs.json (batch and epoch statistics; see the snippet after this list)
  • _vq_val_oe.json (statistics for the language-prior-based strategy, e.g., RUBi)
  • _cfvqa_val_oe.json (statistics for CF-VQA)
  • _q_val_oe.json (statistics for the language-only branch)
  • _v_val_oe.json (statistics for the vision-only branch)
  • _all_val_oe.json (statistics for the ensemble branch)
  • ckpt_last_engine.pth.tar (checkpoints of last epoch)
  • ckpt_last_model.pth.tar
  • ckpt_last_optimizer.pth.tar
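
As mentioned in the list above, here is a minimal way to peek at the recorded statistics; it only assumes that logs.json is valid JSON, since the exact metric keys depend on the bootstrap.pytorch version:

    import json

    # Experiment directory from the training example above.
    with open('logs/vqacp2/smrl_cfvqa_sum/logs.json') as f:
        logs = json.load(f)

    # Inspect the structure before relying on specific metric names.
    if isinstance(logs, dict):
        print(sorted(logs.keys()))
    else:
        print(type(logs), len(logs))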

Many options are available in the options directory. cfvqa corresponds to the complete causal graph, while cfvqas corresponds to the simplified causal graph.
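
For example, to train the simplified-graph variant with the same SUM fusion, point to its options file (smrl_cfvqasimple_sum.yaml, the file referenced in the issues below):

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqasimple_sum.yaml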

Evaluate a model

There is no test set for VQA-CP v2, our main dataset, so the evaluation is done on the validation set. For a model trained on VQA v2, you can evaluate your model on the test set. In this example, bootstrap/run.py loads the options from your experiment directory, resumes the specified checkpoint (here, the last one), and starts an evaluation on the testing set instead of the validation set, while skipping the training set (train_split is empty). Thanks to --misc.logs_name, the logs will be written to new logs_test.txt and logs_test.json files instead of being appended to the logs.txt and logs.json files.

python -m bootstrap.run \
-o ./logs/vqacp2/smrl_cfvqa_sum/options.yaml \
--exp.resume last \
--dataset.train_split '' \
--dataset.eval_split val \
--misc.logs_name test

Useful commands

Use a specific GPU

For a specific experiment:

CUDA_VISIBLE_DEVICES=0 python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml

For the current terminal session:

export CUDA_VISIBLE_DEVICES=0

Overwrite an option

The bootstrap.pytorch framework makes it easy to overwrite a hyperparameter. In this example, we run an experiment with a non-default learning rate, so we also overwrite the experiment directory path:

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml \
--optimizer.lr 0.0003 \
--exp.dir logs/vqacp2/smrl_cfvqa_sum_lr,0.0003

Resume training

If a problem occurs, it is easy to resume training from the last epoch by specifying the options file from the experiment directory and overwriting the exp.resume option (default is None):

python -m bootstrap.run -o logs/vqacp2/smrl_cfvqa_sum/options.yaml \
--exp.resume last

Acknowledgment

Special thanks to the authors of RUBi, BLOCK, and bootstrap.pytorch, and to the authors of the datasets used in this research project.

Comments
  • Maybe something wrong in cfvqasimple.py

    Hello, thanks for sharing your code! I found a possible error in cfvqasimple.py. Shouldn't "out['logits_all'] = z_qkv # for optimization" be "out['logits_all'] = z_qk # for optimization"? Or did I misunderstand it?

    opened by Mike4Ellis 8
  • smrl_cfvqa_rubi is TOO slow to train

    I can train all the other variants with batch_size = 256, but for smrl_cfvqa_rubi I have to reduce it to 64 to prevent CUDA out-of-memory errors. Even then, training is too slow: it takes about a day per epoch with three 3090s.

    I wonder what makes smrl_cfvqa_rubi so different from the other versions, and whether it is normal for it to train this slowly, or whether I did something wrong.

    opened by Mike4Ellis 4
  • How to implement "only update c" when minimizing the KL divergence?

    Hello Yulei Niu. Thank you very much for your inspiring work; it has been a great inspiration to me. I have a question that I hope you can answer: in your paper, below Equation 17, there is the sentence "Only c is updated when minimizing L_kl", but I don't see how this is implemented in the code; it seems that L_kl is simply added to the overall loss.

    opened by Thinking-more 3
  • Requesting some clarification regarding the core idea

    Hi @yuleiniu, thank you for your great work!

    I have some questions related to the core idea; I hope answering them will make the paper clearer for me.

    1. In Equations 11, 12, and 13 you replace the learned embeddings with a learnable constant. How should this constant be interpreted? What does it imply?
    2. Why is this constant fixed across the whole dimension, by multiplying it with ones?
    3. Following this,

            z_qkv = self.fusion(logits, q_pred, v_pred, q_fact=True,  k_fact=True, v_fact=True) # te
            z_q = self.fusion(logits, q_pred, v_pred, q_fact=True,  k_fact=False, v_fact=False) # nie
            logits_cfvqa = z_qkv - z_q

       if we neglect the non-linearity (z = torch.log(torch.sigmoid(z) + eps)), (z_qkv - z_q) can be interpreted as (z_k + z_q + z_v) - (2C + z_q), which means we could rely on z_k + z_v from the beginning and remove the QA branch? I think I misunderstand something here. :D
    4. Is it possible to replace the constant with another real example, such as an augmented version of the input, or something like that? What do you think?

    Thanks in advance!

    opened by eslambakr 2
  • Enquiries on reproducing the results

    Hi,

    First of all, great work and a good paper!

    I just want to clarify a few things. I followed the readme and re-trained the following variants: (i) vqacp2 (smrl_baseline.yaml / smrl_cfvqa_sum.yaml / smrl_cfvqasimple_sum.yaml); (ii) vqa2 (smrl_baseline.yaml / smrl_cfvqa_sum.yaml / smrl_cfvqasimple_sum.yaml). However, the evaluation results were different from the results reported in the paper.

    1. Were the models originally trained for 22 epochs (as indicated in the YAML files)? If not, what is the recommended number of epochs to achieve results similar to the ones stated in the paper?
    2. The paper reports the model's performance on the VQA-CP v2 test set. However, the readme states that there is no test set for VQA-CP v2. Can I assume that the eval set and the test set are the same for VQA-CP v2?
    3. During training, the model reports results on logits_all, logits_vq, logits_cfvqa, logits_q and logits_v. How do I relate these results to the ones reported in the table?

    Thank you for your time.

    opened by lalithjets 2
  • Question about evaluation strategy

    Hi, recently I have been working on my own VQA project. In my case, the convergence speeds of the three categories (Y/N, Num., Other) are not the same, so the best result for each category may appear in three different epochs. I am a little confused about how to choose the best result for each category.

    So, in your work, how do you choose the best result for each category? Do you pick the highest result for each category across all epochs, or do you pick the single epoch with the highest All score and report all category results from that epoch only?

    opened by jediyoda36 2
  • How to get the accuracy on the overall test set and the accuracy for Y/N, Other and Number

    I ran the baseline with "python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_baseline.yaml" and obtained the attached logs.txt. However, I cannot reproduce the overall accuracy reported in your paper (about 38.46). How can I get the accuracy for the different question types (Y/N, Other and Number) from this log?

    Should I run more epochs or change some hyperparameters?

    opened by zpltys 2
  • ModuleNotFoundError: No module named 'block.external'

    Could you please tell me whether this error affects the experimental results?

    [I 2021-10-12 03:55:06] ...trap/engines/engine.py.126: Saving best checkpoint for strategy eval_epoch.accuracy_top1
    [I 2021-10-12 03:55:06] ...trap/engines/engine.py.420: Saving model...
    Traceback (most recent call last):
      File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/site-packages/block.bootstrap.pytorch-0.1.6-py3.7.egg/block/models/metrics/compute_oe_accuracy.py", line 8, in <module>
    ModuleNotFoundError: No module named 'block.external'
    [I 2021-10-12 03:55:07] ...trap/engines/engine.py.424: Saving optimizer...
    [I 2021-10-12 03:55:10] ...trap/engines/engine.py.428: Saving engine...
    [I 2021-10-12 03:55:10] ...trap/engines/engine.py.129: Saving last checkpoint
    [I 2021-10-12 03:55:10] ...trap/engines/engine.py.420: Saving model...
    [I 2021-10-12 03:55:11] ...trap/engines/engine.py.424: Saving optimizer...
    [I 2021-10-12 03:55:14] ...trap/engines/engine.py.428: Saving engine...
    [I 2021-10-12 03:55:14] ...trap/engines/engine.py.133: Ending training procedures
    
    good first issue 
    opened by ProbeTS 2
  • Questions about the core idea

    Hi @yuleiniu, thank you for your great work! I have two quick questions:

    1. It seems to me that the core idea is very similar to Tang's unbiased SGG (CVPR'20), in that both works aim to remove the harmful co-occurrence bias by subtracting the result obtained with certain data blocked out (the image modality / image patches). Is there any misunderstanding here?
    2. On the discussion of "good" and "bad" biases: it seems to me that the "bad" language bias can be removed by the proposed method, but the method also seems to remove the "good" ones. Beyond the Introduction, I did not find a detailed discussion or experimental proof of the main motivation of removing the bad biases while retaining the good ones. How do the good biases remain? Could you please elaborate on this?
    opened by coldmanck 2
  • Where is the "block" module?

    I found that files in the networks folder import the block module in their headers, e.g. "from block.models.networks.mlp import MLP". How do I get the block module?

    opened by CindyTing 1
  • TypeError: Object of type Tensor is not JSON serializable

    After setting up the environment and running the code, json.dump() in bootstrap/lib/logger.py raises "TypeError: Object of type Tensor is not JSON serializable". Has the author encountered this problem? Is it acceptable not to fix it? After commenting out logger.flush(), both the train and eval results are produced.

    opened by MyMiracles 0