When in Doubt: Improving Classification Performance with Alternating Normalization
Findings of EMNLP 2021
Cornell University, Facebook AI
Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi and Claire Cardie
arXiv: https://arxiv.org/abs/2109.13449
Environment settings
This project is tested under Python 3.6, PyTorch 1.5.0, and torchvision 0.6.0.
Preparation
- Download the data and put the data folder under . (the repository root):
  - Simulation data: the randomly constructed arrays are available here
  - DialogRE: project page
  - Ultra-fine entity typing: project page
  - ImageNet: project page
- For the tuning process, check how many CPUs you have first. This repository assumes the environment has at least 40 CPUs.
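One quick way to check the available CPU count before tuning (a minimal sketch; on Linux, `os.sched_getaffinity` reports the CPUs actually usable by the process, which can be fewer than the machine total):

```python
import os

# Total CPUs on the machine (may be None on exotic platforms).
n_cpus = os.cpu_count()

# On Linux, the CPUs this process is actually allowed to use.
if hasattr(os, "sched_getaffinity"):
    n_usable = len(os.sched_getaffinity(0))
else:
    n_usable = n_cpus

print(n_cpus, n_usable)
```

If `n_usable` is below 40, the tuning scripts may need a smaller worker pool.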
Simulation experiments
We have provided the randomly generated arrays produced by step 1 in the data
folder.
# step 1. generate random matrices
python experiments_simulation.py --step 1 \
--data-root <DATA_PATH>
# step 2. get results
python experiments_simulation.py --step 2 \
--data-root <DATA_PATH>
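For intuition, the alternating normalization at the heart of the method is a Sinkhorn-style iteration that alternately rescales columns and rows of a nonnegative prediction matrix. The sketch below is a simplified illustration of that iteration (not the paper's exact procedure, which also incorporates class priors and high-confidence anchor rows):

```python
import numpy as np

def alternating_normalize(P, n_iters=10):
    """Alternately normalize columns, then rows, of a nonnegative
    matrix so that each row ends up a probability distribution.
    Simplified Sinkhorn-style sketch for illustration only."""
    A = P.astype(float).copy()
    for _ in range(n_iters):
        A = A / A.sum(axis=0, keepdims=True)  # column step
        A = A / A.sum(axis=1, keepdims=True)  # row step
    return A

rng = np.random.default_rng(0)
P = rng.random((5, 3))        # 5 samples, 3 classes
Q = alternating_normalize(P)
print(Q.sum(axis=1))          # each row sums to 1 after the final row step
```

Because the last operation is a row normalization, every row of the output is a valid distribution regardless of the iteration count.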
Empirical experiments
Ultra-fine entity typing
# Denoise
python experiments_text.py \
--dataset ultrafine_entity_typing \
--data-root=<DATA_PATH> --model-type denoise
# multitask
python experiments_text.py \
--dataset ultrafine_entity_typing --data-root=<DATA_PATH>
Relation extraction
python experiments_text.py \
--dataset dialogue_re --data-root=<DATA_PATH>
ImageNet
# step 1: prepare the imagenet logits and targets for training and val set
python prepare_imagenet.py --out-dir <OUT_DIR> --data-root=<DATA_PATH>
# step 2: get results
python experiments_visual.py --dataset imagenet
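Step 1 presumably caches per-example logits and targets so step 2 can run without repeating inference. A minimal sketch of that kind of caching with NumPy; the file names and array shapes here are assumptions, not the script's actual layout:

```python
import os
import tempfile
import numpy as np

def save_split(out_dir, split, logits, targets):
    """Cache logits and targets for one split as .npy files
    (hypothetical layout; check prepare_imagenet.py for the real one)."""
    os.makedirs(out_dir, exist_ok=True)
    np.save(os.path.join(out_dir, f"{split}_logits.npy"), logits)
    np.save(os.path.join(out_dir, f"{split}_targets.npy"), targets)

out_dir = tempfile.mkdtemp()
logits = np.random.randn(8, 1000)             # stand-in for model outputs
targets = np.random.randint(0, 1000, size=8)  # stand-in for labels
save_split(out_dir, "val", logits, targets)

loaded = np.load(os.path.join(out_dir, "val_logits.npy"))
print(loaded.shape)
```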
License
This repo is released under the CC-BY-NC 4.0 license. See LICENSE for additional details.