Language model Prompt And Query Archive

Related tags: Deep Learning, LPAQA
Overview

LPAQA: Language model Prompt And Query Archive

This repository contains the data and code for the paper "How Can We Know What Language Models Know?" (TACL 2020).

Install

Our repository is based on LAMA. Please download our fork (cloned in the commands below) and follow its instructions to set up the environment and download the pre-trained language models.

# clone LPAQA
git clone https://github.com/jzbjyb/LPAQA.git LPAQA
# clone our LAMA fork and pin it to the tested commit
git clone https://github.com/jzbjyb/LAMA.git LAMA
pushd LAMA && git reset --hard b6b1885c64de5981f249a8b65de25cb0802b4bd5 && rm -rf .git && popd
# merge the LAMA code into the LPAQA directory
mv LAMA/* LPAQA/ && rm -rf LAMA
# run the install script (see the LAMA instructions for details)
cd LPAQA
./setup.sh

Retrieve factual knowledge from LMs

For example, to query the owner of MSN (the answer is Microsoft), you can either use a manually created prompt ("x is owned by y"):

python lama/eval_generation.py --lm bert --t "MSN is owned by [MASK]."

(screenshot: output of the manual prompt)
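
Under the hood this is a standard masked-LM fill-in query. As a rough illustration only (a sketch, not the repository's code), the following uses the Hugging Face transformers library; the model name and top-k value are arbitrary choices:

import torch
from transformers import BertForMaskedLM, BertTokenizer

# Load a masked LM; "bert-base-cased" is an illustrative choice.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

# Ask the model to fill in the object of the relation.
inputs = tokenizer("MSN is owned by [MASK].", return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidates for the masked slot; "Microsoft" should rank highly.
top5 = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))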

or use LPAQA, which ensembles a diverse set of prompts:

# mined prompts
python lama/eval_ensemble.py --lm bert --subject MSN --relation P127 --prompts prompt/mine
# paraphrased prompts
python lama/eval_ensemble.py --lm bert --subject MSN --relation P127 --prompts prompt/paraphrase

(screenshot: output of the mined-prompt ensemble)
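
The ensembling idea is to query the LM with several prompts for the same subject-relation pair and combine the resulting [MASK] distributions. Below is a sketch of a uniform-average ensemble (again not the repository's code; the paraphrases for P127 are made up, the real ones live in prompt/mine and prompt/paraphrase, and the paper also explores weighted ensembles):

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

# Hypothetical prompts for P127 (owned-by).
prompts = [
    "[X] is owned by [MASK] .",
    "[MASK] owns [X] .",
    "[X] is a subsidiary of [MASK] .",
]

def mask_distribution(template, subject):
    # Fill the subject slot and return the distribution over the [MASK] slot.
    inputs = tokenizer(template.replace("[X]", subject), return_tensors="pt")
    pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, pos].softmax(-1)

# Uniform average of the per-prompt distributions.
avg = torch.stack([mask_distribution(t, "MSN") for t in prompts]).mean(0)
print(tokenizer.convert_ids_to_tokens(avg.topk(5).indices.tolist()))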

You might also like...
The Few-Shot Bot: Prompt-Based Learning for Dialogue Systems

Few-Shot Bot: Prompt-Based Learning for Dialogue Systems This repository includes the dataset, experiment results, and code for the paper: Few-Shot B

Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"

Prompt-Tuning Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" Currently, we support the following huggingface models: Bart

Codes for "Template-free Prompt Tuning for Few-shot NER".

EntLM The source codes for EntLM. Dependencies: Cuda 10.1, python 3.6.5 To install the required packages, run the following commands: $ pip3 install -r requ

[CVPR2022] Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos

Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos Created by Muheng Li, Lei Chen, Yueqi Duan, Zhilan Hu, Jianjiang Feng, Jie

I decided to sync up this repo and self-critical.pytorch. (The old master is in the old master branch for archive)

An Image Captioning codebase This is a codebase for image captioning research. It supports: Self critical training from Self-critical Sequence Trainin

Python library containing BART query generation and BERT-based Siamese models for neural retrieval.

Neural Retrieval Embedding-based Zero-shot Retrieval through Query Generation leverages query synthesis over large corpuses of unlabeled text (such as

The implementation of CVPR2021 paper Temporal Query Networks for Fine-grained Video Understanding, by Chuhan Zhang, Ankush Gupta and Andrew Zisserman.

Temporal Query Networks for Fine-grained Video Understanding 📋 This repository contains the implementation of CVPR2021 paper Temporal_Query_Networks

Generative Query Network (GQN) in PyTorch as described in "Neural Scene Representation and Rendering"

Update 2019/06/24: A model trained on 10% of the Shepard-Metzler dataset has been added; the following notebook explains the main features of this mod

Vector AI — A platform for building vector based applications. Encode, query and analyse data using vectors.

Vector AI is a framework designed to make the process of building production grade vector based applications as quickly and easily as possible. Create

Comments
  • ModuleNotFoundError: No module named 'knowledge_bert'

    from knowledge_bert import BertTokenizer, BertForMaskedLM, BasicTokenizer
    from knowledge_bert.tokenization import whitespace_tokenize_ent

    "knowledge_bert" is not in the https://github.com/facebookresearch/LAMA/tree/main/lama/modules. How can I get them?

    Thanks!

    opened by sev777 0
  • Manual templates

    Hi, thanks for sharing the code! I can see the templates in manual_paraphrase, but I don't know which ones are manual and which are paraphrased. Could you also share the manual templates only? Thanks

    opened by bosung 0
  • The provided weights are for which model

    Hi, neither the paper nor the readme specifies which LM these weights were trained for. By that I mean, do I have to use them with BERT-base or BERT-large? Moreover, are they for cased or uncased models?

    opened by theartpiece 0
  • Trouble running experiments

    I'm trying to run the experiments (run_exp.sh) and I'm running into a problem with the rel_file. I assume it's one of the mine/paraphrase prompt files, so for the --rel_file parameter I put something like "prompts/mine/P19.jsonl", but then I get an error saying the key 'relation' is expected. Do I need to add a relations key like in get_test_phrase_parameters? i.e. relations = [{"relation": "P108", "template": ["[X] works for [Y] .", "[Y] commentator [X] ."]}]

    More specifically, how do I recreate the micro- and macro-averaged accuracies of Tables 2 and 3 in the paper?

    opened by taylorshin 1
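
Based on the snippet in the comment above, the --rel_file argument appears to expect one JSON object per line, each with a "relation" and a "template" key. A minimal sketch of writing such a file (field names inferred from the comment, not verified against run_exp.sh):

import json

# Hypothetical rel_file contents; keys copied from the comment above.
relations = [
    {"relation": "P108", "template": ["[X] works for [Y] .", "[Y] commentator [X] ."]},
]
with open("my_rel_file.jsonl", "w") as f:
    for rel in relations:
        f.write(json.dumps(rel) + "\n")
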
Code for ACL 21: Generating Query Focused Summaries from Query-Free Resources

marge This repository releases the code for Generating Query Focused Summaries from Query-Free Resources. Please cite the following paper [bib] if you

Yumo Xu 28 Nov 10, 2022
EMNLP 2021 Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections Ruiqi Zhong, Kristy Lee*, Zheng Zhang*, Dan Klein EMN

Ruiqi Zhong 42 Nov 3, 2022
Learning to Prompt for Vision-Language Models.

CoOp Paper: Learning to Prompt for Vision-Language Models Authors: Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu CoOp (Context Optimization)

Kaiyang 679 Jan 4, 2023
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt. This is done by

Mehdi Cherti 135 Dec 30, 2022
Vision-Language Transformer and Query Generation for Referring Segmentation (ICCV 2021)

Vision-Language Transformer and Query Generation for Referring Segmentation Please consider citing our paper in your publications if the project helps

Henghui Ding 143 Dec 23, 2022
Code and datasets for the paper "KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction"

KnowPrompt Code and datasets for our paper "KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction" Requireme

ZJUNLP 137 Dec 31, 2022
a recurrent neural network that, when trained on a piece of text and fed a starting prompt, will write its own 250-character text using LSTM layers

RNN-Playwrite a recurrent neural network that, when trained on a piece of text and fed a starting prompt, will write its own 250-character text using LS

Arno Barton 1 Oct 29, 2021
The Power of Scale for Parameter-Efficient Prompt Tuning

The Power of Scale for Parameter-Efficient Prompt Tuning Implementation of soft embeddings from https://arxiv.org/abs/2104.08691v1 using Pytorch and H

Kip Parker 208 Dec 30, 2022
This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?”

This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” Usage To replicate our results in Secti

Albert Webson 64 Dec 11, 2022
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"

Sun Yi 201 Nov 21, 2022