PromptSource

Overview

Toolkit for collecting and applying templates of prompting instances. This project is a work in progress (WIP).

Setup

  1. Clone or download the repo
  2. Navigate to the root directory of the repo
  3. Install requirements with pip install -r requirements.txt

Running

From the root directory of the repo, you can launch the editor with

streamlit run promptsource/promptsource.py

Writing Templates

A prompt template is expressed in Jinja.

It is rendered using an example from the corresponding Hugging Face datasets library (a dictionary). The separator ||| should appear once to divide the template into prompt and output. Generally, the prompt should provide information on the desired behavior, e.g., text passage and instructions, and the output should be a desired response.

Here's an example for AG News:

{{text}}
Is this a piece of news regarding world politics, sports, business, or technology? |||
{{ ["World politics", "Sport", "Business", "Technology"][label] }}

Contributing

This is very much a work in progress, and help is needed and appreciated. Anyone wishing to contribute code can contact Steve Bach for commit access, or submit PRs from forks. Some particular places you could start:

  1. Try to express things! Explore a dataset and tell us what's hard about creating the templates you want
  2. Look in the literature. Are there prompt creation methods that do/do not fit well right now?
  3. Scalability testing. Streamlit is lightweight, and we're reading and writing all prompts on refresh.

See also the design doc.

Before submitting a PR or pushing a new commit, please run the style formatting and quality checks so that your newly added files look nice:

make style
make quality

Known Issues

Warning or Error about Darwin on OS X: Try downgrading PyArrow to 3.0.0 (pip install pyarrow==3.0.0).

Comments
  • Bias/fairness quantitative measurements


    Two goals:

    • have quantitative measurements for the paper's Broader impact section
    • reflect these in the model cards when we release checkpoints

    4 held out evaluation sets identified by the Evaluation WG:

    • jigsaw_toxicity_pred
    • crows_pairs
    • winogender (AXG in SuperGLUE)
    • winobias

    As it stands, crows_pairs and winobias are prompted, jigsaw_toxicity_pred has an open PR (#451) that we need to check, and winogender still needs to be prompted.

    Workflow:

    • [x] prompt those that were not prompted yet
    • [x] make sure these were actually cached (@VictorSanh can have this caching step done fairly quickly)
    • [x] run evaluation (normal and score-rank evaluation), or check whether they need some special evaluation
    • [x] once we know exactly which checkpoints to eval, get the final numbers to report
    opened by VictorSanh 22
  • remove language restrictions in tydiqa + add arabic prompts


    • [x] Remove the if statement to allow English prompts to work across the Dataset.
    • [x] Add Arabic prompts for primary task subset with if statement to include only Arabic set.
    • [x] Add Arabic prompts for secondary task subset.
    opened by KhalidAlt 21
  • Tracking trainings and evals


    Main runs

    • D4 only (finetune-t5-xxl-lm-d4-091621-512)
      • [x] Training
      • [x] Eval
        • [x] Last
          • [x] Normal
          • [x] Rank
        • [x] 1'112'200 (half fine-tuning) (cf #456)
          • [x] Normal
          • [x] Rank
        • [x] 1'124'700 (second to last checkpoint)
          • [x] Normal
          • [x] Rank
      • [ ] SuperGLUE test (1380485) - RUNNING
      • [x] BigBench - see #469
    • D4 + GPT (finetune-t5-xxl-lm-d4-gpt-091621/)
      • [x] Training
      • [x] Eval
        • [x] Last
          • [x] Normal
          • [x] Rank
        • [x] 1'112'200 (half fine-tuning)
          • [x] Normal
          • [x] Rank
      • [x] BigBench - see #469 - RUNNING
    • D4 + GPT + Sglue (finetune-t5-xxl-lm-d4-all-091621)
      • [X] Training
      • [X] Eval
        • [x] Last
          • [x] Normal
          • [x] Rank
        • [x] 1'112'000 (half fine-tuning)
          • [x] Normal
          • [x] Rank
      • [x] BigBench - see #469 - RUNNING

    Ablations

    • Nb Template - 1 OG Template (finetune-t5-xxl-lm-d4-og-091621)
      • [X] Training
      • [X] Eval
        • [x] Last
          • [x] Normal
          • [x] Rank
        • [x] 1'112'000 (half fine-tuning)
          • [x] Normal
          • [x] Rank
    • all OG Templates (finetune-t5-xxl-lm-d4-og-all-091621) #465
      • [x] Training - RUNNING
      • [ ] Eval
        • [ ] Last
          • [ ] Normal
          • [ ] Rank
        • [x] 1'112'000 (half fine-tuning)
          • [ ] Normal
          • [x] Rank
    • Size XL (finetune-t5-xl-lm-d4-091621)
      • [X] Training
      • [X] Eval
        • [x] Last
          • [x] Normal
          • [x] Rank
        • [x] 1'112'000 (half fine-tuning)
          • [x] Normal
          • [x] Rank
    • D4 only duplicate (finetune-t5-xxl-lm-d4-091621/)
      • [x] Training
      • [ ] Eval
        • [ ] Last
          • [ ] Normal
          • [ ] Rank
        • [X] 1'112'000 (half fine-tuning)
          • [X] Normal
          • [X] Rank

    Baseline

    • T5 zero-shot baseline
      • [x] Eval - RUNNING under finetune-t5-xxl-lm-d4-091621
        • [x] Normal 1384362 - RUNNING
        • [x] Rank 1384360 - RUNNING
    opened by VictorSanh 18
  • Special eval metrics and scripts


    Since we have the string outputs of all tasks, in principle we should be able to run arbitrary metrics, especially for datasets that require fancy metrics. @lintangsutawika has imported the official eval scripts for ReCoRD, SQuAD v2, Natural Questions, TriviaQA, and DROP.

    Update: Even when using Lintang's eval scripts, all extractive QAs and closed-book (generative) QAs still have abnormally low numbers, namely:

    • [ ] ReCoRD
    • [x] SQuAD2 (contains unanswerables)
    • [x] DROP
    • [ ] CoQA (contains unanswerables, multiple questions per example)
    • [ ] Quac (contains unanswerables, multiple questions per example)
    • [ ] Natural Questions
    • [x] TriviaQA
    • [x] WebQuestions

    Also, I think all evals of extractive QA from the training mixture failed as well.

    (Note that ARC is closed-book, but its performance is fine because it's multiple-choice. A great case in point that machine task categories depend far more on format than on human skill/knowledge.)
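For the extractive QA sets above, much of the "fancy" metric logic is just answer normalization before computing exact match / F1. As a reference point, here is a sketch of the normalization used by the official SQuAD eval script (reimplemented here from memory, not imported from it):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> bool:
    """Exact match after normalization, as in SQuAD-style evaluation."""
    return normalize_answer(prediction) == normalize_answer(gold)
```

Without this normalization, a generative model's "The Eiffel Tower." fails exact match against the gold "eiffel tower", which is one plausible source of the abnormally low numbers.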

    Others with issues to keep an eye on:

    • [x] HellaSwag
    • [x] Lambada
    • [x] Winogrande
    evaluations 
    opened by awebson 14
  • Fix rendering: use simple `st.text` instead of `st.markdown`


    The rendering is messed up in a bunch of places. A few issues that are symptomatic: #355 #326

    Replace st.markdown with st.text and test extensively that the rendering is better.

    opened by VictorSanh 13
  • Templates for `ncbi_disease`


    This one compared to the others has been a real pain, but I think the templates are quite interesting. Not sure the Jinja style is top notch, but it seems functional on the example sets I checked.

    opened by drugilsberg 12
  • Creating unique identifier in the template.yaml


    For now, it looks like we can more or less uniquely identify each template using the combination of template name and dataset name, but I'm expecting potential collisions once a lot of people start contributing. Besides, naming each template might not be useful (e.g., if we end up with names like template1, template2, etc.), and it would help contributors if they didn't have to pick a name and check for naming conflicts before merging their template.yaml.

    I was thinking that we could add an ID to each entry by hashing the timestamp + dataset name + the string of the prompt Python function or Jinja template. That should be more than enough to prevent collisions.
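A sketch of that proposal (the function name and exact scheme are hypothetical, not the promptsource implementation):

```python
import hashlib
import time

def template_id(dataset_name: str, template_body: str) -> str:
    """Hypothetical ID: hash of timestamp + dataset name + template text."""
    payload = f"{time.time_ns()}|{dataset_name}|{template_body}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

# Even identical templates get distinct IDs, because the timestamp
# component differs between calls.
a = template_id("ag_news", "{{text}} ||| {{label}}")
b = template_id("ag_news", "{{text}} ||| {{label}}")
```

A random UUID (uuid.uuid4()) would avoid collisions just as well without needing the timestamp component.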

    enhancement 
    opened by arnaudstiegler 12
  • Add prompts for DiaBLa dataset


    Added a range of prompts for different tasks:

    • sentence-level translation
    • analogy-based translation
    • contextual translation (with context in different languages, automatically vs. manually translated)
    • detection of errors in translations
    • identification of machine translated output compared to human-produced translations
    opened by rbawden 11
  • AttributeError: 'NoneType' object has no attribute 'session_id'


    Hi,

    Is anyone facing this issue while running streamlit run promptsource/app.py ?

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\multiprocessing\spawn.py", line 114, in _main
        prepare(preparation_data)
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="__mp_main__")
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "C:\Users\I355109\Anaconda3\envs\python37\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\I355109\promptsource\promptsource\app.py", line 59, in <module>
        state = _get_state()
      File "c:\users\i355109\promptsource\promptsource\session.py", line 84, in _get_state
        session = _get_session()
      File "c:\users\i355109\promptsource\promptsource\session.py", line 74, in _get_session
        session_id = get_report_ctx().session_id
    AttributeError: 'NoneType' object has no attribute 'session_id'
    
    opened by manandey 11
  • Add conll2003 ner,pos,chunk task.


    Prompt Description:

    1. flat_question_with_label: Regular task. Labels are normalized labels in the case of POS tagging.
    2. flat_question_with_random_label: Users will not always provide labels taken strictly from the dataset; they may provide only a subset of labels, so here we provide a subset. If a gold label in the sample is not available in the subset, we replace the gold label with "O". When choosing random tags, we always include the "O" tag for NER, POS, and chunk labels.
    3. flat_question_without_label: Regular task. No labels are provided.

    POS label Normalization

    Both the NER and chunk tasks contain "O" tags, but POS doesn't contain an "O" tag.

    For parts-of-speech tags, there are a few labels that read oddly in natural language. For example, see a prompt with all POS labels:

    Generate parts of speech from the following sentence. The parts of speech tags are ", '', #, $, (, ), ,, ., :, ``, CC, CD, DT, EX, FW, IN, JJ, JJR, JJS, LS, MD, NN, NNP, NNPS, NNS, NN|SYM, PDT, POS, PRP, PRP$, RB, RBR, RBS, RP, SYM, TO, UH, VB, VBD, VBG, VBN, VBP, VBZ, WDT, WP, WP$, WRB
    

    Here the first 9 labels are normalized to the "O" tag.
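That normalization can be sketched as follows (the tag set is the first 9 entries of the POS list quoted above; the helper name is mine, not the PR's code):

```python
# The first 9 tags in the conll2003 POS tag list are punctuation-like
# labels that read oddly in a natural-language prompt.
PUNCT_TAGS = {'"', "''", "#", "$", "(", ")", ",", ".", ":"}

def normalize_pos(tag: str) -> str:
    """Map punctuation-like POS tags to "O"; keep real word classes."""
    return "O" if tag in PUNCT_TAGS else tag
```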

    Earlier Pull

    Earlier, zip was not available, so I wrote brute-force code with O(n^2) complexity. Now that zip is available, I rewrote the code with simpler notation and a single loop (O(n) complexity). I messed up the merge in the previous pull https://github.com/bigscience-workshop/promptsource/pull/170, so I closed that one and created this new pull.
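The zip-based rewrite boils down to something like this (illustrative data, not the actual template code):

```python
# A conll2003-style example: parallel lists of tokens and tags.
tokens = ["EU", "rejects", "German", "call"]
ner_tags = ["B-ORG", "O", "B-MISC", "O"]

# With zip, pairing each token with its tag is a single O(n) pass,
# instead of the earlier O(n^2) index-matching approach.
pairs = [f"{tok}: {tag}" for tok, tag in zip(tokens, ner_tags)]
answer = ", ".join(pairs)
```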

    opened by sbmaruf 11
  • Some tweaks to the Editor


    • Default sort by popularity and include number of prompts


    • Global progress table


    • Separated out new prompts and select, and the columns (I think this is okay, but might break)


    • Added text wrapping so that you can view long prompts

    • Added a template viewer section


    opened by srush 11
  • Merging xP3 & eval-hackathon


    I would like to get all xP3 prompts merged into eval-hackathon & then have eval-hackathon be merged into main once & for all, but I'm not sure if you want that? cc @VictorSanh @stephenbach

    It would include adding:

    • Normal English prompts for various new & existing datasets (many PRs already open; I will clean them up & request reviews when ready if someone gives me the rights to do so, or merge them myself if I can & tests pass)
    • Long prompts https://github.com/bigscience-workshop/promptsource/pull/823/files ; Would request reviews once ready
    • Human-translated & Machine-translated prompts (Always suffixed with ht or mt, roughly 20K lines; see https://github.com/Muennighoff/promptsource/pull/47/files)

    Lmk if you're okay with that & I'll get started 🙂

    opened by Muennighoff 0
  • Regenerate all templates in unicode


    @KhalidAlt made a great suggestion to change

    yaml.dump(self.format_for_dump(), open(self.yaml_path, "w"))
    

    to

    yaml.dump(self.format_for_dump(), open(self.yaml_path, "w"), allow_unicode=True)
    

    in templates.py so that the yaml files display as unicode (rather than the unicode code points). We should definitely do this, but we should do it when the corpus is relatively stable and we can regenerate (read and write) all the yaml as one commit.
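The effect is easy to see in isolation (assuming PyYAML and an invented prompt string):

```python
import yaml

data = {"prompt": "Réponds en français : ça va ?"}

# The default dump escapes non-ASCII characters into \u / \x sequences.
escaped = yaml.dump(data)

# With allow_unicode=True the characters are written as-is, so the
# yaml files stay human-readable.
readable = yaml.dump(data, allow_unicode=True)
```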

    opened by stephenbach 0
  • Fix field names for TREC

    Changes to the field names (coarse_label and fine_label) were introduced here: https://huggingface.co/datasets/trec/commit/1f97567bdd2adedefe8abdaa9bd6ee0e6725b458

    opened by VictorSanh 0
Releases: v0.2.3
Owner: BigScience Workshop (research workshop on large language models, The Summer of Language Models 21)