Original Implementation of Prompt Tuning from Lester et al., 2021

Overview

Prompt Tuning

This is the code to reproduce the experiments from the EMNLP 2021 paper "The Power of Scale for Parameter-Efficient Prompt Tuning" (Lester et al., 2021).

These models are built on T5X, which defines the model and training loop; Flaxformer, which defines the actual model computation; Flax, which defines the low-level model layers; and JAX, which provides the actual execution. Details of our implementation can be found here.


Installation

  1. Follow the first 3 steps in the T5X installation instructions to create a Cloud TPU VM. Also follow step 5 and create a Google Cloud Storage (GCS) bucket. We will read and write data to this bucket using a URI formatted like gs://{bucket-name}/path/to/item/in/bucket. This is where we will store cached datasets as well as model checkpoints and results. For ease of reference, some of the most common cloud commands for interacting with the TPU VMs are:
# Create a Cloud TPU VM
$ gcloud alpha compute tpus tpu-vm create ${TPU_NAME} \
    --zone ${ZONE} \
    --accelerator-type v3-8 \
    --version v2-alpha

# SSH into a Cloud TPU VM
$ gcloud alpha compute tpus tpu-vm ssh ${TPU_NAME} --zone ${ZONE}

# Delete a Cloud TPU VM
$ gcloud alpha compute tpus tpu-vm delete ${TPU_NAME} --zone ${ZONE}
  2. You should now be at the command line of the TPU VM instance. Clone the Prompt Tuning repository.
git clone --branch=main https://github.com/google-research/prompt-tuning
cd prompt-tuning
  3. Install the Prompt Tuning library.
python3 -m pip install . -f https://storage.googleapis.com/jax-releases/libtpu_releases.html

Note: If you plan to hack on the internals of prompt tuning and need an editable install (so changes in the cloned code are used when you run training), run pip with the -e flag; you may also need to delete the pyproject.toml file if you get errors during installation.
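For example, a minimal sketch of an editable install, assuming you are at the root of the clone:

# Editable install: changes to the cloned code are picked up when you run training.
$ python3 -m pip install -e . -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
# If the editable install fails, deleting pyproject.toml and re-running the command may help.
$ rm pyproject.toml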

To run the tests, install the package with the [test] option (python3 -m pip install .[test] ...) and then run python3 -m pytest from the root of the cloned repository.

Training a Prompt

Training a prompt is similar to fine-tuning a model with T5X; the main difference is that we have our own set of Prompt Tuning configuration files to use.

We provide a demo script (prompt_tuning/scripts/sst2-demo.sh) that has all the required parts for training a prompt. You can use this as a starting point, or set the MODEL_DIR and TFDS_DATA_DIR environment variables to paths in your Google Cloud Storage bucket and run this script directly.

./prompt-tuning/prompt_tuning/scripts/sst2-demo.sh
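The demo script reads its output and data locations from the environment; a minimal sketch of setting them beforehand (the bucket name and paths are placeholders):

# Where checkpoints, prompts, and results will be written.
$ export MODEL_DIR=gs://my-bucket/prompt-tuning/sst2
# Where TensorFlow Datasets will cache data.
$ export TFDS_DATA_DIR=gs://my-bucket/tfds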

To help with iteration speed, we tend to specify many options on the command line rather than bundling all of the configuration into a single gin file. A few options of note (see the sketch after this list for how they fit together):

  • --gin_search_paths :: a comma-separated list of directories to use as path prefixes for gin files. We can use python3 -m prompt_tuning.scripts.find_module ${module} to find the install location of libraries that bundle configurations with them.
  • --gin_file :: The gin file to load. We tend to use paths that start with the library they are installed in, e.g. prompt_tuning/configs/models/t5_1_1_base_prompt.gin over models/t5_1_1_base_prompt.gin, to avoid any confusion. The flag can be passed multiple times to specify multiple gin files, which get merged together; any configuration option set in multiple files uses the value from the last file in the list.
  • --gin.{PARAM}={VALUE} :: This general override flag sets PARAM to VALUE. It can be used to set configuration options without requiring them to be actual command-line arguments. For example, --gin.utils.SaveCheckpointConfig.keep=20 will keep the last 20 checkpoints.
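Putting these together, a hypothetical training invocation might look like the following sketch; the entry point, gin file names, and directory variables are illustrative, so see the bundled demo scripts for a complete, working command.

# PROMPT_DIR should point at a directory where the prompt_tuning gin configs can be found
# (for example, located with prompt_tuning.scripts.find_module as described above).
$ python3 -m t5x.train \
    --gin_search_paths="${PROMPT_DIR}" \
    --gin_file="prompt_tuning/configs/models/t5_1_1_base_prompt.gin" \
    --gin_file="prompt_tuning/configs/runs/prompt_finetune.gin" \
    --gin.MODEL_DIR="'${MODEL_DIR}'" \
    --gin.utils.SaveCheckpointConfig.keep=20 \
    --tfds_data_dir=${TFDS_DATA_DIR}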

Training a Prompt on a Pod Slice

As models get larger, xl and xxl for example, they do not fit on the 8 TPUs that come with a single TPU VM. In these cases we will need a slice of a TPU pod (more information about TPU architecture and available configurations can be found here). The main difference between training a prompt on a single TPU VM and on a Pod slice is that we now have multiple TPU VMs and will run the same SPMD JAX program on each VM; this page has more information on multi-host JAX programs. This guide gives a quick introduction to running JAX programs on a TPU Pod slice, but we will hit the main points here.

  1. Create a TPU Pod slice. This page lists which accelerator types are available in which zones. This is the same as creating a TPU VM above, except that we are requesting 32 TPUs instead of 8.
$ gcloud alpha compute tpus tpu-vm create ${TPU_NAME} \
    --zone ${ZONE} \
    --accelerator-type v3-32 \
    --version v2-alpha
  2. Install the Prompt Tuning library. Given that we now have 4 TPU VMs, each with 8 of our 32 TPUs, we want to forgo ssh'ing directly into a VM, as we would need to do that for each host. Instead, the Google Cloud SSH command allows us to specify a command to run with the --command= flag and, with --worker=all, to run it on all our VMs (called workers).
$ gcloud alpha compute tpus tpu-vm ssh ${TPU_NAME} \
  --zone ${ZONE} \
  --worker=all \
  --command="git clone --branch=main https://github.com/google-reserach/prompt-tuning && cd prompt-tuning && "
python3 -m pip install . -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
  3. Write the script to train your prompt. We included a demo script (prompt_tuning/scripts/sst2-xxl-demo.sh) that trains a prompt to solve the SST2 dataset using T5 1.1 lm100k XXL. You can use this as a starting point, or just fill in the paths to your Google Cloud Storage bucket to specify where you want to save your results (MODEL_DIR) and where to cache TFDS data (TFDS_DATA_DIR), or set them as environment variables.

  4. Copy your training script to each worker. If this is your first time running scp, you may get an error; run the ssh-add /.../.ssh/google_compute_engine command from the error message and try again.

$ gcloud alpha compute tpus tpu-vm scp sst2-xxl-demo.sh ${TPU_NAME}: \
  --zone=${ZONE} \
  --worker=all
  5. Execute your training script.
$ gcloud alpha compute tpus tpu-vm ssh ${TPU_NAME} \
  --zone ${ZONE} \
  --worker=all \
  --command="./sst2-xxl-demo.sh"

If one of the workers has an error during training, you will be left with processes that are using the TPUs on the other workers. This will stop you from restarting your job until those processes are terminated and release the TPU. The following command should end all of these processes. You may see the kill command man page come back from the worker that had the initial error.

$ gcloud alpha compute tpus tpu-vm ssh ${TPU_NAME} \
  --zone ${ZONE} \
  --worker=all \
  --command="sudo lsof -t /dev/accel0 | xargs kill -9"

Custom Dependencies

To train prompts using custom parts, like your own dataset, follow the T5X Instructions on Custom Components.

If you package your code as a pip-installable python package, you won't be bound to a single directory, and you can use python3 -m prompt_tuning.scripts.find_module {your_module} to help set the gin_search_paths so that gin configs bundled in your library are findable. Note: If you do plan to bundle gin configs in an installable package, make sure that the directories that contain the config files have an __init__.py as gin requires files to be in a python package.
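A hedged sketch of wiring a hypothetical installed package (your_package is a placeholder) into the gin search path:

# Print the install location of your package so its bundled gin configs can be found.
$ CUSTOM_DIR=`python3 -m prompt_tuning.scripts.find_module your_package`
# Then pass it, along with the prompt tuning configs, when launching training:
# --gin_search_paths="${PROMPT_DIR},${CUSTOM_DIR}"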

If parts of your custom components are gin configurable, they need to be explicitly imported in your gin files; if they end up getting imported after the gin files are parsed, they will cause an error. If none of your dependencies contain gin configurables, you can avoid writing a gin file by passing --gin.MIXTURE_OR_TASK_MODULE="'path.to.your.module'". This will automatically import your module, which is convenient when all you are doing is swapping out datasets.

Inference with a Prompt

Our suggested way to do inference with a prompt is to load the original checkpoint used to initialize the model, and the prompt from a file. As explained in this section about partial loading, T5X supports loading some model parameters while initializing others from scratch. We use this in conjunction with the from_array prompt initializer to reload the frozen parameters from the original checkpoint and the prompt from a file. The configs/runs/prompt_eval.gin config sets this up for you; you just have to supply a PROMPT_FILE. If your model was trained with any of the prompts/ config files, you can remove them from the arguments to the evaluation script.

The included sst2-demo-eval.sh script shows an example of doing evaluation this way. All that is needed is to set the EVAL_DIR and TFDS_DATA_DIR environment variables to the paths to store the output of evaluation and the tensorflow datasets cache, respectively.
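For example, a minimal sketch of setting these variables before invoking the script (the bucket name and paths are placeholders):

# Where evaluation results will be written.
$ export EVAL_DIR=gs://my-bucket/prompt-tuning/sst2-eval
# Where TensorFlow Datasets will cache data.
$ export TFDS_DATA_DIR=gs://my-bucket/tfds
$ ./prompt-tuning/prompt_tuning/scripts/sst2-demo-eval.sh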

In T5X, the evaluation script assumes that your dataset has labels and outputs the final results from your dataset's metric functions. The inference script does not require labels and instead outputs your model's prediction. We include an analogous prompt_infer.gin file to use with the inference script.

If you want to do inference or evaluation with the t5x checkpoint that is produced from a prompt tuning training run, you can use the (eval|infer).gin config from T5X directly. You will need to update the utils.RestoreCheckpointConfig though: set path to the new checkpoint, assignment_map=(), and fallback_to_scratch=False.
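A hedged sketch of those overrides as command-line flags; the eval gin path is assumed from the T5X layout, the checkpoint path is a placeholder, and other required settings (mixture, feature lengths, output dir, etc.) are omitted:

$ python3 -m t5x.eval \
    --gin_file="t5x/configs/runs/eval.gin" \
    --gin.utils.RestoreCheckpointConfig.path="'gs://my-bucket/prompt-tuning/sst2/checkpoint_1150000'" \
    --gin.utils.RestoreCheckpointConfig.assignment_map="()" \
    --gin.utils.RestoreCheckpointConfig.fallback_to_scratch="False"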

Model Configuration

All model, training, evaluation, saving, restoring, etc. configuration is done via gin. See the gin-config repository for a general introduction to gin, and this primer.

We follow the T5X configuration layout:

  • runs/ :: contains configs for the actual training of the model. This is where things like dataset and evaluation configuration go.
  • architectures/ :: contains configs for how the model works. This is where things like encoder-decoder vs decoder-only and embedding sharing are configured.
  • models/ :: contains configs that set model specific parameters like the number of layers or the size of the embedding table. It also configures things like the T5X model wrapper used.
  • decoding/ :: contains easy-to-use configs to swap out how the model generates text during inference; includes configs for beam search and nucleus sampling.
  • prompts/ :: our extra directory containing configs that set the PROMPT gin variable, allowing for easy switching of the prompt initialization based on which prompt file is added as a --gin_file argument (it needs to come after the models/ gin file).

Order of gin config files

When specifying --gin_file arguments on the command line, the order matters. The general order in which the gin files must be specified is (see the sketch after this list):

  1. models/*.gin
  2. prompts/*.gin
  3. models/decoding/*.gin
  4. runs/*.gin
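For example, a hypothetical set of --gin_file flags in the required order (the decoding file name is illustrative; the others appear elsewhere in this README):

  --gin_file="prompt_tuning/configs/models/t5_1_1_base_prompt.gin" \
  --gin_file="prompt_tuning/configs/prompts/from_class_labels.gin" \
  --gin_file="prompt_tuning/configs/models/decoding/beam_search.gin" \
  --gin_file="prompt_tuning/configs/runs/prompt_finetune.gin"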

Required Fields

T5X has some required fields like MIXTURE_OR_TASK_NAME or TASK_FEATURE_LENGTHS. We add two more:

  • PROMPT_LENGTH :: The length of the prompt we are using. This is used in a few different places, so we require it as a gin macro that we can reference in multiple places to keep the values in sync.
  • PROMPT :: This is the configuration of the actual prompt module that will be used in the Flaxformer PromptX subclasses.

Note: Prompt Tuning does not currently support packing of examples. This means that our max target length only needs to be long enough to fit the target for each example, so the targets key in the TASK_FEATURE_LENGTHS mapping can be much shorter: for example, around 4 for many SuperGLUE (Wang et al., 2019) tasks, compared to the T5X default of 62.
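For example, a SuperGLUE classification task with short targets might be configured with an override like the following (the lengths are illustrative):

  --gin.TASK_FEATURE_LENGTHS="{'inputs': 512, 'targets': 4}"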

Prompt Initialization

There are several options for the initialization of the prompt parameter. We support the various methods in section 3.2 of our paper, as well as initialization from a file. The latter allows one to do things like train on BoolQ starting from a prompt learned on MNLI.

All initializers follow the flax initializer API: a parameterized function that returns a closure over the actual initialization function. The initialization function always has the signature

def initializer(rng: Array, shape: Sequence[int]) -> Array:
  ...

We provide each initialization scheme as a gin configuration file in the configs/prompts directory. They can be used by including the gin file with --gin_file=path/to/configs/prompts/scheme.gin. This file needs to come after the main model file, otherwise the default (random uniform) method will overwrite the one you selected. Some of these initialization methods require you to set extra gin values, either through an override flag or in one of your gin files.

Random Uniform

A standard random initialization, similar to what people have used for embedding initialization. This is the default and no gin file is required. The scale of the random values can be adjusted by overriding prompt_init/linen.initializers.uniform.scale=N.

Sampled Vocab

Sample a token embedding to use as the initialization for each prompt position with the from_sample_of_embeddings initializer. You can limit the sampling to the first n embeddings with the prompt_init/prompts.from_sample_of_embeddings.population_size parameter.

This can be used with --gin_file=prompt_tuning/configs/prompts/from_sampled_vocab.gin. This method requires that you provide a value for EMBEDDING_FILE, a numpy array of the model's embedding table. This can be extracted from a model checkpoint using prompt_tuning.scripts.extract_variable.
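A hedged sketch of the flags involved, appended to a training command (the embedding file path and population size are placeholders; extract the embedding table with prompt_tuning.scripts.extract_variable first, its flags are omitted here):

  --gin_file="prompt_tuning/configs/prompts/from_sampled_vocab.gin" \
  --gin.EMBEDDING_FILE="'gs://my-bucket/embeddings.npy'" \
  --gin.prompt_init/prompts.from_sample_of_embeddings.population_size=5000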

Class Label

We support initializing prompt timesteps with the embeddings of class labels (a.k.a. verbalizers) via the from_embedded_list initializer. Users provide a list of words (the class labels) to use. Each word is tokenized by a provided vocab; embedded with a provided embedding table; aggregated, if need be, across sub-tokens; and used to initialize a prompt time-step. If the provided labels don't cover the full prompt length, a provided fall-back initializer is used for the remaining positions.

We can match the paper, where unfilled prompt positions are filled by sampling from the embedding table, by composing this initializer with the one above. It can be used with --gin_file=prompt_tuning/configs/prompts/from_class_labels.gin. This requires setting an EMBEDDING_FILE (the same as above) and CLASS_LABELS, a list of the words you want to embed as the prompt initialization.
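A hedged sketch of the flags involved (the labels shown are plausible SST2 verbalizers, used here only as an example; the embedding file path is a placeholder):

  --gin_file="prompt_tuning/configs/prompts/from_class_labels.gin" \
  --gin.EMBEDDING_FILE="'gs://my-bucket/embeddings.npy'" \
  --gin.CLASS_LABELS="['positive', 'negative']"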

From File

You can also load a prompt from a file with the from_array initializer to enable transfer across tasks. This is done with --gin_file=prompt_tuning/configs/prompts/from_file.gin. It requires setting PROMPT_FILE, a path to the Numpy file with the prompt to load. Numpy versions of the prompt are emitted by default when training, but a prompt can also be extracted from a checkpoint with the script mentioned above.
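A hedged sketch of transferring a prompt trained on another task, e.g. MNLI, into a new training run (the file path is a placeholder):

  --gin_file="prompt_tuning/configs/prompts/from_file.gin" \
  --gin.PROMPT_FILE="'gs://my-bucket/mnli/prompt.npy'"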

Released Model Checkpoints

We have released T5X-native versions of the T5 1.1 checkpoints that have had 100K steps of language model adaptation.

These are converted from the public Mesh TensorFlow checkpoints.

Released Prompts

We have released pretrained prompts on a variety of tasks, and plan to add to them over time.

Prompts can be found in the pretrained_prompts directory. From there each sub-directory groups prompts by the model they were trained for. The easiest way to reference these prompts that are bundled with the library is:

  --PROMPT_FILE=`python3 -m prompt_tuning.scripts.find_module prompt_tuning`/pretrained_prompts/{MODEL_SIZE}/{PROMPT}.npy

Due to the inherent randomness of parallel computation, there are a few settings that need to match between training and evaluation to get the exact same numbers. Each model sub-directory has a README.md that specifies what these settings should be. The most important settings to match are batch size, TPU topology, and model parallelism partitioning. The tables include the scores you should expect to see if you use these prompts with t5x.eval.

Extra Resources

This is a collection of additional resources about Prompt Tuning.

How to Cite

If you use this work as a jumping-off point, please cite:

@inproceedings{lester-etal-2021-power,
    title = "The Power of Scale for Parameter-Efficient Prompt Tuning",
    author = "Lester, Brian  and
      Al-Rfou, Rami  and
      Constant, Noah",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.243",
    pages = "3045--3059",
}

Note

This is not an officially supported Google product.

Comments
  • Custom Prompt Module Doesn't Override Checkpoint

    Expected Behavior

    I changed the prompt to my custom prompt class and loaded the model from a t5x checkpoint. It's expected to override the original prompt module in the checkpoint.

    Actual Behavior

    Even though it reads the gin file as reported below

    I1101 23:22:13.777275 140512320414336 gin_utils.py:65] # Parameters for prompts.CustomPrompt:
    I1101 23:22:13.777303 140512320414336 gin_utils.py:65] # ==============================================================================
    I1101 23:22:13.777332 140512320414336 gin_utils.py:65] prompts.CustomPrompt.dtype = %ACTIVATION_DTYPE
    I1101 23:22:13.777361 140512320414336 gin_utils.py:65] prompts.CustomPrompt.length = %PROMPT_LENGTH
    I1101 23:22:13.777390 140512320414336 gin_utils.py:65] prompts.CustomPrompt.prompt_weight_init = \
    I1101 23:22:13.777419 140512320414336 gin_utils.py:65]     @prompt_weight_init/linen.initializers.constant()
    

    However, it loads the original prompt module from the model checkpoint.

    Steps to Reproduce the Problem

    1. Create a dummy custom class; it doesn't have to have real content, but it should be runnable and syntactically correct.
    2. Set the prompt config from L40-L43 here
    3. Run the training script

    Specifications

    • Version: None
    • Platform: None
    opened by hepengfe 4
  • Question about your paper and possible research topic?

    @blester125 Does soft prompt tuning imply that we can avoid the catastrophic forgetting that occurs in multitasking, perhaps by using a classifier or a series of action codes to predict the next task to use? Let me know what you think and what you have seen. I am still working through a way to do this with soft prompt tuning in pytorch and experimenting.

    opened by ArEnSc 3
  • partitioning issues during inference on v3-32

    Hi,

    I was running into a partitioning issue on tpus. Also referenced here https://github.com/google-research/t5x/issues/250

    Actual Behavior

    I was running inference on prompt-tuning and ran into an issue when doing inference on a v3-32 with the partitioning with TypeError: 'ShapeDtypeStruct' object is not iterable. Training works fine on a v3-32, and training and inference work fine on a v3-8.

    Here is the traceback

    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/eval.py", line 234, in <module>
        gin_utils.run(main)
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/gin_utils.py", line 103, in run
        app.run(
      File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/eval.py", line 213, in main
        _main(argv)
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/eval.py", line 231, in _main
        evaluate_using_gin()
      File "/home/dptam/.local/lib/python3.8/site-packages/gin/config.py", line 1605, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/dptam/.local/lib/python3.8/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/dptam/.local/lib/python3.8/site-packages/gin/config.py", line 1582, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/eval.py", line 127, in evaluate
        train_state_initializer = utils.TrainStateInitializer(
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/utils.py", line 365, in __init__
        self.train_state_axes = partitioner.get_mesh_axes(
      File "/home/dptam/.local/lib/python3.8/site-packages/t5x/partitioning.py", line 826, in get_mesh_axes
        mesh_axes_dict = jax.tree_map(flax_partitioning.logical_to_mesh_axes,
      File "/home/dptam/.local/lib/python3.8/site-packages/jax/_src/tree_util.py", line 178, in tree_map
        return treedef.unflatten(f(*xs) for xs in zip(*all_leaves))
      File "/home/dptam/.local/lib/python3.8/site-packages/jax/_src/tree_util.py", line 178, in <genexpr>
        return treedef.unflatten(f(*xs) for xs in zip(*all_leaves))
      File "/home/dptam/.local/lib/python3.8/site-packages/flax/linen/partitioning.py", line 154, in logical_to_mesh_axes
        axis_name_counts = collections.Counter(array_dim_names)
      File "/usr/lib/python3.8/collections/__init__.py", line 552, in __init__
        self.update(iterable, **kwds)
      File "/usr/lib/python3.8/collections/__init__.py", line 637, in update
        _count_elements(self, iterable)
    TypeError: 'ShapeDtypeStruct' object is not iterable
      In call to configurable 'evaluate' (<function evaluate at 0x7f784d161700>)
    Rewritten gin arg: --gin_bindings=MIXTURE_OR_TASK_NAME = 'glue_rte_32_shot_32_seed'
    Rewritten gin arg: --gin_bindings=MIXTURE_OR_TASK_MODULE = 'prompt_tuning.data.few_glue'
    Rewritten gin arg: --gin_bindings=TASK_FEATURE_LENGTHS = {'inputs': 512, 'targets': 8}
    Rewritten gin arg: --gin_bindings=CHECKPOINT_PATH = 'gs://nicl/pretrained_models/t5x_checkpoints/t0_3b/checkpoint_1112000'
    Rewritten gin arg: --gin_bindings=EVAL_OUTPUT_DIR = 'gs://nicl/checkpoint_models/rte/32_shot/32_seed/prompt-tuning/t0-3b/eval'
    Rewritten gin arg: --gin_bindings=utils.DatasetConfig.split = 'validation'
    Rewritten gin arg: --gin_bindings=utils.DatasetConfig.batch_size = 128
    Rewritten gin arg: --gin_bindings=USE_CACHED_TASKS = False
    Rewritten gin arg: --gin_bindings=partitioning.ModelBasedPjitPartitioner.model_parallel_submesh = (4, 4, 1, 2)
    Rewritten gin arg: --gin_bindings=PROMPT_FILE = 'gs://nicl/checkpoint_models/rte/32_shot/32_seed/prompt-tuning/t0-3b/numpy_checkpoints/checkpoint_1112300/encoder.prompt.prompt.prompt'
    ##### Command execution on worker 0 failed with return code 1. Continuing.
    ##### Command execution on worker 3 failed with return code 1. Continuing.
    ##### Command execution on worker 1 failed with return code 1. Continuing.
    ##### Command execution on worker 2 failed with return code 1. Continuing.
    

    Adam said there might be a bug in the prompt-tuning configs

    Steps to Reproduce the Problem

    The gin config used:

    python3 -m t5x.eval \
      --gin_search_paths="${T5X_DIR},${FLAXFORMER_DIR},${PROMPT_DIR}" \
      --gin_file="prompt_tuning/configs/models/t5_1_1_xl_prompt.gin" \
      --gin_file="prompt_tuning/configs/runs/prompt_eval.gin" \
      --gin.MIXTURE_OR_TASK_NAME="'glue_rte_32_shot_32_seed'" \
      --gin.MIXTURE_OR_TASK_MODULE="'prompt_tuning.data.few_glue'" \
      --gin.TASK_FEATURE_LENGTHS="{'inputs': 512, 'targets': 8}" \
      --gin.CHECKPOINT_PATH="'${PRETRAINED_MODEL}'" \
      --gin.EVAL_OUTPUT_DIR="'${EVAL_DIR}'" \
      --gin.utils.DatasetConfig.split="'validation'" \
      --gin.utils.DatasetConfig.batch_size="128" \
      --gin.USE_CACHED_TASKS="False" \
      --gin.partitioning.ModelBasedPjitPartitioner.model_parallel_submesh="(4, 4, 1, 2)" \
      --gin.PROMPT_FILE="'${PROMPT_FILE}'" \
      --tfds_data_dir=${TFDS_DATA_DIR}
    

    Specifications

    tpu v3-32

    opened by dptam 3
  • How to turn off evaluating in gin config during training?

    Hi,

    Expected Behavior

    I am trying to turn off evaluation during training. I thought the way to do this is to set train_eval_dataset_cfg=None and infer_eval_dataset_cfg=None. I tried setting this in configs/runs/prompt_finetune.gin after the include "t5x/configs/runs/finetune.gin" line with

    from t5x import train as t5x_train
    t5x_train.train:
      train_eval_dataset_cfg = None
      infer_eval_dataset_cfg = None
    

    I also tried several other variants of importing train, but none seemed to work.

    Actual Behavior

    The model still does evaluation, and when I checked the values of train_eval_dataset_cfg and infer_eval_dataset_cfg passed into the train method in train.py, they are not None, but the original values set in t5x/configs/runs/finetune.gin.

    Specifications

    • Version: Uses commit f2e111d5a61a0f82feae7f4838862b92659f27ac

    Thanks

    opened by dptam 2
  • Documentation of the Negative results I found using Perceptron Training for Prompt Tuning.

    While the results here are negative, there are some interesting findings w.r.t. trained prompt norms and the implementation of training where each possible output is scored could be useful for applications that have some outputs that are "known bads".

    opened by copybara-service[bot] 1
  • Compare with Adapter-style models

    Hi, thank you for sharing your wonderful work!

    I'm curious about the comparison between prompt tuning and current parameter-efficient tuning methods (e.g., Adapter, prefix-tuning, LoRA, BitFit, etc.). Have you tried comparing against these methods? I note that in this preprint paper, the performance of prompt tuning (or SPoT) is lower than Adapter (or BitFit). Is this because the number of trainable parameters in the prompt-tuning style is too small?

    Looking forward to hearing your valuable views! Thanks in advance.

    opened by Ericmututu 0
  • GPU version is needed!

    Hi,

    Thanks for reading my issue. I've learned from your amazing paper, The Power of Scale for Parameter-Efficient Prompt Tuning. I'm wondering whether I can train my prompt-based model with a GPU on my PC, or whether it must be done via Google Cloud TPU. In other words, could you please provide another version of the code that relies only on GPUs?

    Looking forward to your reply! Thanks

    Chen

    opened by TITONIChen 2
  • How to achieve faster convergence speed?

    Hi,

    Thank you for the simple yet elegant work!

    I wonder if you have encountered any convergence issues? I am finetuning an XGLM (4.5B) for a text generation task, with only 100 tunable prefix embeddings. However, for a training dataset with only one sentence, it takes a long time (~200 steps for short sentences, ~1600 steps for sentences of length 50) for the model to overfit that sentence.

    It seems that 100 tunable embeddings may have limited representation power. So do you think it has the ability to fit a large corpus (10 million ~ 1 billion sentences) ?

    opened by un-certainty 1