Ongoing research training transformer language models at scale, including: BERT & GPT-2

Overview

What is this fork of Megatron-LM and Megatron-DeepSpeed?

This is a detached fork of https://github.com/microsoft/Megatron-DeepSpeed, which is itself a fork of https://github.com/NVIDIA/Megatron-LM. The former integrates DeepSpeed into the original Megatron-LM code.

This fork, in turn, includes the direct changes to the models needed for the BigScience project. This is the repo we use for that project.

In addition, various code bits and lots of docs can be found at https://github.com/bigscience-workshop/bigscience.

Please note that the rest of this page has been trimmed to include only the information relevant to the BigScience project, and has been updated for usage with the integrated DeepSpeed. You will find the original page, with all the tables and training info on BERT and T5, here.

Setup

  1. Install bigscience-workshop/Megatron-DeepSpeed
git clone https://github.com/bigscience-workshop/Megatron-DeepSpeed
cd Megatron-DeepSpeed
pip install -r requirements.txt

You can now work directly from this repo. You don't need to install it unless you write your own scripts elsewhere that use the modules in this repo, in which case you may want to do:

pip install -e .
  2. Install apex
git clone https://github.com/NVIDIA/apex
cd apex
pip install --global-option="--cpp_ext" --global-option="--cuda_ext" --no-cache -v --disable-pip-version-check .  2>&1 | tee build.log

(on JZ it's done in a special way, see here.)

  3. Install deepspeed (the big-science branch)

Then install the big-science branch of deepspeed:

git clone https://github.com/microsoft/deepspeed deepspeed-big-science
cd deepspeed-big-science
git checkout big-science
rm -rf build
TORCH_CUDA_ARCH_LIST="7.0" DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e . --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check

Adjust TORCH_CUDA_ARCH_LIST="7.0" to the architecture of your NVIDIA GPU (or just remove it altogether if you are not sure how to find it).
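
If you are unsure of your GPU's compute capability, a quick way to check it (assuming PyTorch is already installed in the environment) is:

python -c "import torch; print(torch.cuda.get_device_capability())"

For example, (7, 0) on a V100 corresponds to TORCH_CUDA_ARCH_LIST="7.0", and (8, 0) on an A100 corresponds to TORCH_CUDA_ARCH_LIST="8.0".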

(on JZ it's done in a special way, see here.)

  4. CUDA kernels compilation

The first time you run the training scripts, several CUDA kernels will be compiled. This means you need a CUDA toolkit set up in your environment, and its version should match the one PyTorch was built with.
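
To confirm the two match, you can compare the CUDA toolkit in your environment against the CUDA version PyTorch reports, e.g.:

nvcc --version | grep release
python -c "import torch; print(torch.version.cuda)"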

Usage

After installation, there are several possible workflows. The most comprehensive is:

  1. Data preprocessing
  2. Pretraining
  3. Finetuning (Optional for zero-shot tasks)
  4. Downstream task evaluation or text generation

However, steps 1 and 2 can be replaced by using one of the pretrained models mentioned above.

We've provided several scripts for pretraining both BERT and GPT in the examples directory, as well as scripts for both zero-shot and fine-tuned downstream tasks, including MNLI, RACE, WikiText103, and LAMBADA evaluation. There is also a script for GPT interactive text generation.

Training

Vocab

The GPT vocab file and merge table can be downloaded directly.
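
For example, using the same URLs as in the quick pre-processing section further below:

wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt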

Data Preprocessing

The training data requires preprocessing. First, place your training data in a loose json format, with one json containing a text sample per line. For example:

{"src": "www.nvidia.com", "text": "The quick brown fox", "type": "Eng", "id": "0", "title": "First Part"}
{"src": "The Internet", "text": "jumps over the lazy dog", "type": "Eng", "id": "42", "title": "Second Part"}

The name of the text field of the json can be changed by using the --json-key flag in preprocess_data.py. The other metadata are optional and are not used in training.
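
For instance, if your jsonl stored the text under a key named content (a hypothetical key name), you would pass --json-key content, and the rest of the invocation would stay the same as in the example further below:

python tools/preprocess_data.py \
    --input my-corpus.json \
    --json-key content \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8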

The loose json is then processed into a binary format for training. To convert the json into mmap, cached index file, or the lazy loader format use preprocess_data.py. Set the --dataset-impl flag to mmap, cached, or lazy, respectively (default is mmap).

An example script to prepare data for GPT training is:

python tools/preprocess_data.py \
    --input my-corpus.json \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8

The output will be two files named, in this case, my-gpt2_text_document.bin and my-gpt2_text_document.idx. The --data-path specified in later GPT training is the full path and new filename, but without the file extension.
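
For example (illustrative path): if the files are /data/my-gpt2_text_document.bin and /data/my-gpt2_text_document.idx, you would set:

DATA_PATH=/data/my-gpt2_text_document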

Further command line arguments are described in the source file preprocess_data.py.

You can also use tools/preprocess_data_many_cores.py when a large number of CPU cores is available. In the typical JZ setup, where CPU nodes have up to 40 physical cores, you should run this script with around 60 workers instead of tools/preprocess_data.py. The same command line arguments are available.
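
The invocation mirrors the example above, e.g. with 60 workers:

python tools/preprocess_data_many_cores.py \
    --input my-corpus.json \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 60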

Merging datasets

Sometimes it's hard to work on a very large dataset at once, so one can pre-process it in chunks and then merge those datasets into a single combined indexed dataset. Here is an example:

python tools/merge_preprocessed_data.py \
    --datasets \
    meg-gpt2-oscar-en-500-p1_text_document \
    meg-gpt2-oscar-en-500-p2_text_document \
    meg-gpt2-oscar-en-500-p3_text_document \
    --output-prefix meg-gpt2_oscar_text_document

Quick pre-processing to start training with

Here is how you can get ready to train quickly, using a 1GB 79K-record jsonl dataset.

wget https://huggingface.co/bigscience/misc-test-data/resolve/main/stas/oscar-1GB.jsonl.xz
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
xz -d oscar-1GB.jsonl.xz
python tools/preprocess_data.py \
    --input oscar-1GB.jsonl \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8

GPT Pretraining

Note: you may want to skip to the next section, since it describes what we actually use at the moment.

The examples/pretrain_gpt.sh script runs single-GPU 345M-parameter GPT pretraining. Debugging is the primary use for single-GPU training, as the code base and command line arguments are optimized for highly distributed training. Most of the arguments are fairly self-explanatory. By default, the learning rate decays linearly over the training iterations, starting at --lr down to a minimum set by --min-lr over --lr-decay-iters iterations. The fraction of training iterations used for warmup is set by --lr-warmup-fraction. While this is single-GPU training, the batch size specified by --micro-batch-size is the batch size of a single forward-backward pass, and the code will perform gradient accumulation steps until it reaches global-batch-size, which is the batch size per iteration.

The data is partitioned into a 949:50:1 ratio for training/validation/test sets (the default is 969:30:1). This partitioning happens on the fly, but is consistent across runs with the same random seed (1234 by default, or specified manually with --seed). We use --train-iters as the number of training iterations requested. Alternatively, one can provide --train-samples, which is the total number of samples to train on. If this option is present, then instead of providing --lr-decay-iters, one will need to provide --lr-decay-samples.
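
For example, the iteration-based settings used in the script below could be replaced with sample-based ones along these lines (the sample counts here are placeholders, not recommended values):

    --train-samples 300_000_000 \
    --lr-decay-samples 200_000_000 \
    --lr-warmup-samples 3_000_000 \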

The logging, checkpoint-saving, and evaluation intervals are specified. Checkpointing the activations facilitates the training of larger models and/or batches.

The tokenization scheme used is BPE (which requires a merge table and a json vocabulary file), the model architecture allows for longer sequences (note that the max position embedding must be greater than or equal to the maximum sequence length), and the --lr-decay-style has been set to cosine decay. Note that the --data-path now includes the additional _text_document suffix added in preprocessing, but does not include the file extensions.

However, as you will see below, DeepSpeed requires a distributed environment even with a single GPU. Therefore, refer instead to pretrain_gpt_single_node.sh, which will work with this repo.

CHECKPOINT_PATH=checkpoints/gpt2
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
DATA_PATH=my-gpt2_text_document

GPT_ARGS=" \
    --num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --seq-length 1024 \
    --max-position-embeddings 1024 \
    --micro-batch-size 4 \
    --global-batch-size 8 \
    --lr 0.00015 \
    --train-iters 500000 \
    --lr-decay-iters 320000 \
    --lr-decay-style cosine \
    --vocab-file $VOCAB_FILE \
    --merge-file $MERGE_FILE \
    --lr-warmup-fraction .01 \
    --fp16 \
    "

OUTPUT_ARGS=" \
    --log-interval 10 \
    --save-interval 500 \
    --eval-interval 100 \
    --eval-iters 10 \
    --checkpoint-activations \
    "

DATA_ARGS=" \
    --save $CHECKPOINT_PATH \
    --load $CHECKPOINT_PATH \
    --data-path $DATA_PATH \
    "

CMD="pretrain_gpt.py $GPT_ARGS $OUTPUT_ARGS $DATA_ARGS"

N_GPUS=1

LAUNCHER="deepspeed --num_gpus $N_GPUS"

$LAUNCHER $CMD

Note: we replaced python with deepspeed --num_gpus 1. For multi-GPU training, update --num_gpus to the number of GPUs you have.

For multi-node training you will either need to create a hostfile that defines all the nodes, as explained here (see the hostfile sketch after the SLURM example below), or, if that doesn't work in your SLURM environment, you will need to use:

CMD=<as above>

MASTER_ADDR=`perl -le '$_=$ENV{"SLURM_JOB_NODELIST"}; s/,.*//; s/-.*//; s/\[//; print'`
MASTER_PORT=6000
GPUS_PER_NODE=4
NNODES=16

export LAUNCHER="python -u -m torch.distributed.launch \
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    "

srun --jobid $SLURM_JOBID bash -c '$LAUNCHER --node_rank $SLURM_PROCID $CMD'
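
For the hostfile route mentioned above, DeepSpeed expects one line per node in the form hostname slots=<num_gpus> (the hostnames below are placeholders):

node1 slots=8
node2 slots=8

The job is then launched with deepspeed --hostfile=path/to/hostfile instead of deepspeed --num_gpus.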

For a single GPU, another approach is to emulate the distributed environment with:

MASTER_ADDR=localhost MASTER_PORT=9994 RANK=0 LOCAL_RANK=0 python pretrain_gpt.py ...

Further command line arguments are described in the source file arguments.py.

Deepspeed PP and ZeRO-DP

To allow further flexibility we use DeepSpeed PP (pipeline parallelism) and ZeRO-DP along with Megatron's normal functionality. That is, we replace Megatron's PP with DeepSpeed's PP, and we use ZeRO-DP for DP.

It's similar to the normal Megatron-LM launcher, plus it has a deepspeed config file and a few params:

CHECKPOINT_PATH=checkpoints/gpt2
VOCAB_FILE=data/gpt2-vocab.json
MERGE_FILE=data/gpt2-merges.txt
DATA_PATH=data/meg-gpt2_oscar-combined_text_document
TENSORBOARD_PATH=output_dir/tensorboard
CODECARBON_PATH=output_dir/codecarbon

MICRO_BATCH_SIZE=1
GLOBAL_BATCH_SIZE=16
TP_SIZE=1
PP_SIZE=1

N_GPUS=2
SAVE_INTERVAL=100

#    --train-samples 10_000 \
#    --exit-interval $EXIT_INTERVAL \

#    --exit-interval 100 \
GPT_ARGS=" \
    --num-layers 2 \
    --hidden-size 64 \
    --num-attention-heads 2 \
    --seq-length 1024 \
    --max-position-embeddings 1024 \
    --micro-batch-size $MICRO_BATCH_SIZE \
    --rampup-batch-size 2 2 1_000 \
    --global-batch-size $GLOBAL_BATCH_SIZE \
    --train-samples 100 \
    --optimizer adam \
    --adam-beta1 0.9 \
    --adam-beta2 0.95 \
    --adam-eps 1e-8 \
    --lr 1e-4 \
    --lr-warmup-samples 5 \
    --clip-grad 1.0 \
    --weight-decay 1e-1 \
    --vocab-file $VOCAB_FILE \
    --merge-file $MERGE_FILE \
    --fp16 \
    "
#    --train-iters 500 \

OUTPUT_ARGS=" \
    --log-interval 10 \
    --save-interval $SAVE_INTERVAL \
    --eval-interval 100 \
    --eval-iters 10 \
    --checkpoint-activations \
    "

#    --codecarbon-dir $CODECARBON_PATH \
DATA_ARGS=" \
    --save $CHECKPOINT_PATH \
    --load $CHECKPOINT_PATH \
    --data-path $DATA_PATH \
    --tensorboard-dir $TENSORBOARD_PATH \
    --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard \
    --log-batch-size-to-tensorboard \
    --log-validation-ppl-to-tensorboard \
    "


ZERO_STAGE=1

config_json="./ds_config.json"

# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
cat <<EOT > $config_json
{
  "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE,
  "train_batch_size": $GLOBAL_BATCH_SIZE,
  "gradient_clipping": 1.0,
  "zero_optimization": {
    "stage": $ZERO_STAGE
  },
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 500,
    "hysteresis": 2,
    "min_loss_scale": 1,
    "initial_scale_power": 12
  },
  "steps_per_print": 2000,
  "wall_clock_breakdown": false
}
EOT


DEEPSPEED_ARGS=" \
    --deepspeed \
    --deepspeed_config ${config_json} \
    --zero-stage ${ZERO_STAGE} \
    --deepspeed-activation-checkpointing \
    "

ALL_ARGS="$GPT_ARGS $OUTPUT_ARGS $DATA_ARGS $DEEPSPEED_ARGS"

# if you can't stand pt-1.9 launcher noise
export LOGLEVEL=WARNING

LAUNCHER="deepspeed --num_gpus $N_GPUS"
export CMD=" \
    $LAUNCHER pretrain_gpt.py \
    --tensor-model-parallel-size $TP_SIZE \
    --pipeline-model-parallel-size $PP_SIZE \
    --distributed-backend nccl \
    $ALL_ARGS \
    "

echo $CMD

$CMD

On JZ we use a different launching command; see, for example, the end of tr1-13B-round1.slurm, which is also a good, fully functional script that you can use, except that it's written for a SLURM environment.

Using any pretrained tokenizer

Thanks to @sbmaruf, any HF pretrained tokenizer may be used instead of the Megatron-provided BERT/GPT/T5 tokenizers. You'll need to run preprocessing yourself (tools/preprocess_data.py), using --tokenizer-type PretrainedFromHF and --tokenizer-name-or-path <your_tokenizer>. For example:

python tools/preprocess_data.py \
    --input ~/c4_en_train.jsonl \
    --output-prefix c4_en_train \
    --dataset-impl mmap \
    --tokenizer-type PretrainedFromHF \
    --tokenizer-name-or-path t5-small \
    --workers 30 \
    --append-eod

Distributed Pretraining

The examples/pretrain_{bert,gpt,t5}_distributed.sh scripts use the PyTorch distributed launcher for distributed training. As such, multi-node training can be achieved by properly setting environment variables and using init_method='env://' in the launcher. See the official PyTorch documentation for further description of these environment variables. By default, multi-node training uses the nccl distributed backend. A simple set of additional arguments and the use of the PyTorch distributed module with the Python flag -m torch.distributed.launch, detailed below, are the only additional requirements to adopt distributed training.

We use two types of parallelism: data and model parallelism. We facilitate two distributed data parallel implementations: a simple one of our own that performs gradient all-reduce at the end of back propagation step, and Torch's distributed data parallel wrapper that overlaps gradient reduction with back propagation computation. To switch between these two options use --DDP-impl local or --DDP-impl torch, respectively. As expected, Torch distributed data parallelism is more efficient at larger model sizes. For example, for the 8.3 billion parameters model running on 512 GPUs, the scaling increases from 60% to 76% when Torch's distributed data parallel is used. However, the overlapping method requires more memory and for some configurations (e.g., 2.5 billion parameters using 2-way model parallel and 1.2 billion parameters with no model parallel) can make the overall training slower as a result. We empirically found that using a smaller model in those cases improves the training time.

Second, we developed a simple and efficient two-dimensional model-parallel approach. To use tensor model parallelism (splitting execution of a single transformer module over multiple GPUs), add the --tensor-model-parallel-size flag to specify the number of GPUs among which to split the model, along with the arguments passed to the distributed launcher as mentioned above. To use pipeline model parallelism (sharding the transformer modules into stages with an equal number of transformer modules on each stage, and then pipelining execution by breaking the batch into smaller microbatches), use the --pipeline-model-parallel-size flag to specify the number of stages to split the model into (e.g., splitting a model with 24 transformer layers across 4 stages would mean each stage gets 6 transformer layers each).

We have examples of how to use these two different forms of model parallelism in the example scripts ending in distributed_with_mp.sh. Note that pipeline parallelism is not currently supported in the T5 model.

Other than these minor changes, the distributed training is identical to the training on a single GPU.

Distributed training:

See the details on how to do distributed training with the deepspeed launcher a few sections up. XXX: the following needs to be updated:

WORLD_SIZE=8
TENSOR_MP_SIZE=2
PIPELINE_MP_SIZE=2

DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr localhost \
    --master_port 6000"

CHECKPOINT_PATH=<same as above>
VOCAB_FILE=<same as above>
DATA_PATH=<same as above>
MODEL_ARGS=<same as above>
OUTPUT_ARGS=<same as above>

python -m torch.distributed.launch $DISTRIBUTED_ARGS ./pretrain_<model>.py \
    $MODEL_ARGS \
    $OUTPUT_ARGS \
    --save $CHECKPOINT_PATH \
    --load $CHECKPOINT_PATH \
    --data-path $DATA_PATH \
    --tensor-model-parallel-size $TENSOR_MP_SIZE \
    --pipeline-model-parallel-size $PIPELINE_MP_SIZE \
    --DDP-impl torch

GPT-3 Example

In examples/pretrain_gpt3_175B.sh we have provided an example of how to configure Megatron to run GPT-3 with 175 billion parameters on 1024 GPUs. The script is designed for SLURM with the pyxis plugin but can be easily adapted to any other scheduler. It uses 8-way tensor parallelism and 16-way pipeline parallelism. With the options global-batch-size 1536 and rampup-batch-size 16 16 5859375, the training will start with a global batch size of 16 and linearly increase it to 1536 over 5,859,375 samples, in increments of 16. The training dataset can be either a single set or multiple datasets combined with a set of weights.
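
In flag form, the batch-size ramp-up described above corresponds to:

    --global-batch-size 1536 \
    --rampup-batch-size 16 16 5859375 \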

With full global batch size of 1536 on 1024 A100 GPUs, each iteration takes around 32 seconds resulting in 138 teraFLOPs per GPU which is 44% of the theoretical peak FLOPs.

Evaluation and Tasks

We provide several command line arguments, detailed in the scripts listed below, to handle various zero-shot and fine-tuned downstream tasks. However, you can also finetune your model from a pretrained checkpoint on other corpora as desired. To do so, simply add the --finetune flag and adjust the input files and training parameters within the original training script. The iteration count will be reset to zero, and the optimizer and internal state will be reinitialized. If the fine-tuning is interrupted for any reason, be sure to remove the --finetune flag before continuing, otherwise the training will start again from the beginning.
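
A minimal sketch of such a fine-tuning invocation (the paths and variable names here are hypothetical; the remaining arguments match the pretraining scripts above):

FINETUNE_DATA_PATH=my-finetune-corpus_text_document
PRETRAINED_CHECKPOINT_PATH=checkpoints/gpt2
FINETUNE_CHECKPOINT_PATH=checkpoints/gpt2-finetune

deepspeed --num_gpus 1 pretrain_gpt.py \
    $GPT_ARGS \
    $OUTPUT_ARGS \
    --load $PRETRAINED_CHECKPOINT_PATH \
    --save $FINETUNE_CHECKPOINT_PATH \
    --data-path $FINETUNE_DATA_PATH \
    --finetune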

Because evaluation requires substantially less memory than training, it may be advantageous to merge a model trained in parallel for use on a single GPU in downstream tasks. The following script accomplishes this. Currently only tensor model parallelism is supported on input, and pipeline model parallelism on output. This example reads in a model with 2-way tensor model parallelism and writes out a model with 2-way pipeline model parallelism.

TENSOR_MODEL_PARALLEL_SIZE=2
TARGET_PIPELINE_MODEL_PARALLEL_SIZE=2

VOCAB_FILE=bert-vocab.txt
CHECKPOINT_PATH=checkpoints/bert_345m

WORLD_SIZE=$TENSOR_MODEL_PARALLEL_SIZE python tools/merge_mp_partitions.py \
    --model-type BERT \
    --tensor-model-parallel-size $TENSOR_MODEL_PARALLEL_SIZE \
    --pipeline-model-parallel-size 1 \
    --target-pipeline-model-parallel-size $TARGET_PIPELINE_MODEL_PARALLEL_SIZE \
    --tokenizer-type BertWordPieceLowerCase \
    --vocab-file $VOCAB_FILE \
    --num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --seq-length 512 \
    --max-position-embeddings 512 \
    --load $CHECKPOINT_PATH \
    --save $CHECKPOINT_PATH/merged

Several downstream tasks are described for both GPT and BERT models below. They can be run in distributed and model parallel modes with the same changes used in the training scripts.

GPT Text Generation

bash examples/generate_text.sh

We generate text samples using largely the GPT pretraining script. A few changes need to be made, such as providing the path to the pretrained checkpoint, the length of the output samples, and whether to generate text unconditionally (--num-samples denotes how many samples to generate) or conditionally (pass --sample-input-file <filename>, where each line of the file will be used as the conditioning text). There are a few optional parameters to play with, e.g. top-k, top-p, or greedy (set top-k and top-p to 0) sampling.

CHECKPOINT_PATH=checkpoints/gpt2_345m
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
GPT_ARGS=<same as those in GPT pretraining above>

MAX_OUTPUT_SEQUENCE_LENGTH=1024
TEMPERATURE=1.0
TOP_P=0.9
NUMBER_OF_SAMPLES=2
OUTPUT_FILE=samples.json

python tools/generate_samples_gpt.py \
    $GPT_ARGS \
    --load $CHECKPOINT_PATH \
    --out-seq-length $MAX_OUTPUT_SEQUENCE_LENGTH \
    --temperature $TEMPERATURE \
    --genfile $OUTPUT_FILE \
    --num-samples $NUMBER_OF_SAMPLES \
    --top_p $TOP_P \
    --recompute

GPT Evaluation

We include example scripts for GPT evaluation on WikiText perplexity evaluation and LAMBADA Cloze accuracy.

WikiText Perplexity Evaluation

For an even comparison with prior works, we evaluate perplexity on the word-level WikiText-103 test dataset, and appropriately compute perplexity given the change in tokens when using our subword tokenizer.
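
Concretely (a standard formulation, not wording specific to this repo): if the tokenized test set contains T subword tokens but the original word-level dataset contains T_w tokens, the reported perplexity is normalized by the original word count,

\mathrm{PPL} = \exp\left( -\frac{1}{T_w} \sum_{i=1}^{T} \log p(x_i \mid x_{<i}) \right)

so the numbers stay comparable with word-level models despite the change in token count introduced by the subword tokenizer.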

We use the following command to run WikiText-103 evaluation on a 345M parameter model.

TASK="WIKITEXT103"

VALID_DATA=&#60;wikitext path&#62;.txt
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
CHECKPOINT_PATH=checkpoints/gpt2_345m

COMMON_TASK_ARGS=" \
    --num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --seq-length 1024 \
    --max-position-embeddings 1024 \
    --fp16 \
    --vocab-file $VOCAB_FILE"

python tasks/main.py \
    --task $TASK \
    $COMMON_TASK_ARGS \
    --valid-data $VALID_DATA \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file $MERGE_FILE \
    --load $CHECKPOINT_PATH \
    --micro-batch-size 8 \
    --checkpoint-activations \
    --log-interval 10 \
    --no-load-optim \
    --no-load-rng
Comments
  • Adding language specific validation sets for Multilingual model training

    Summary

    The idea of this issue is to modify Megatron-DeepSpeed to track the progress of the validation loss on several validation (periodic evaluation) sets separately.

    Currently, the validation loss is calculated on a single validation set that includes the same language combination as the training data. (see here 13B param model training on tensorboard)

    After integration of this PR, users can add extra validation sets in the following form:

    --periodic-eval-data-path \
    VALID1-FR-KR 0.1 $DATA_FR 0.2 $DATA_KR, \
    VALID2-JP-AR 0.2 $DATA_JP 0.3 $DATA_AR
    

    Validation steps will be run automatically on each dataset independently, and the results will be displayed on Tensorboard.

    What was changed

    In order not to change the current way one calls the training.py script, I opted for adding an extra argument --periodic-eval-data-path.

    Users can define extra datasets (each in a way quite similar to --data-path) to be evaluated alongside training by providing their data paths (or multiple paths with weights). Note here that the --split argument does apply to the --periodic-eval-data-path argument.

    Typical Examples for Multilingual training

    When a model is being trained on a preprocessed multilingual dataset (here called multilingual), a user can preprocess 3 monolingual datasets JP, KR, AR, and track their validation progress by passing the following arguments:

    --data-path $DATA/multilingual
    --periodic-eval-data-path \
    VALID-JP 1.0 $DATA_JP, \
    VALID-KR 1.0 $DATA_KR, \
    VALID-AR 1.0 $DATA_AR \
    

    Sometimes in multilingual training, some languages are downsampled and others are upsampled. If a user wonders how the model performs with respect to different proportions of languages, different combinations of the languages can be passed as external validation datasets.

    --data-path  0.1 $DATA/EN 0.5 $DATA/JP 0.7 $DATA/KR 1.0 $DATA/AR, 
    --periodic-eval-data-path \
    DATASET-BALANCED 1.0 $DATA_EN 1.0 $DATA_JP 1.0 $DATA_KR 1.0 $DATA_AR, \
    DATASET-NO-EN 1.0 $DATA_JP 1.0 $DATA_KR 1.0 $DATA_AR
    

    Connections with PR https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/143

    https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/143 is developed to support the use case "we can't have multilingual training data and English-only validation data at the moment". This use case is completely supported in this PR by adding an English-only dataset as one of the datasets to be evaluated periodically. Moreover, one can extend this by adding several datasets to be evaluated periodically, not just a single English-only one.

    Testing

    • Default training works 🆗
    • Integration with Tensorboard 🆗
    • Testing with real training data multiple combinations
      • 1 dataset 1 combination. 🆗
      • 1 dataset 2 combinations 🆗
      • 2 datasets 1 combinations 🆗
      • 2 datasets 2 combinations 🆗
      • 5 datasets 5 combinations 🆗

    Independent testing @lintangsutawika (in progress)

    Future Modifications (suggestions needed)

    • Adding (optional) periodic-eval-iter and periodic-eval-iterations for the periodic-eval-data-path argument. If not used, fall back to the regular --eval-interval and --eval-iters params.
    enhancement multilinguality 
    opened by hadyelsahar 50
  • Add generation server scripts using HF accelerate and DS-inference

    Currently, the scripts are working correctly. However, there is some memory leak. In https://github.com/huggingface/accelerate/issues/614, @sgugger says that it's not in accelerate, which is probably true.

    My hunch is a related bug: https://github.com/apache/incubator-mxnet/issues/19159

    opened by mayank31398 46
  • BLOOM Inference via DeepSpeed-Inference, Accelerate and DeepSpeed-ZeRO

    update: I expanded the PR to include accelerate and deepspeed ZeRO - please see README for full details


    This PR is sorting out the inference script for BLOOM via DeepSpeed-Inference https://github.com/microsoft/DeepSpeed/pull/2083

    I pushed the main script into main already, so this is just the fixes of that script.

    setup transformers

    make sure you are on the latest transformers@main

    setup DeepSpeed

    Get the DS master branch

    git clone https://github.com/microsoft/DeepSpeed
    cd DeepSpeed
    pip install -e .
    

    setup Meg-DS

    git clone https://github.com/bigscience-workshop/Megatron-DeepSpeed
    cd Megatron-DeepSpeed
    git checkout bloom-inference
    

    run the script:

    deepspeed --num_gpus 8 scripts/inference/bloom-ds-inference.py --name bigscience/bloom --benchmark --batch_size 8
    

    adapt to number of wanted gpus, use the larger models if needed.

    p.s. also added zero3-inference script

    deepspeed --num_gpus 8 scripts/inference/bloom-ds-zero-inference.py --name bigscience/bloom
    

    but you must edit the nvme path and this one is super-slow - but it works and requires only 1x 24GB GPU! and not 8x80GB :)

    On JZ you must do:

    srun --pty --account=six@a100 --constraint=a100 --reservation=hug --partition=gpu_p5 --gres=gpu:8 --nodes=1 --cpus-per-task=64 --time 4:00:00 --tasks-per-node=1 bash
    cd $six_ALL_CCFRWORK/code/inference/Megatron-DeepSpeed
    
    export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
    export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
    export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
    export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
    export HF_DATASETS_OFFLINE=1
    export TRANSFORMERS_OFFLINE=1
    
    deepspeed --num_gpus 8 scripts/inference/bloom-ds-inference.py --name bigscience/bloom
    

    (I already pre-cached bigscience/bloom-350m and bigscience/bloom so it should work from offline mode.)

    @RezaYazdaniAminabadi, @jeffra

    opened by stas00 46
  • Support skip iteration flag

    This PR resolves #175.

    • [x] Support relevant argument flag
    • [x] Add continue logic to skip iterations
    • [x] Update counter and number of consumed samples/tokens
    • [x] Add tests
    • [x] Logging
    opened by jaketae 42
  • distributed merge of per-rank Megatron data files

    This can speed up the merge step, but it requires that the user is writing the final dataset to a POSIX-compliant parallel file system, like Lustre or GPFS. Each rank identifies the file offsets for its own data using collective operations, fseeks to those sections, and writes its data.

    This adds a --merge option to preprocess_data_dist.py, which can be set to any of {parallel, serial, both}. It defaults to parallel, but one can fallback to the algorithm where rank 0 merges all files sequentially with --merge serial. A serial merge might be helpful to people where the parallel merge does not work due to lack of a POSIX-compliant parallel file system. The both option is useful for testing purposes. It merges rank files with both parallel and serial so that the resulting files can be compared with something like cmp.

    An optional --scratch option can be used to store intermediate per-rank files in storage local to the compute node, like /dev/shm, which avoids creating those files on the shared file system and offers faster write/read performance, e.g.,

    --scratch /dev/shm
    

    TODO:

    • [x] add support for torch.distributed
    • [x] avoid deadlock in case some process throws an exception
    • [x] test corner cases, e.g., rank 0 contributes 0 items
    • [x] double check why version "byte" seems to use 2 bytes -- resolved, <B is encoded as a single byte as expected

    Scaling tests: In running tests to check encoding rates at different node counts, I also get the merge time at the end. The script actually does both a parallel merge and a serial merge, so that I can compare their contents. That also provides an easy way to gather times for both. The parallel merge can optionally write the per-rank files to a scratch directory with --scratch, like /dev/shm, which removes load from the parallel file system.

    Each rank writes its own file, and I'm running 40 ranks per node. Times here are in seconds. Test results can vary based on how busy the (shared) file system is at the time. I've only taken one sample here.

    nodes:       8     16    32    64
    serial:    617    499   718     -
    parallel:   16.1   15.6  17.0  26.8
    /dev/shm:    -     14.8   -    22.4
    

    The final merged file is the same size in all cases (529GB), but as the number of ranks increases, the script generates more per-rank files with each one being smaller. The total data being processed is the same, but the file counts can vary. Ideally, if things are bandwidth bound, you'd expect a constant time across each row.

    Anyway, the main takeaway is that there is a nice boost using the parallel merge.

    The scratch times aren't showing much improvement over writing the per-rank files to the parallel file system. My guess is that the OS has cached the per-rank file in page cache, so it's reading back from memory even when the per-rank file is written to the parallel file system. There might still be some impact on the cost to create and delete those files, but I'm not recording that.

    enhancement 
    opened by adammoody 26
  • use HuggingFace Datasets as source to build Megatron data files

    This is work in progress, but I'll post it early to get feedback.

    This PR includes a few things at once.

    Updates megatron/data/indexed_dataset.py to use numpy to compute sample offsets to improve speed while writing an index file.

    • https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48/commits/f68999f738160a6df414b100959bccef95d07bf0
    • https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48/commits/a456e484734df310930e3655afc935caebfe64a9 (fix needed in addition to above commit)

    Adds a new tools/preprocess_dataset_mpi.py script:

    • uses HuggingFace datasets as the source to build (.bin and .idx) input files for megatron
    • uses MPI to distribute work via mpi4py to support multiple nodes
    • --split option to name the HF dataset split name, defaults to train
    • --columns option to specify dataset feature (column) names to process from each row
    • --shuffle option to randomly shuffle data samples
    • --seed for random number generator on shuffle operations
    • --count option to limit the number of selected samples, e.g. --count 10000 to use 10k samples
    • --mpi4py option to instruct script to use mpi4py instead of torch.distributed
    • --torch-backend option to select between gloo/mpi
    • --local_rank to support torch.distributed.launch when using torch.distributed
    • --log-interval to specify seconds between progress messages or 0 to disable

    Assuming srun has been configured to launch MPI jobs, one can run this script with something like:

    srun -n 320 -N 8 python preprocess_dataset_mpi.py \
           --input openwebtext \
           --output-prefix openwebtext-bert \
           --vocab bert-large-uncased-vocab.txt \
           --dataset-impl mmap \
           --tokenizer-type BertWordPieceLowerCase \
           --split-sentences \
           --shuffle \
           --seed 100
    

    The script can use MPI and mpi4py. It requires that a shared file system exists, like Lustre or GPFS, such that one process can read a file written by another process. In particular, there may be problems on NFS due to client-side caching.

    TODO:

    • [x] The resulting merged file is different in size from a file written directly. I need to identify the cause. -- The size discrepancy was in the index file, which is resolved with the doc_idx fix here https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/752e958cd1b9e9e8caa50f0336de7f096e1338fc/megatron/data/indexed_dataset.py#L574 With the above fix, I verified that both the .bin and .idx files are identical using cmp after disabling the data shuffle.
    • [x] I have not tested reading the resulting input files, so they may be totally bogus right now. -- Resolved after verifying merged files from multiple ranks are identical to the files produced by a single process.
    • [x] Add some more exception handling to prevent MPI deadlocks on problems. -- Resolved by surrounding per-rank I/O step with try/except block and an allreduce to check that all succeeded. Avoids merge if anyone fails, but cleans up per-rank files regardless.
    • [x] Support options for shuffling if desired, like enable/disable shuffle and define a random seed. -- Resolved by adding --shuffle and --seed options in https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48/commits/32fc48fe67dc60e303311fa2a1c800e5c80e7aba)
    • [x] Use standard convention for setting rank/size in torch.distributed.init_process_group
    • [x] Catch exceptions to avoid MPI deadlocks, but re-raise exceptions where possible
    • [x] Settle on HF_DATASETS_OFFLINE item

    In my testing with 320 procs on 8 nodes, start up takes 2 minutes, it takes about 15 minutes for all processes to write their .bin/.idx file, and the merge takes 3 more minutes. It processes the full openwebtext dataset in 20 minutes.

    Opening dataset openwebtext
    Dataset({
        features: ['text'],
        num_rows: 8013769
    })
    > building BertWordPieceLowerCase tokenizer ...
     > padded vocab (size: 30522) with 70 dummy tokens (new size: 30592)
    Seconds to startup: 146.8052520751953
    Vocab size: 30522
    Output prefix: openwebtext-6-bert
    > building BertWordPieceLowerCase tokenizer ...
     > padded vocab (size: 30522) with 70 dummy tokens (new size: 30592)
    Seconds to tokenize: 912.1747498512268
    Documents= 8013769 docs/sec= 8785.344037759238
    Sentences= 307384162 sent/sec= 336979.46807904245
    Bytes= 39386887788 bytes/sec= 43179103.34003862
    Merging rank files ...
    Merging file openwebtext-6-bert_text_sentence_0
    <snip>
    Seconds to merge: 169.25763988494873
    Merged 320 files into openwebtext-6-bert
    Bytes= 17425293230 bytes/sec= 102951295.0898091
    Deleting rank files ...
    
    real	20m51.461s
    user	0m0.197s
    sys	0m0.084s
    

    The file system hosting the source dataset, the intermediate .bin/.idx files, and the final merged file is GPFS, which provides 120GB/s write bandwidth.

    opened by adammoody 26
  • Implement prefix-lm as in the T5 paper

    AFAIU, the current implementation ~~uses the first 50% as the prefix~~ doesn't support prefix-lm. The T5 paper samples the prefix length randomly from 0 to max_sequence_length (which is 2048 in our case).

    arch&scale 
    opened by ibeltagy 22
  • Eval harness

    Providing the functionality for running the EleutherAI evaluation harness on megatron checkpoints, addressing #137.

    In order to run on JZ we need to cache the tasks locally since the GPU nodes do not have internet access; eval_harness/download.py provides that functionality.

    Currently pipeline parallel models work but model parallel needs to be tested.

    opened by DanielHesslow 21
  • Cannot import C++ compiled "helpers"

    I am having errors importing the compiled C++ helpers.cpp into python in gpt_dataset.py

                # Use C++ implementation for speed.
                # First compile and then import.
                 
                from megatron.data import helpers
                assert doc_idx.dtype == np.int32
                assert sizes.dtype == np.int32
                sample_idx = helpers.build_sample_idx(sizes, doc_idx, seq_length,
                                                      num_epochs, tokens_per_epoch)
                # sample_idx = _build_sample_idx(sizes, doc_idx, seq_length,
                #                               num_epochs, tokens_per_epoch)
    

    Leading to the following error

    ImportError: cannot import name 'helpers' from 'megatron.data' (XXXX/Megatron-DeepSpeed/megatron/data/__init__.py)
    Killing subprocess 28861
    

    update: I managed to isolate the problem by compiling the helpers.cpp separately using gcc v9.3.1. Compilation works but loading import helpers doesn't work.

    (venv-megatron) bash-4.2$ pwd
    XXX/Megatron-DeepSpeed/megatron/data
    
    (venv-megatron) bash-4.2$ gcc --version | head -1 
    gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
    
    (venv-megatron) bash-4.2$ make
    make: python3-config: Command not found
    make: python3-config: Command not found
    g++ -O3 -Wall -shared -std=c++11 -fPIC -fdiagnostics-color -I/nfs/core/python/3.9/include/python3.9 -I/xxxx/bigscience/venv-megatron/lib/python3.9/site-packages/pybind11/include helpers.cpp -o helpers
    
    (venv-megatron) bash-4.2$ ipython
    Python 3.9.4 (default, Apr  7 2021, 12:46:00)
    Type 'copyright', 'credits' or 'license' for more information
    IPython 7.27.0 -- An enhanced Interactive Python. Type '?' for help.
    
    In [1]: import helpers
    ---------------------------------------------------------------------------
    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input-1-d249c8495052> in <module>
    ----> 1 import helpers
    
    ModuleNotFoundError: No module named 'helpers'
    
    In [2]:
    

    posted the same issue on the original repo : https://github.com/NVIDIA/Megatron-LM/issues/143

    bug 
    opened by hadyelsahar 20
  • Generation server using HF accelerate and DS inference

    This PR is for adding scripts for creating a generation server using both HF accelerate and DeepSpeed inference. It depends on https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/308; there are some redundant methods in some scripts that can be removed once that PR is merged into the main branch.

    opened by mayank31398 19
  • Prefix lm

    Support for prefix-lm

    • Fixed: #21

    We provide basic support for prefix-lm:

    • Randomly select a split for documents on unsupervised setting
    • Support per document prefix

    Notable choices:

    • code duplication with gpt script: I've tried to explain my point of view in this comment. https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/52#discussion_r684154051

    TODO:

    • [x] Support Megatron style prefix, ie consider the entire sequence as the document instead of splitting sequences into documents. Pending discussion.
    • [ ] ~Allow prefix on other scripts than pretrain_gpt.py, ie evaluation scripts of such. (Probably on another PR)~ Need to have a script to evaluate using prefix lm.
    • [ ] Create parsing mechanism to obtain prefix split on labeled data. Typically in prompts we should feed the prompts as the prefix and generate the target. (Probably on another PR)
    • [x] Probably support one last split after the last eod if possible.
    • [x] Write test. TBD how do we want to setup tests in the repo? UT vs end to end? You can find some tests here https://github.com/thomasw21/Megatron-DeepSpeed/pull/1 which will be merged after this branch is merged.
    enhancement arch&scale 
    opened by thomasw21 18
  • Add UL2 data sampling and pretraining

    This adds pretraining using UL2 for encoder-decoder, non-causal decoder-only, and causal decoder-only models. I have not yet run large-scale tests to see if it yields the desired training improvements, but I wanted to give others the option to take a look at the code already.

    opened by janEbert 2
  • User Warnings for accessing grad attribute of non-leaf Tensors thrown with TP=1 and PP>1

    Problem

    On pretraining GPT-like models using this script, with tensor_parallelism (TP) = 1 and pipeline_parallelism (PP) > 1, for a model of any size and batch size, I get the following user warning multiple times:

    [default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default3]:  return self._grad
    [default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default2]:  return self._grad
    [default1]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default1]:  return self._grad
    [default0]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default0]:  return self._grad
    [default3]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default3]:  return self._grad
    [default2]:/p/software/juwelsbooster/stages/2022/software/PyTorch/1.11-gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/lib/python3.9/site-packages/torch/_tensor.py:1104: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /dev/shm/strube1/juwelsbooster/PyTorch/1.11/gcccoremkl-11.2.0-2021.4.0-CUDA-11.5/pytorch/build/aten/src/ATen/core/TensorBody.h:470.)
    [default2]:  return self._grad
    
    

    The training does not halt and the logs look fine, but the warning about gradients not being populated during back-propagation is concerning. It seems that TP=1 creates non-leaf tensors and TP>1 creates leaf tensors, which is rather confusing to me. From here, tensors that are the result of an operation are not leaf tensors, but again, why only for TP=1?

    From my observations, the warning happens only at the beginning of training, and the number of times it appears is equal to the number of models trained (data parallel, DP) times the number of pipeline passes. For example, for a 6.7B parameter model:

    1. Nodes=4; GPUs=16; TP=1; PP=2 => DP=8; number of pipeline passes = PP-1 = 1; the warning appears 8*1 = 8 times
    2. Nodes=8; GPUs=32; TP=1; PP=8 => DP=4; number of pipeline passes = PP-1 = 7; the warning appears 4*7 = 28 times

    Also, looking at the pipeline communication here, the tensors are communicated with the requires_grad=True flag.

    System and Repo specifics:

    • PyTorch Version : 1.11
    • PyTorch Cuda Version : 11.5
    • NVCC Version : 11.5
    • NCCL : 2.12.7+cuda11.5
    • Deepspeed : https://github.com/microsoft/DeepSpeed/tree/0f5c2012ce19c36936be562094ef3d73b230e364 (Deepspeed wheel compiled with torch 1.12, cuda 11.6)
    • Megatron-DeepSpeed : https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/7b5f175b73a12d602cdadd3ee205ed666f6d4234
    • Apex : https://github.com/NVIDIA/apex/tree/21e415479b134309a2ba1af95b2319b7bf068f7a
    • GPU model: NVIDIA A100 Tensor Core GPU with 40 GB; Each node contains 4 GPUs connected via NVLink3 to each other. More info on system here

    Example Launch Command

    >>Megatron-DeepSpeed/pretrain_gpt.py --tensor-model-parallel-size 1 --pipeline-model-parallel-size 2 --num-layers 32 --hidden-size 4096 --num-attention-heads 32 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 2 --global-batch-size 2048 --train-samples 69_335_938 --vocab-file vocab.json --merge-file merges.txt --loss-scale 12 --fp16 --seed 42 --checkpoint-activations --train-tokens 142_000_000_000 --optimizer adam --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-8 --lr 1.2e-4 --min-lr 1.2e-5 --lr-decay-style cosine --lr-decay-samples 126_953_125 --lr-warmup-samples 183_105 --clip-grad 1.0 --weight-decay 1e-1 --log-interval 1 --save-interval 300 --eval-interval 300 --tensorboard-dir tensorboard --tensorboard-queue-size 5 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --save checkpoints --data-path merged_german_only --split 949,50,1 --data-impl mmap --distributed-backend nccl --deepspeed --deepspeed_config ds_config.6416902.json --zero-stage 1 --deepspeed-activation-checkpointing
    
    opened by chelseajohn 3
  • deepspeed_to_megatron several issues

    1. A recent commit removed tools/convert_checkpoint/deepspeed_checkpoint.py but there is still an attempt to import it in tools/convert_checkpoint/deepspeed_to_megatron.py. The other scripts in the folder appear to be ok. I guess the import on line 7 should be changed from from .deepspeed_checkpoint import ARGS_KEY, DeepSpeedCheckpoint to from deepspeed.checkpoint.deepspeed_checkpoint import ARGS_KEY, DeepSpeedCheckpoint?

    2. https://github.com/microsoft/Megatron-DeepSpeed/issues/91 applies here as well.

    3. The function _renest_sd, defined on line 90, splits the keys, but I encounter a ValueError: too many values to unpack (expected 2), as one of the keys is named word_embeddings.norm.weight, containing two dots. There are probably more such layer names. I'm attempting to convert my own model's bf16_zero (pp=1, tp=1) checkpoints to megatron and/or hf format. It's entirely possible this is all my own wrongdoing. If so, please advise how to successfully convert.

    opened by MatejUlcar 0
  • Load Bloom Optimizer State (i.e. Bloom 1B1)

    Hi,

    I want to continue training of the Bloom model. To start simple, I want to load the 1.1B model into the BigScience Megatron-DeepSpeed library.

    I tried to run pretrain_gpt.py with the argument "load" set to the path of the 1b1 optimizer state (from here https://huggingface.co/bigscience/bloom-1b1-optimizer-states/tree/main/global_step660750)

    It is complaining that there is no meta file available.

    Before I continue to debug or make changes to this library, I was just wondering whether there is a better way/already implemented way to load the optimizer state.

    Best wishes Philipp

    opened by philippmtk 2