Translate - a PyTorch Language Library

Overview

NOTE

PyTorch Translate is now deprecated. Please use fairseq instead.


Translate is a library for machine translation written in PyTorch. It provides training for sequence-to-sequence models. Translate relies on fairseq, a general sequence-to-sequence library, which means that models implemented in both Translate and fairseq can be trained. Translate also provides the ability to export some models to Caffe2 graphs via ONNX and to load and run these models from C++ for production purposes. Currently, we export the components (encoder, decoder) to Caffe2 separately, and beam search is implemented in C++. In the near future, we will be able to export the beam search as well. We also plan to add export support for more models.

Quickstart

If you are just interested in training/evaluating MT models, and not in exporting the models to Caffe2 via ONNX, you can install Translate for Python 3 by following these few steps:

  1. Install pytorch
  2. Install fairseq
  3. Clone this repository: git clone https://github.com/pytorch/translate.git pytorch-translate && cd pytorch-translate
  4. Run python setup.py install

Provided you have CUDA installed, you should be good to go.
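
Put together, a typical quickstart session might look like the commands below. The pip package names for PyTorch and fairseq are illustrative assumptions; adjust the PyTorch install for your CUDA setup.

pip install torch      # step 1: install pytorch (pick the build matching your CUDA version)
pip install fairseq    # step 2: install fairseq
git clone https://github.com/pytorch/translate.git pytorch-translate && cd pytorch-translate   # step 3
python setup.py install    # step 4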

Requirements and Full Installation

Translate requires:

  • A Linux operating system with a CUDA compatible card
  • GNU C++ compiler version 4.9.2 or above
  • A CUDA installation. We recommend CUDA 8.0 or CUDA 9.0.
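
The checks below are illustrative (not part of the original instructions) and can help confirm that your machine meets these requirements:

g++ --version | head -n 1    # expect GCC 4.9.2 or newer
nvcc --version | grep release    # expect CUDA 8.0 or 9.0
nvidia-smi -L    # lists the CUDA-compatible GPUs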

Use Our Docker Image:

Install Docker and nvidia-docker, then run

sudo docker pull pytorch/translate
sudo nvidia-docker run -i -t --rm pytorch/translate /bin/bash
. ~/miniconda/bin/activate
cd ~/translate

You should now be able to run the sample commands in the Usage Examples section below. You can also see the available image versions at https://hub.docker.com/r/pytorch/translate/tags/.

Install Translate from Source:

These instructions were mainly tested on Ubuntu 16.04.5 LTS (Xenial Xerus) with a Tesla M60 card and a CUDA 9 installation. We highly encourage you to report an issue if you are unable to install this project for your specific configuration.

  • If you don't already have an existing Anaconda environment with Python 3.6, you can install one via Miniconda3:

    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
    chmod +x miniconda.sh
    ./miniconda.sh -b -p ~/miniconda
    rm miniconda.sh
    . ~/miniconda/bin/activate
    
  • Clone the Translate repo:

    git clone https://github.com/pytorch/translate.git
    pushd translate
    
  • Install the PyTorch conda package:

    # Set to 8 or 9 depending on your CUDA version.
    TMP_CUDA_VERSION="9"
    
    # Uninstall previous versions of PyTorch. Doing this twice is intentional.
    # Error messages about torch not being installed are benign.
    pip uninstall -y torch
    pip uninstall -y torch
    
    # This may not be necessary if you already have the latest cuDNN library.
    conda install -y cudnn
    
    # Add LAPACK support for the GPU.
    conda install -y -c pytorch "magma-cuda${TMP_CUDA_VERSION}0"
    
    # Install the combined PyTorch nightly conda package.
    conda install pytorch-nightly cudatoolkit=${TMP_CUDA_VERSION}.0 -c pytorch
    
    # Install NCCL2.
    wget "https://s3.amazonaws.com/pytorch/nccl_2.1.15-1%2Bcuda${TMP_CUDA_VERSION}.0_x86_64.txz"
    TMP_NCCL_VERSION="nccl_2.1.15-1+cuda${TMP_CUDA_VERSION}.0_x86_64"
    tar -xvf "${TMP_NCCL_VERSION}.txz"
    rm "${TMP_NCCL_VERSION}.txz"
    
    # Set some environment variables needed to link libraries correctly.
    export CONDA_PATH="$(dirname $(which conda))/.."
    export NCCL_ROOT_DIR="$(pwd)/${TMP_NCCL_VERSION}"
    export LD_LIBRARY_PATH="${CONDA_PATH}/lib:${NCCL_ROOT_DIR}/lib:${LD_LIBRARY_PATH}"
    
  • Install ONNX:

    git clone --recursive https://github.com/onnx/onnx.git
    yes | pip install ./onnx 2>&1 | tee ONNX_OUT
    

If you get a Protobuf compiler not found error, you need to install it:

conda install -c anaconda protobuf

Then, try to install ONNX again:

yes | pip install ./onnx 2>&1 | tee ONNX_OUT

  • Build Translate:

    pip uninstall -y pytorch-translate
    python3 setup.py build develop
    

Now you should be able to run the example scripts below!
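
As an optional sanity check (not part of the official steps), you can confirm that the GPU build of PyTorch is active and that the Translate trainer starts up:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python3 pytorch_translate/train.py --help | head -n 5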

Usage Examples

Note: the example commands given assume that you are at the root of the cloned GitHub repository, or that you are in the translate directory of the Docker or Amazon image. You may also need to make sure you have the Anaconda environment activated.

Training

We provide an example script to train a model for the IWSLT 2014 German-English task. We used this command to obtain a pretrained model:

bash pytorch_translate/examples/train_iwslt14.sh

The pretrained model actually contains two checkpoints, which correspond to training twice with different random initializations of the parameters; this is useful for building ensembles. This dataset is relatively small (~160K sentence pairs), so training will complete in a few hours on a single GPU.

Training with tensorboard visualization

We provide support for visualizing training stats with tensorboard. As a dependency, you will need tensorboard_logger installed.

pip install tensorboard_logger

Please also make sure that tensorboard is installed; it is also included with a tensorflow installation.
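
If tensorboard is not already present in your environment, one option (an illustrative assumption about your setup, not part of the original instructions) is to install it directly and check that both packages import:

pip install tensorboard
python3 -c "import tensorboard, tensorboard_logger; print('tensorboard logging ready')"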

You can use the above example script to train with tensorboard, but you need to change line 10 from:

CUDA_VISIBLE_DEVICES=0 python3 pytorch_translate/train.py

to

CUDA_VISIBLE_DEVICES=0 python3 pytorch_translate/train_with_tensorboard.py

The event log directory for tensorboard can be specified with the --tensorboard_dir option (default: run-1234). This directory is appended to your --save_dir argument.

For example in the above script, you can visualize with:

tensorboard --logdir checkpoints/runs/run-1234

Multiple runs can be compared by specifying different --tensorboard_dir values, e.g. run-1234 and run-2345. Then

tensorboard --logdir checkpoints/runs

can visualize stats from both runs.

Pretrained Model

A pretrained model for IWSLT 2014 can be evaluated by running the example script:

bash pytorch_translate/examples/generate_iwslt14.sh

Note the improvement in performance when using an ensemble of size 2 instead of a single model.

Exporting a Model with ONNX

We provide an example script to export a PyTorch model to a Caffe2 graph via ONNX:

bash pytorch_translate/examples/export_iwslt14.sh

This will output two files, encoder.pb and decoder.pb, which correspond to the computation of the encoder and one step of the decoder. The example exports a single checkpoint (--checkpoint model/averaged_checkpoint_best_0.pt), but it is also possible to export an ensemble (--checkpoint model/averaged_checkpoint_best_0.pt --checkpoint model/averaged_checkpoint_best_1.pt). Note that during export, you can also control a few hyperparameters such as beam search size and word and UNK rewards.
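
For a rough look at what was exported, the sketch below parses the two files as serialized Caffe2 NetDef protos and prints their op counts. This assumes the exported .pb files are plain NetDef graphs readable by the Caffe2 build installed earlier, which may not hold for every export configuration.

python3 - <<'EOF'
# Sketch: inspect exported graphs, assuming they are serialized Caffe2 NetDefs.
from caffe2.proto import caffe2_pb2

for path in ("encoder.pb", "decoder.pb"):
    net = caffe2_pb2.NetDef()
    with open(path, "rb") as f:
        net.ParseFromString(f.read())
    print(path, "ops:", len(net.op),
          "inputs:", len(net.external_input),
          "outputs:", len(net.external_output))
EOF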

Using the Model

To use the sample exported Caffe2 model to translate sentences, run:

echo "hallo welt" | bash pytorch_translate/examples/translate_iwslt14.sh

Note that the model takes in BPE inputs, so some input words need to be split into multiple tokens. For instance, "hineinstopfen" is represented as "hinein@@ stop@@ fen".
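
Joining BPE pieces back into words is just a matter of stripping the "@@ " continuation markers, as in this small illustration (a convention of the BPE tooling, not a Translate-specific API):

echo "hinein@@ stop@@ fen" | sed 's/@@ //g'    # prints: hineinstopfen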

PyTorch Translate Research

We welcome you to explore the models we have in the pytorch_translate/research folder. If you use them and encounter any errors, please paste logs and a command that we can use to reproduce the error. Feel free to contribute any bugfixes or report your experience, but keep in mind that these models are a work in progress and thus are currently unsupported.

Join the Translate Community

We welcome contributions! See the CONTRIBUTING.md file for how to help out.

License

Translate is BSD-licensed, as found in the LICENSE file.

Comments
  • Use binarized data format instead of raw text for memory/reuse efficiency

    Summary: If the user passes in a text file, it's binarized and the rest of training code uses the binarized version.

    The binarized file is either saved to the location specified by the flag (for re-use) or to a temp file.

    Add bool_flag helper type to allow flags like --reverse-source (which defaults to true) to be turned off by specifying "--reverse-source False".

    Depends on https://github.com/pytorch/fairseq/pull/161

    Differential Revision: D8021994

    fbshipit-source-id: 35dd378564887b5418e456bd8cb4d31fe2f63c04

    opened by theweiho 9
  • During Build Translate step i am getting this error, should i start all over again?

    By the way, note that the Caffe2 install never failed, but it ended "successfully" at 18%. That was shocking to me, but I let it go on.

    The current error is below:

    (base) test@dc-isb-ds-001:~/Downloads/translate/pytorch_translate/cpp/build$ cmake \
        -DCMAKE_PREFIX_PATH="${CONDA_PATH}/usr/local" \
        -DCMAKE_INSTALL_PREFIX="${CONDA_PATH}" .. \
        2>&1 | tee CMAKE_OUT

    CMake Error at CMakeLists.txt:8 (find_package):
      By not providing "FindCaffe2.cmake" in CMAKE_MODULE_PATH this project has asked
      CMake to find a package configuration file provided by "Caffe2", but CMake did
      not find one.

      Could not find a package configuration file provided by "Caffe2" with any of
      the following names:

        Caffe2Config.cmake
        caffe2-config.cmake

      Add the installation prefix of "Caffe2" to CMAKE_PREFIX_PATH or set "Caffe2_DIR"
      to a directory containing one of the above files. If "Caffe2" provides a separate
      development package or SDK, be sure it has been installed.

    -- Configuring incomplete, errors occurred!
    See also "/home/test/Downloads/translate/pytorch_translate/cpp/build/CMakeFiles/CMakeOutput.log".

    opened by arsalan993 8
  • Python version degrades

    While using the conda install -y -c caffe2 "pytorch-caffe2-cuda${TMP_CUDA_VERSION}.0-cudnn7" package, the Python version degrades to 2.7.15, and hence it is not possible to export a model using https://github.com/pytorch/translate#exporting-a-model-with-onnx

    Can this conversion be done on a CPU? Does PyTorch 1.0 remove the need for many of the above steps and facilitate the conversion process?

    opened by gvskalyan 7
  • Manifold migration for export code / export code cleanup

    Summary:

    1. Migrate export related code from Gluster to Manifold
    2. Simplify the code a bit by removing branches for batched beam. We don't export models with batched beam anymore.
    3. Following suggestions in T56261838, copy from/to gluster for now and remove the logic after downstream code (eval related T55948284 Chau) & upstream code (sweep related) finishes migration.
    4. splitter ":" -> "|"

    Differential Revision: D18185452

    Merged fb-exported 
    opened by cndn 6
  • build translate fail under gcc 5.3.1

    When I build Translate with CUDA 9.0, I hit the issues below. Did anyone meet the same issue?

    -- The C compiler identification is GNU 5.3.1
    -- The CXX compiler identification is GNU 5.3.1
    -- Found Threads: TRUE
    -- Caffe2: Found gflags with new-style gflags target.
    -- Caffe2: Found glog (include: /usr/include, library: /usr/lib64/libglog.so)
    -- Caffe2: Found protobuf with new-style protobuf targets.
    -- Caffe2: Protobuf version 3.5.0
    -- Found CUDA: /usr/local/cuda (found suitable version "9.0", minimum required is "7.0")
    -- Found cuDNN: v7.1.2 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
    -- Added CUDA NVCC flags for: -gencode;arch=compute_60,code=sm_60
    -- CMake version : 3.11.1
    -- C++ compiler : /opt/rh/devtoolset-4/root/usr/bin/c++
    -- C++ compiler version : 5.3.1
    -- CXX flags : -std=c++11 -O2 -fPIC -Wno-narrowing
    -- Caffe2 version : 0.8.2
    -- Caffe2 include path : /home/wliao2/anaconda3/envs/translate/include
    -- Have CUDA :
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/wliao2/translate/pytorch_translate/cpp/build

    (translate) [xxx]$ make 2>&1 | tee MAKE_OUT
    Scanning dependencies of target translation_decoder
    [ 16%] Building CXX object CMakeFiles/translation_decoder.dir/Decoder.cpp.o
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp:38:18: error: expected constructor, destructor, or type conversion before '(' token
     C10_DEFINE_string(encoder_model, "", "Encoder model path");
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp:39:18: error: expected constructor, destructor, or type conversion before '(' token
     C10_DEFINE_string(decoder_step_model, "", "Decoder step model path");
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp:40:18: error: expected constructor, destructor, or type conversion before '(' token
     C10_DEFINE_string(source_vocab_path, "", "Source vocab file");
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp:41:18: error: expected constructor, destructor, or type conversion before '(' token
     C10_DEFINE_string(target_vocab_path, "", "Target vocab file");
    (the same error is reported for the remaining C10_DEFINE_int/double/bool flags)
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp: In function 'int main(int, char**)':
    /home/wliao2/translate/pytorch_translate/cpp/Decoder.cpp:78:7: error: 'FLAGS_source_vocab_path' is not a member of 'c10'
     if (c10::FLAGS_source_vocab_path.empty() ||
    (similar "'FLAGS_...' is not a member of 'c10'" errors follow for every flag used in main)
    make[2]: *** [CMakeFiles/translation_decoder.dir/Decoder.cpp.o] Error 1
    make[1]: *** [CMakeFiles/translation_decoder.dir/all] Error 2
    make: *** [all] Error 2

    opened by wincent8 6
  • Following instructions to install gives compiler errors when building python on Ubuntu

    Hello,

    I tried to follow the instructions in the readme to install. First, I used my existing anaconda environment. However, it failed during the build of pytorch, with the linker complaining about missing -pthreads among other things. I did some searching and found that maybe the anaconda compilers weren't installed, so I tried to install them with

    conda install gcc_linux-64 gxx_linux-64 gfortran_linux-64

    However, these were apparently the wrong compilers, since the build immediately failed with two unrecognized command line options: -fstack-protector-strong and -fno-plt.

    After that I thought I'd try using a fresh miniconda environment rather than my existing anaconda environment, so I started over, following the miniconda installation steps in the README too. This time, it fails with the linker being unable to find -lgcc_s, which again sounds a lot like a missing compiler install. Trying to install the compilers with the same command line as above gives the same error.

    FAIR's tools - which are fantastic once I get them to work, thank you! - appear to be extremely sensitive to C++ compiler versions during installation. The same goes for NVIDIA's tools... I got stranded on this problem the last time I tried to install PyTorch from source, but then the new nightly build distributions of PyTorch saved me.

    If you could provide similar bleeding-edge binaries for ONNX and Caffe2, that'd be terrific! If not, it would be a big improvement if the build files could make sure the right compiler is used (and give sensible error messages if not).

    opened by HaraldKorneliussen 6
  • deprecated functions

    p3 instance on AWS, Ubuntu 18.04, torch 1.5.0.dev20200304, fairseq 0.9.0, pytorch-translate 0.1.0

    from ~/translate I run

    bash pytorch-translate/examples/train-transformer.sh

    where train-transformer.sh is:

    #!/bin/bash

    NCCL_ROOT_DIR="$(pwd)/nccl_2.1.15-1+cuda-10.1" export NCCL_ROOT_DIR LD_LIBRARY_PATH="${NCCL_ROOT_DIR}/lib:${LD_LIBRARY_PATH}" export LD_LIBRARY_PATH wget https://download.pytorch.org/models/translate/iwslt14/data.tar.gz tar -xvzf data.tar.gz rm -rf checkpoints data.tar.gz && mkdir -p checkpoints CUDA_VISIBLE_DEVICES=0 python3 pytorch_translate/train.py
    ""
    --arch ptt_transformer
    --lr-scheduler inverse_sqrt
    --log-verbose
    --max-epoch 100
    --stop-time-hr 72
    --stop-no-best-bleu-eval 5
    --optimizer adam
    --lr 5e-4
    --clip-norm 5.0
    --criterion label_smoothed_cross_entropy
    --label-smoothing 0.1
    --batch-size 128
    --length-penalty 0
    --unk-reward -0.5
    --word-reward 0.25
    --max-tokens 4096
    --save-dir checkpoints
    --adam-betas '(0.9, 0.98)'
    --num-avg-checkpoints 10
    --beam 2
    --no-beamable-mm
    --source-lang de
    --target-lang en
    --train-source-text-file data/train.tok.bpe.de
    --train-target-text-file data/train.tok.bpe.en
    --dropout 0.3
    --attention-dropout 0.3
    --relu-dropout 0.3
    --encoder-embed-dim 256
    --encoder-ffn-embed-dim 256
    --encoder-layers 4
    --encoder-attention-heads 4
    --decoder-embed-dim 256
    --decoder-ffn-embed-dim 256
    --decoder-layers 4
    --decoder-attention-heads 4
    --decoder-layerdrop 0.3
    --eval-source-text-file data/valid.tok.bpe.de
    --eval-target-text-file data/valid.tok.bpe.en
    --source-max-vocab-size 14000
    --target-max-vocab-size 14000
    --log-interval 10
    --seed "${RANDOM}"
    2>&1 | tee -a checkpoints/log

    I get:

    ...
    [2020-03-08 18:16:15.546057] | Finished removing old checkpoint checkpoints/checkpoint100_end.pt.
    pytorch_translate/train.py:693: UserWarning: Trainer.get_meter is deprecated. Please use fairseq.metrics instead.
      meter = trainer.get_meter(k)
    /opt/conda/conda-bld/pytorch_1583309282142/work/torch/csrc/utils/python_arg_parser.cpp:739: UserWarning: This overload of add_ is deprecated:
    	add_(Number alpha, Tensor other)
    Consider using one of the following signatures instead:
    	add_(Tensor other, Number alpha)
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "/home/ubuntu/miniconda/lib/python3.7/site-packages/fairseq-0.9.0-py3.7-linux-x86_64.egg/fairseq/logging/progress_bar.py", line 299, in _close_writers
        for w in _tensorboard_writers.values():
    NameError: name '_tensorboard_writers' is not defined
    
    opened by alrojo 4
  • Script MultiheadAttention (#1524)

    Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/1524

    Make fairseq MultiheadAttention scriptable. Looking for feedback.

    1. Add types
    2. Move incremental state management logic from util functions to initializers. TorchScript in general doesn't support global dicts. As a result, modules with multihead attention in them would assign themselves a fairseq_instance_id in the initializer.
    3. There might be opportunities to make assertions and annotations cleaner.

    Differential Revision: D18772594

    Merged fb-exported 
    opened by cndn 4
  • While running pretrained model(IWSLT 2014) , observed below errors

    Traceback (most recent call last):
      File "pytorch_translate/generate.py", line 705, in <module>
        main()
      File "pytorch_translate/generate.py", line 609, in main
        generate(args)
      File "pytorch_translate/generate.py", line 634, in generate
        args.path.split(":")
      File "/root/pytorch/fairseq/pytorch-translate/translate/pytorch_translate/utils.py", line 116, in load_diverse_ensemble_for_inference
        task = tasks.setup_task(checkpoints_data[0]["args"])
      File "/root/pytorch/fairseq/fairseq/tasks/__init__.py", line 19, in setup_task
        return TASK_REGISTRY[args.task].setup_task(args, **kwargs)
      File "/root/pytorch/fairseq/pytorch-translate/translate/pytorch_translate/tasks/pytorch_translate_task.py", line 124, in setup_task
        args.source_vocab_file
      File "/root/pytorch/fairseq/fairseq/data/dictionary.py", line 184, in load
        d.add_from_file(f, ignore_utf_errors)
      File "/root/pytorch/fairseq/fairseq/data/dictionary.py", line 201, in add_from_file
        raise fnfe
      File "/root/pytorch/fairseq/fairseq/data/dictionary.py", line 195, in add_from_file
        with open(f, 'r', encoding='utf-8') as fd:
    FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/dictionary-de.txt'

    opened by streamhsa 4
  • bi-directional transformer rescoring

    Summary: rescore the generated sentences with a bi-directional transformer. The idea is to predict every word given all surrounding words except itself, i.e., p(y_t | y_{\neq t}). The training strategy is quite like BERT, but we don't need to mask any words to avoid self-attention when we only have a single layer (check out FAIR's paper: "Cloze-driven Pretraining of Self-attention Networks").

    The loss function is unchanged (e.g., cross entropy). The score of a given hypothesis is \sum_t \log p(y_t | y_{\neq t}).

    Differential Revision: D14926425

    Merged 
    opened by qingerVT 4
  • unexpected keyword argument 'max_tokens'

    I followed "Quickstart" section to install the framework. When I run default training example (bash pytorch_translate/examples/train_iwslt14.sh) I get following error:

    Traceback (most recent call last):
      File "pytorch_translate/train.py", line 974, in <module>
        main(args, single_process_main)
      File "pytorch_translate/train.py", line 945, in main
        return single_process_train(args)
      File "pytorch_translate/train.py", line 332, in single_process_main
        extra_state, trainer, task, epoch_itr = setup_training(args)
      File "pytorch_translate/train.py", line 285, in setup_training
        shard_id=args.distributed_rank,
    TypeError: __init__() got an unexpected keyword argument 'max_tokens'
    
    

    I am not quite sure what the exact problem is here... Please tell me if you need any additional information.

    opened by delmaksym 4
  • [*.py] Rename "Arguments:" to "Args:"

    I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow—and now PyTorch—codebases: inconsistent use of Args: and Arguments: in its docstrings. It is easy enough to extend my parsers to support both variants, however it looks like Arguments: is wrong anyway, as per:

    • https://google.github.io/styleguide/pyguide.html#doc-function-args @ ddccc0f

    • https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ 9fc0fc0

    • https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ c0ae8e3

    Therefore, only Args: is valid. This PR replaces them throughout the codebase.

    PS: For related PRs, see pytorch/pytorch/pull/49736

    cla signed 
    opened by SamuelMarks 0
  • cmd-line support for loading mmap datasets

    Summary: Allows loading data from the Fairseq .idx/.bin format (including the most current "mmap" implementation) by specifying the --fairseq-binary-data-format flag.

    (Note that D16867809 added internal support for loading legacy .idx/.bin files, but did not expose an option for using that format to the command-line trainer.)

    Differential Revision: D19844619

    fb-exported cla signed 
    opened by jhcross 2
  • beam search

    According to the README:

    Currently, we export components (encoder, decoder) to Caffe2 separately and beam search is implemented in C++. In the near future, we will be able to export the beam search as well. We also plan to add export support to more models.

    Where can I find this C++ code, or is an exportable (jit) beam search already in the repo?

    opened by Oktai15 2
  • NAT bug fix

    Summary: Remove a duplicate operation. Line 1838 is doing the same thing.

    Earlier, max_iter > 1 inference was broken internally. This fixes it, though we don't observe much performance gain for max_iter > 1, and the performance for max_iter = 1 doesn't change.

    Reviewed By: kahne

    Differential Revision: D19145640

    fb-exported cla signed 
    opened by cndn 1