Sequence to Sequence Models with PyTorch

Overview

This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch.

At present it has implementations for:

* Vanilla Sequence to Sequence models

* Attention-based Sequence to Sequence models from https://arxiv.org/abs/1409.0473 and https://arxiv.org/abs/1508.04025

* Faster attention mechanisms using dot products between the **final** encoder and decoder hidden states

* Sequence to Sequence autoencoders (experimental)

Sequence to Sequence models

A vanilla sequence to sequence model, presented in https://arxiv.org/abs/1409.3215 and https://arxiv.org/abs/1406.1078, consists of using a recurrent neural network such as an LSTM (http://dl.acm.org/citation.cfm?id=1246450) or GRU (https://arxiv.org/abs/1412.3555) to encode a sequence of words or characters in a source language into a fixed-length vector representation, and then decoding from that representation using another RNN in the target language.
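
As a rough illustration of this encoder-decoder setup (a minimal sketch only, not the repository's actual model code; the class and variable names below are made up for the example):

    import torch.nn as nn

    class MinimalSeq2Seq(nn.Module):
        """Sketch: encode the source into a fixed-length state, decode with another LSTM."""

        def __init__(self, src_vocab, trg_vocab, emb_dim=512, hid_dim=1024):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb_dim)
            self.trg_emb = nn.Embedding(trg_vocab, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, trg_vocab)

        def forward(self, src, trg):
            # Encode the source sequence; keep only the final hidden/cell state.
            _, (h, c) = self.encoder(self.src_emb(src))
            # Decode the target sequence conditioned on that fixed-length state.
            dec_out, _ = self.decoder(self.trg_emb(trg), (h, c))
            return self.out(dec_out)  # logits over the target vocabulary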

Sequence to Sequence

An extension of sequence to sequence models that incorporates an attention mechanism was presented in https://arxiv.org/abs/1409.0473; it uses information from the encoder RNN's hidden states in the source language at each time step of the decoder RNN. This attention mechanism significantly improves performance on tasks like machine translation. A few variants of the attention model for the task of machine translation were presented in https://arxiv.org/abs/1508.04025.
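
Schematically, soft attention recomputes a context vector from all encoder states at every decoder step. The sketch below uses the simple dot-product score from the Luong et al. paper (the Bahdanau et al. model uses an additive MLP score instead); it is an illustration, not the repository's implementation:

    import torch
    import torch.nn.functional as F

    def attention_step(dec_h, enc_states):
        # dec_h:      (batch, hid)          current decoder hidden state
        # enc_states: (batch, src_len, hid) all encoder hidden states
        scores = torch.bmm(enc_states, dec_h.unsqueeze(2)).squeeze(2)   # (batch, src_len)
        alpha = F.softmax(scores, dim=1)                                # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)  # (batch, hid)
        return context, alpha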

Sequence to Sequence with attention

The repository also contains a simpler and faster variant of the attention mechanism that doesn't attend over the hidden states of the encoder at each time step in the decoder. Instead, it computes a single batched dot product between all the hidden states of the decoder and encoder, once, after the decoder has processed all inputs in the target. This comes at a minor cost in model performance. One advantage of this model is that it is possible to use the cuDNN LSTM in the attention-based decoder as well, since the attention is computed after running through all the inputs in the decoder.
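
In other words, instead of one attention computation per decoder step, all decoder states attend to all encoder states in a single batched matrix multiplication once decoding is done. A rough sketch of the idea (not the repository's exact code; it also assumes the encoder and decoder hidden dimensions match):

    import torch
    import torch.nn.functional as F

    def fast_attention(dec_states, enc_states):
        # dec_states: (batch, trg_len, hid) all decoder hidden states
        # enc_states: (batch, src_len, hid) all encoder hidden states
        scores = torch.bmm(dec_states, enc_states.transpose(1, 2))  # (batch, trg_len, src_len)
        alpha = F.softmax(scores, dim=2)
        context = torch.bmm(alpha, enc_states)                      # (batch, trg_len, hid)
        return context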

Results on English-French WMT14

The following presents the model architecture and results obtained when training on the WMT14 English-French dataset. The training data is the English-French bitext from Europarl-v7. The validation dataset is newstest2011.

The model was trained with the following configuration (a rough PyTorch sketch of these settings follows the results table below):

* Source and target word embedding dimensions - 512

* Source and target LSTM hidden dimensions - 1024

* Encoder - 2 Layer Bidirectional LSTM

* Decoder - 1 Layer LSTM

* Optimization - Adam with a learning rate of 0.0001 and a batch size of 80

* Decoding - Greedy decoding (argmax)

| Model | BLEU | Train Time Per Epoch |
| --- | --- | --- |
| Seq2Seq | 11.82 | 2h 50min |
| Seq2Seq FastAttention | 18.89 | 3h 45min |
| Seq2Seq Attention | 22.60 | 4h 47min |

Times reported are with a pre-2016 Nvidia GeForce Titan X.
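
As a rough sketch of how this configuration could map onto PyTorch modules (the vocabulary sizes below are placeholders, and the variable names are illustrative rather than the repository's):

    import torch.nn as nn
    import torch.optim as optim

    SRC_VOCAB, TRG_VOCAB = 80000, 80000  # placeholder vocabulary sizes

    src_emb = nn.Embedding(SRC_VOCAB, 512)
    trg_emb = nn.Embedding(TRG_VOCAB, 512)
    # 2-layer bidirectional encoder; its outputs have dimension 2 * 1024 and are
    # typically projected down (or split) before being used by the decoder.
    encoder = nn.LSTM(512, 1024, num_layers=2, bidirectional=True, batch_first=True)
    decoder = nn.LSTM(512, 1024, num_layers=1, batch_first=True)
    proj = nn.Linear(1024, TRG_VOCAB)

    params = (list(src_emb.parameters()) + list(trg_emb.parameters()) +
              list(encoder.parameters()) + list(decoder.parameters()) +
              list(proj.parameters()))
    optimizer = optim.Adam(params, lr=1e-4)  # batches of 80 come from the data pipeline

    # Greedy decoding picks the argmax over the output logits at each step, e.g.:
    # next_token = proj(dec_hidden).argmax(dim=-1)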

Running

To run, edit the config file and execute `python nmt.py --config <your_config_file>`.

NOTE: This only runs on a GPU for now.
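
The config file is a JSON file read by data_utils.read_config (see the read_config issue below); the exact schema is defined by the repository's sample configs. Purely as an illustration, with hypothetical field names:

    import json

    # Hypothetical keys for illustration only -- consult the sample configs shipped
    # with the repository for the real schema.
    example_config = {
        "model": {"dim_word_src": 512, "dim_word_trg": 512, "dim": 1024},
        "training": {"optimizer": "adam", "lrate": 0.0001, "batch_size": 80},
    }

    with open("my_config.json", "w") as f:
        json.dump(example_config, f, indent=2)

    # Then run: python nmt.py --config my_config.json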

Comments
  • Questions about the implementation

    Hello!

    I am reading your implementation line by line, and I find it nice and easy to follow. Thanks a lot! But I still have some questions. Since I haven't finished reading yet, I guess I will have more later on.

    You set requires_grad of the two initial hidden states to False (code). Could you explain why you did this? I thought they should be True for back-propagation. Also, would it be wrong if we set them to True?

    opened by RangoHU 7
  • how to run the code with beam search?

    Dear authors,

    Thanks for sharing your code. It is well structured and easy to read, but I still encountered a problem running seq2seq with beam search.

    In evaluate.py, you have declared that beam search is a TODO. In decode.py I found that the BeamSearchDecoder has been implemented, so I tried to run decode.py, but it throws an exception like this:

    Traceback (most recent call last):
      File "decode.py", line 479, in <module>
        decoder.translate()
      File "decode.py", line 242, in translate
        hypotheses, scores = decoder.decode_batch(j)
      File "decode.py", line 153, in decode_batch
        context
      File "/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
        result = self.forward(*input, **kwargs)
    TypeError: forward() takes at most 3 arguments (4 given)
    

    So, could you tell me how to run decode.py or give me some suggestions for implementing the beam search part? Any suggestions would be highly appreciated.

    Thanks.

    opened by wanyao1992 5
  • Does summarization.py work?

    Hey, thanks for your implementations.

    Trying to get the summarization code running, but I have a feeling it doesn't actually work (yet). Am I correct to assume so? For example, it's calling read_nmt_data instead of read_summarization_data, and you've removed the file in your refactor branch.

    Any tips on getting it to run?

    opened by Coolnesss 4
  • xentropy mask

    Thanks for putting this code up! It's very clean.

    My question is similar to https://github.com/MaximumEntropy/Seq2Seq-PyTorch/issues/6.

    At https://github.com/MaximumEntropy/Seq2Seq-PyTorch/blob/master/nmt.py#L184, it seems like you aren't masking your losses, so outputting pad tokens is part of the supervision. Is this the case?
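
    (For reference, the usual way to keep pad tokens out of the loss is to ignore the pad index in the cross-entropy; an illustrative sketch, not this repository's code:)

        import torch.nn.functional as F

        def masked_xent(logits, targets, pad_idx):
            # logits: (batch, trg_len, vocab), targets: (batch, trg_len)
            return F.cross_entropy(
                logits.view(-1, logits.size(-1)),
                targets.view(-1),
                ignore_index=pad_idx)  # pad positions contribute nothing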

    opened by rpryzant 2
  • SoftAttentionDot isn't exactly the attention mechanism from Luong et al., is it?

    In their paper, the input at the next time step is the concatenation of h_tilde and the actual input. However, this code seems to use h_tilde as the hidden state for the next time-step LSTM computation.

    Brilliant code base, btw.

    opened by MurtyShikhar 2
  • question about the 'batch_mask'

    Hi, your code has a nice abstraction, thanks for sharing. But I have a question about the 'attentionLSTM': it seems that you didn't use any 'ctx_mask' or 'trg_mask' in the code related to the attention part. Won't this cause errors in the attention? I'm new to PyTorch, hope for your reply!

    opened by ypruan 2
  • Download Script for WMT Data?

    Hey @MaximumEntropy, thanks for such a nice, clean repo. I was wondering if there was a specific script you used to download the WMT data. Maybe you can point us to what you used?

    Also, do you mind sharing how many training examples there are in the WMT data? It looks like you have ~5hr train time per epoch. I was wondering how many training examples were in each epoch.

    opened by NickShahML 2
  • Bugs in Seq2Seq model

    Hi, the code has a nice abstraction and is easy to follow. Thanks!!

    However, there are some issues in your implementation...

    (code)

    If you don't pass c_t through a Linear layer from the encoder hidden size to the decoder hidden size, then the code crashes (the encoder and decoder can have different dimensions).

    (code)

    When self.decoder.num_layers != 1, the view function will crash because of a dimension mismatch.

    opened by JianLiu91 1
  • ValueError: Expecting property name: line 6 column 3 (char 83)

    File "/home/mb75502/Seq2Seq-PyTorch/data_utils.py", line 27, in read_config json_object = json.load(open(file_path, 'r')) File "/home/mb75502/anaconda2/lib/python2.7/json/init.py", line 291, in load **kw) File "/home/mb75502/anaconda2/lib/python2.7/json/init.py", line 339, in loads return _default_decoder.decode(s) File "/home/mb75502/anaconda2/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/home/mb75502/anaconda2/lib/python2.7/json/decoder.py", line 380, in raw_decode obj, end = self.scan_once(s, idx) ValueError: Expecting property name: line 6 column 3 (char 83)

    opened by SuperAlexander 0
  • Share tokenized data

    @MaximumEntropy Could you, or anyone reading this, share the tokenized version of the data here? It's really important that I run this, but I can't install the Mosesdecoder on my server (which is shared).

    opened by wishvivek 0
  • get_best() returns index 1

    Hi. In beam_search.py you have a function get_best(). Shouldn't this return the first element of the sorted list, i.e. index 0, instead of index 1?

    opened by VSJMilewski 1
  • RuntimeError: bool value of Tensor with more than one value is ambiguous

    While running your code, I encountered this error.

    Traceback (most recent call last):
      File "nmt.py", line 181, in <module>
        decoder_logit = model(input_lines_src, input_lines_trg)
      File "/home/cmaurya1/code/py2.7/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/cmaurya1/code/seq2seq/seq2seq_maximum_entropy/model.py", line 841, in forward
        ctx_mask
      File "/home/cmaurya1/code/py2.7/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/cmaurya1/code/seq2seq/seq2seq_maximum_entropy/model.py", line 382, in forward
        output.append(isinstance(hidden, tuple) and hidden[0] or hidden)
    RuntimeError: bool value of Tensor with more than one value is ambiguous
    

    Any hints on how to solve this?

    opened by chandreshiit 1
Owner
Sandeep Subramanian
MILA (Universite de Montreal) Formerly CMU | MSR | FAIR | Element AI