Gathers machine learning and TensorFlow deep learning models for NLP problems, 1.13 <= TensorFlow < 2.0

Overview


MIT License


NLP-Models-Tensorflow gathers machine learning and TensorFlow deep learning models for NLP problems; 100% of the code lives in simplified Jupyter Notebooks.


Objective

The original implementations are quite complex and not very beginner friendly, so I tried to simplify most of them. There are also implementations of many not-yet-released papers, so feel free to use them for your own research!

I attach the GitHub repositories for models that I did not implement from scratch; basically I copied, pasted, and fixed that code for deprecation issues.

Tensorflow version

TensorFlow version 1.13 and above only, excluding the 2.X versions: 1.13 <= TensorFlow < 2.0.

pip install -r requirements.txt
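
Before opening any notebook it is worth confirming that the installed TensorFlow satisfies this constraint. A minimal sketch of such a check (this helper is not part of the repository):

    import tensorflow as tf
    from distutils.version import LooseVersion

    # The notebooks target the 1.x API, roughly 1.13 <= tensorflow < 2.0.
    assert LooseVersion('1.13.0') <= LooseVersion(tf.__version__) < LooseVersion('2.0.0'), (
        'found tensorflow %s, expected 1.13 <= version < 2.0' % tf.__version__)
    print('tensorflow', tf.__version__, 'ok')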

Contents

Abstractive Summarization

Trained on Indian news.

Accuracy is based on 10 epochs only and is calculated using word positions; a rough sketch of this metric follows the list.

Complete list (12 notebooks)
  1. LSTM Seq2Seq using topic modelling, test accuracy 13.22%
  2. LSTM Seq2Seq + Luong Attention using topic modelling, test accuracy 12.39%
  3. LSTM Seq2Seq + Beam Decoder using topic modelling, test accuracy 10.67%
  4. LSTM Bidirectional + Luong Attention + Beam Decoder using topic modelling, test accuracy 8.29%
  5. Pointer-Generator + Bahdanau, https://github.com/xueyouluo/my_seq2seq, test accuracy 15.51%
  6. Copynet, test accuracy 11.15%
  7. Pointer-Generator + Luong, https://github.com/xueyouluo/my_seq2seq, test accuracy 16.51%
  8. Dilated Seq2Seq, test accuracy 10.88%
  9. Dilated Seq2Seq + Self Attention, test accuracy 11.54%
  10. BERT + Dilated CNN Seq2seq, test accuracy 13.5%
  11. self-attention + Pointer-Generator, test accuracy 4.34%
  12. Dilated-CNN Seq2seq + Pointer-Generator, test accuracy 5.57%
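
A minimal sketch of what "accuracy calculated using word positions" can mean, i.e. comparing the predicted and reference summaries token by token at each position (this is my reading of the description above, not code taken from the notebooks):

    def position_accuracy(pred_tokens, true_tokens):
        # Fraction of positions where the predicted word equals the reference word,
        # normalized by the reference length.
        if not true_tokens:
            return 0.0
        matches = sum(1 for p, t in zip(pred_tokens, true_tokens) if p == t)
        return matches / len(true_tokens)

    print(position_accuracy('the cat sat on mat'.split(), 'the cat sat on the mat'.split()))  # 0.666...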

Chatbot

Trained on the Cornell Movie-Dialogs Corpus; the accuracy table is in chatbot.

Complete list (54 notebooks)
  1. Basic cell Seq2Seq-manual
  2. LSTM Seq2Seq-manual
  3. GRU Seq2Seq-manual
  4. Basic cell Seq2Seq-API Greedy
  5. LSTM Seq2Seq-API Greedy
  6. GRU Seq2Seq-API Greedy
  7. Basic cell Bidirectional Seq2Seq-manual
  8. LSTM Bidirectional Seq2Seq-manual
  9. GRU Bidirectional Seq2Seq-manual
  10. Basic cell Bidirectional Seq2Seq-API Greedy
  11. LSTM Bidirectional Seq2Seq-API Greedy
  12. GRU Bidirectional Seq2Seq-API Greedy
  13. Basic cell Seq2Seq-manual + Luong Attention
  14. LSTM Seq2Seq-manual + Luong Attention
  15. GRU Seq2Seq-manual + Luong Attention
  16. Basic cell Seq2Seq-manual + Bahdanau Attention
  17. LSTM Seq2Seq-manual + Bahdanau Attention
  18. GRU Seq2Seq-manual + Bahdanau Attention
  19. LSTM Bidirectional Seq2Seq-manual + Luong Attention
  20. GRU Bidirectional Seq2Seq-manual + Luong Attention
  21. LSTM Bidirectional Seq2Seq-manual + Bahdanau Attention
  22. GRU Bidirectional Seq2Seq-manual + Bahdanau Attention
  23. LSTM Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  24. GRU Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  25. LSTM Seq2Seq-API Greedy + Luong Attention
  26. GRU Seq2Seq-API Greedy + Luong Attention
  27. LSTM Seq2Seq-API Greedy + Bahdanau Attention
  28. GRU Seq2Seq-API Greedy + Bahdanau Attention
  29. LSTM Seq2Seq-API Beam Decoder
  30. GRU Seq2Seq-API Beam Decoder
  31. LSTM Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  32. GRU Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  33. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  34. GRU Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  35. Bytenet
  36. LSTM Seq2Seq + tf.estimator
  37. Capsule layers + LSTM Seq2Seq-API Greedy
  38. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  39. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder + Dropout + L2
  40. DNC Seq2Seq
  41. LSTM Bidirectional Seq2Seq-API + Luong Monotonic Attention + Beam Decoder
  42. LSTM Bidirectional Seq2Seq-API + Bahdanau Monotonic Attention + Beam Decoder
  43. End-to-End Memory Network + Basic cell
  44. End-to-End Memory Network + LSTM cell
  45. Attention is all you need
  46. Transformer-XL
  47. Attention is all you need + Beam Search
  48. Transformer-XL + LSTM
  49. GPT-2 + LSTM
  50. CNN Seq2seq
  51. Conv-Encoder + LSTM
  52. Tacotron + Greedy decoder
  53. Tacotron + Beam decoder
  54. Google NMT

Dependency-Parser

Trained on the CONLL English Dependency dataset; the train set is used for training, and the dev and test sets for testing.

Stackpointer and Biaffine-attention are originally from https://github.com/XuezheMax/NeuroNLP2, written in PyTorch.

Accuracy is based on arc, type, and root accuracies after 15 epochs only; a rough sketch of these metrics follows the list.

Complete list (8 notebooks)
  1. Bidirectional RNN + CRF + Biaffine, arc accuracy 70.48%, types accuracy 65.18%, root accuracy 66.4%
  2. Bidirectional RNN + Bahdanau + CRF + Biaffine, arc accuracy 70.82%, types accuracy 65.33%, root accuracy 66.77%
  3. Bidirectional RNN + Luong + CRF + Biaffine, arc accuracy 71.22%, types accuracy 65.73%, root accuracy 67.23%
  4. BERT Base + CRF + Biaffine, arc accuracy 64.30%, types accuracy 62.89%, root accuracy 74.19%
  5. Bidirectional RNN + Biaffine Attention + Cross Entropy, arc accuracy 72.42%, types accuracy 63.53%, root accuracy 68.51%
  6. BERT Base + Biaffine Attention + Cross Entropy, arc accuracy 72.85%, types accuracy 67.11%, root accuracy 73.93%
  7. Bidirectional RNN + Stackpointer, arc accuracy 61.88%, types accuracy 48.20%, root accuracy 89.39%
  8. XLNET Base + Biaffine Attention + Cross Entropy, arc accuracy 74.41%, types accuracy 71.37%, root accuracy 73.17%
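
A rough sketch of how the arc, type, and root accuracies can be computed for a single parsed sentence (assumed definitions; the notebooks may differ in detail):

    def parse_accuracies(pred_heads, pred_types, gold_heads, gold_types):
        # heads: predicted/gold head index per token (0 = root); types: dependency labels.
        n = len(gold_heads)
        arc_correct = sum(1 for i in range(n) if pred_heads[i] == gold_heads[i])
        type_correct = sum(1 for i in range(n)
                           if pred_heads[i] == gold_heads[i] and pred_types[i] == gold_types[i])
        gold_root = gold_heads.index(0)
        root_correct = int(pred_heads[gold_root] == 0)
        return arc_correct / n, type_correct / n, root_correct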

Entity-Tagging

Trained on CONLL NER.

Complete list (9 notebooks)
  1. Bidirectional RNN + CRF, test accuracy 96%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 93%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 95%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 69%
  7. Char Ngrams + Attention Is All You Need + CRF, test accuracy 90%
  8. BERT, test accuracy 99%
  9. XLNET-Base, test accuracy 99%
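
Most of the taggers above follow the usual BiLSTM + CRF recipe. A compressed sketch of how that is typically wired in TensorFlow 1.x with tf.contrib.crf (assumed shapes and variable names, not copied from the notebooks):

    import tensorflow as tf

    def birnn_crf_loss(word_ids, tag_ids, seq_lens, vocab_size, num_tags, dim=128):
        # Embed words and run a bidirectional LSTM over the sentence.
        embeddings = tf.get_variable('embeddings', [vocab_size, dim])
        x = tf.nn.embedding_lookup(embeddings, word_ids)                     # [B, T, dim]
        (out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
            tf.nn.rnn_cell.LSTMCell(dim), tf.nn.rnn_cell.LSTMCell(dim),
            x, sequence_length=seq_lens, dtype=tf.float32)
        logits = tf.layers.dense(tf.concat([out_fw, out_bw], -1), num_tags)  # [B, T, num_tags]

        # CRF layer: maximize the log-likelihood of the gold tag sequences.
        log_likelihood, transitions = tf.contrib.crf.crf_log_likelihood(
            logits, tag_ids, seq_lens)
        return tf.reduce_mean(-log_likelihood), logits, transitions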

Extractive Summarization

Trained on CNN News dataset.

Accuracy is based on ROUGE-2; a bare-bones sketch of the metric follows the list.

Complete list (4 notebooks)
  1. LSTM RNN, test accuracy 16.13%
  2. Dilated-CNN, test accuracy 15.54%
  3. Multihead Attention, test accuracy 26.33%
  4. BERT-Base
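
For reference, a bare-bones ROUGE-2 recall computation; the notebooks presumably use a fuller implementation, so treat this only as an illustration of the metric:

    from collections import Counter

    def rouge_2_recall(candidate, reference):
        # Bigram overlap between candidate and reference, normalized by reference bigrams.
        def bigrams(tokens):
            return Counter(zip(tokens, tokens[1:]))
        cand, ref = bigrams(candidate.split()), bigrams(reference.split())
        if not ref:
            return 0.0
        return sum((cand & ref).values()) / sum(ref.values())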

Generator

Trained on Shakespeare dataset.

Complete list (15 notebooks)
  1. Character-wise RNN + LSTM
  2. Character-wise RNN + Beam search
  3. Character-wise RNN + LSTM + Embedding
  4. Word-wise RNN + LSTM
  5. Word-wise RNN + LSTM + Embedding
  6. Character-wise + Seq2Seq + GRU
  7. Word-wise + Seq2Seq + GRU
  8. Character-wise RNN + LSTM + Bahdanau Attention
  9. Character-wise RNN + LSTM + Luong Attention
  10. Word-wise + Seq2Seq + GRU + Beam
  11. Character-wise + Seq2Seq + GRU + Bahdanau Attention
  12. Word-wise + Seq2Seq + GRU + Bahdanau Attention
  13. Character-wise Dilated CNN + Beam search
  14. Transformer + Beam search
  15. Transformer XL + Beam search

Language-detection

Trained on Tatoeba dataset.

Complete list (1 notebook)
  1. Fast-text Char N-Grams

Neural Machine Translation

Trained on English-French; the accuracy table is in neural-machine-translation.

Complete list (53 notebooks)

  1. basic-seq2seq
  2. lstm-seq2seq
  3. gru-seq2seq
  4. basic-seq2seq-contrib-greedy
  5. lstm-seq2seq-contrib-greedy
  6. gru-seq2seq-contrib-greedy
  7. basic-birnn-seq2seq
  8. lstm-birnn-seq2seq
  9. gru-birnn-seq2seq
  10. basic-birnn-seq2seq-contrib-greedy
  11. lstm-birnn-seq2seq-contrib-greedy
  12. gru-birnn-seq2seq-contrib-greedy
  13. basic-seq2seq-luong
  14. lstm-seq2seq-luong
  15. gru-seq2seq-luong
  16. basic-seq2seq-bahdanau
  17. lstm-seq2seq-bahdanau
  18. gru-seq2seq-bahdanau
  19. basic-birnn-seq2seq-bahdanau
  20. lstm-birnn-seq2seq-bahdanau
  21. gru-birnn-seq2seq-bahdanau
  22. basic-birnn-seq2seq-luong
  23. lstm-birnn-seq2seq-luong
  24. gru-birnn-seq2seq-luong
  25. lstm-seq2seq-contrib-greedy-luong
  26. gru-seq2seq-contrib-greedy-luong
  27. lstm-seq2seq-contrib-greedy-bahdanau
  28. gru-seq2seq-contrib-greedy-bahdanau
  29. lstm-seq2seq-contrib-beam-luong
  30. gru-seq2seq-contrib-beam-luong
  31. lstm-seq2seq-contrib-beam-bahdanau
  32. gru-seq2seq-contrib-beam-bahdanau
  33. lstm-birnn-seq2seq-contrib-beam-bahdanau
  34. lstm-birnn-seq2seq-contrib-beam-luong
  35. gru-birnn-seq2seq-contrib-beam-bahdanau
  36. gru-birnn-seq2seq-contrib-beam-luong
  37. lstm-birnn-seq2seq-contrib-beam-luongmonotonic
  38. gru-birnn-seq2seq-contrib-beam-luongmonotic
  39. lstm-birnn-seq2seq-contrib-beam-bahdanaumonotonic
  40. gru-birnn-seq2seq-contrib-beam-bahdanaumonotic
  41. residual-lstm-seq2seq-greedy-luong
  42. residual-gru-seq2seq-greedy-luong
  43. residual-lstm-seq2seq-greedy-bahdanau
  44. residual-gru-seq2seq-greedy-bahdanau
  45. memory-network-lstm-decoder-greedy
  46. google-nmt
  47. transformer-encoder-transformer-decoder
  48. transformer-encoder-lstm-decoder-greedy
  49. bertmultilanguage-encoder-bertmultilanguage-decoder
  50. bertmultilanguage-encoder-lstm-decoder
  51. bertmultilanguage-encoder-transformer-decoder
  52. bertenglish-encoder-transformer-decoder
  53. transformer-t2t-2gpu

OCR (optical character recognition)

Complete list (2 notebooks)
  1. CNN + LSTM RNN, test accuracy 100%
  2. Im2Latex, test accuracy 100%

POS-Tagging

Trained on CONLL POS.

Complete list (8 notebooks)
  1. Bidirectional RNN + CRF, test accuracy 92%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 91%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 3%
  7. Char Ngrams + Attention Is All You Need + CRF, test accuracy 89%
  8. BERT, test accuracy 99%

Question-Answers

Trained on bAbI Dataset.

Complete list (4 notebooks)
  1. End-to-End Memory Network + Basic cell
  2. End-to-End Memory Network + GRU cell
  3. End-to-End Memory Network + LSTM cell
  4. Dynamic Memory

Sentence-pair

Trained on the Cornell Movie-Dialogs Corpus.

Complete list (1 notebook)
  1. BERT

Speech to Text

Trained on Toronto speech dataset.

Complete list (11 notebooks)
  1. Tacotron, https://github.com/Kyubyong/tacotron_asr, test accuracy 77.09%
  2. BiRNN LSTM, test accuracy 84.66%
  3. BiRNN Seq2Seq + Luong Attention + Cross Entropy, test accuracy 87.86%
  4. BiRNN Seq2Seq + Bahdanau Attention + Cross Entropy, test accuracy 89.28%
  5. BiRNN Seq2Seq + Bahdanau Attention + CTC, test accuracy 86.35%
  6. BiRNN Seq2Seq + Luong Attention + CTC, test accuracy 80.30%
  7. CNN RNN + Bahdanau Attention, test accuracy 80.23%
  8. Dilated CNN RNN, test accuracy 31.60%
  9. Wavenet, test accuracy 75.11%
  10. Deep Speech 2, test accuracy 81.40%
  11. Wav2Vec Transfer learning BiRNN LSTM, test accuracy 83.24%
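
Several of the models above are trained with CTC instead of cross entropy. In TensorFlow 1.x the CTC pieces are usually wired up roughly like this (a sketch with assumed tensor names, not taken from the notebooks):

    import tensorflow as tf

    # logits: [max_time, batch, num_classes] (time-major), seq_lens: [batch],
    # sparse_labels: tf.SparseTensor of character ids per utterance.
    def ctc_loss_and_decode(logits, sparse_labels, seq_lens):
        loss = tf.reduce_mean(
            tf.nn.ctc_loss(sparse_labels, logits, seq_lens, time_major=True))
        decoded, _ = tf.nn.ctc_greedy_decoder(logits, seq_lens)
        return loss, tf.sparse_tensor_to_dense(decoded[0], default_value=-1)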

Spelling correction

Complete list (4 notebooks)
  1. BERT-Base
  2. XLNET-Base
  3. BERT-Base Fast
  4. BERT-Base accurate

SQUAD Question-Answers

Trained on the SQuAD dataset.

Complete list (1 notebook)
  1. BERT, {"exact_match": 77.57805108798486, "f1": 86.18327335287402}

Stemming

Trained on English Lemmatization.

Complete list (6 notebooks)
  1. LSTM + Seq2Seq + Beam
  2. GRU + Seq2Seq + Beam
  3. LSTM + BiRNN + Seq2Seq + Beam
  4. GRU + BiRNN + Seq2Seq + Beam
  5. DNC + Seq2Seq + Greedy
  6. BiRNN + Bahdanau + Copynet

Text Augmentation

Complete list (8 notebooks)
  1. Pretrained Glove
  2. GRU VAE-seq2seq-beam TF-probability
  3. LSTM VAE-seq2seq-beam TF-probability
  4. GRU VAE-seq2seq-beam + Bahdanau Attention TF-probability
  5. VAE + Deterministic Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
  6. VAE + VAE Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
  7. BERT-Base + Nucleus Sampling
  8. XLNET-Base + Nucleus Sampling

Text classification

Trained on an English sentiment dataset; the accuracy table is in text-classification.

Complete list (79 notebooks)
  1. Basic cell RNN
  2. Basic cell RNN + Hinge
  3. Basic cell RNN + Huber
  4. Basic cell Bidirectional RNN
  5. Basic cell Bidirectional RNN + Hinge
  6. Basic cell Bidirectional RNN + Huber
  7. LSTM cell RNN
  8. LSTM cell RNN + Hinge
  9. LSTM cell RNN + Huber
  10. LSTM cell Bidirectional RNN
  11. LSTM cell Bidirectional RNN + Huber
  12. LSTM cell RNN + Dropout + L2
  13. GRU cell RNN
  14. GRU cell RNN + Hinge
  15. GRU cell RNN + Huber
  16. GRU cell Bidirectional RNN
  17. GRU cell Bidirectional RNN + Hinge
  18. GRU cell Bidirectional RNN + Huber
  19. LSTM RNN + Conv2D
  20. K-max Conv1d
  21. LSTM RNN + Conv1D + Highway
  22. LSTM RNN + Basic Attention
  23. LSTM Dilated RNN
  24. Layer-Norm LSTM cell RNN
  25. Only Attention Neural Network
  26. Multihead-Attention Neural Network
  27. Neural Turing Machine
  28. LSTM Seq2Seq
  29. LSTM Seq2Seq + Luong Attention
  30. LSTM Seq2Seq + Bahdanau Attention
  31. LSTM Seq2Seq + Beam Decoder
  32. LSTM Bidirectional Seq2Seq
  33. Pointer Net
  34. LSTM cell RNN + Bahdanau Attention
  35. LSTM cell RNN + Luong Attention
  36. LSTM cell RNN + Stack Bahdanau Luong Attention
  37. LSTM cell Bidirectional RNN + backward Bahdanau + forward Luong
  38. Bytenet
  39. Fast-slow LSTM
  40. Siamese Network
  41. LSTM Seq2Seq + tf.estimator
  42. Capsule layers + RNN LSTM
  43. Capsule layers + LSTM Seq2Seq
  44. Capsule layers + LSTM Bidirectional Seq2Seq
  45. Nested LSTM
  46. LSTM Seq2Seq + Highway
  47. Triplet loss + LSTM
  48. DNC (Differentiable Neural Computer)
  49. ConvLSTM
  50. Temporal Convd Net
  51. Batch-all Triplet-loss + LSTM
  52. Fast-text
  53. Gated Convolution Network
  54. Simple Recurrent Unit
  55. LSTM Hierarchical Attention Network
  56. Bidirectional Transformers
  57. Dynamic Memory Network
  58. Entity Network
  59. End-to-End Memory Network
  60. BOW-Chars Deep sparse Network
  61. Residual Network using Atrous CNN
  62. Residual Network using Atrous CNN + Bahdanau Attention
  63. Deep pyramid CNN
  64. Transformer-XL
  65. Transfer learning GPT-2 345M
  66. Quasi-RNN
  67. Tacotron
  68. Slice GRU
  69. Slice GRU + Bahdanau
  70. Wavenet
  71. Transfer learning BERT Base
  72. Transfer learning XL-net Large
  73. LSTM BiRNN global Max and average pooling
  74. Transfer learning BERT Base drop 6 layers
  75. Transfer learning BERT Large drop 12 layers
  76. Transfer learning XL-net Base
  77. Transfer learning ALBERT
  78. Transfer learning ELECTRA Base
  79. Transfer learning ELECTRA Large

Text Similarity

Trained on MNLI.

Complete list (10 notebooks)
  1. BiRNN + Contrastive loss, test accuracy 73.032%
  2. BiRNN + Cross entropy, test accuracy 74.265%
  3. BiRNN + Circle loss, test accuracy 75.857%
  4. BiRNN + Proxy loss, test accuracy 48.37%
  5. BERT Base + Cross entropy, test accuracy 91.123%
  6. BERT Base + Circle loss, test accuracy 89.903%
  7. ELECTRA Base + Cross entropy, test accuracy 96.317%
  8. ELECTRA Base + Circle loss, test accuracy 95.603%
  9. XLNET Base + Cross entropy, test accuracy 93.998%
  10. XLNET Base + Circle loss, test accuracy 94.033%

Text to Speech

Trained on Toronto speech dataset.

Complete list (8 notebooks)
  1. Tacotron, https://github.com/Kyubyong/tacotron
  2. CNN Seq2seq + Dilated CNN vocoder
  3. Seq2Seq + Bahdanau Attention
  4. Seq2Seq + Luong Attention
  5. Dilated CNN + Monotonic Attention + Dilated CNN vocoder
  6. Dilated CNN + Self Attention + Dilated CNN vocoder
  7. Deep CNN + Monotonic Attention + Dilated CNN vocoder
  8. Deep CNN + Self Attention + Dilated CNN vocoder

Topic Generator

Trained on Malaysia news.

Complete list (4 notebooks)
  1. TAT-LSTM
  2. TAV-LSTM
  3. MTA-LSTM
  4. Dilated CNN Seq2seq

Topic Modeling

Extracted from English sentiment dataset.

Complete list (3 notebooks)
  1. LDA2Vec
  2. BERT Attention
  3. XLNET Attention

Unsupervised Extractive Summarization

Trained on random books.

Complete list (3 notebooks)
  1. Skip-thought Vector
  2. Residual Network using Atrous CNN
  3. Residual Network using Atrous CNN + Bahdanau Attention

Vectorizer

Trained on English sentiment dataset.

Complete list (11 notebooks)
  1. Word Vector using CBOW sample softmax
  2. Word Vector using CBOW noise contrastive estimation
  3. Word Vector using skipgram sample softmax
  4. Word Vector using skipgram noise contrastive estimation
  5. Supervised Embedded
  6. Triplet-loss + LSTM
  7. LSTM Auto-Encoder
  8. Batch-All Triplet-loss LSTM
  9. Fast-text
  10. ELMO (biLM)
  11. Triplet-loss + BERT
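
The CBOW/skipgram notebooks above differ mainly in the loss head. The noise-contrastive-estimation variant usually looks something like this in TensorFlow 1.x (a sketch with assumed hyperparameters, not the notebooks' exact code):

    import math
    import tensorflow as tf

    vocab_size, dim, num_sampled = 50000, 128, 64

    center_words = tf.placeholder(tf.int32, [None])        # skipgram input word ids
    context_words = tf.placeholder(tf.int64, [None, 1])    # target context word ids

    embeddings = tf.get_variable('embeddings', [vocab_size, dim])
    nce_weights = tf.get_variable(
        'nce_weights', [vocab_size, dim],
        initializer=tf.truncated_normal_initializer(stddev=1.0 / math.sqrt(dim)))
    nce_biases = tf.get_variable('nce_biases', [vocab_size], initializer=tf.zeros_initializer())

    embed = tf.nn.embedding_lookup(embeddings, center_words)
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
                       labels=context_words, inputs=embed,
                       num_sampled=num_sampled, num_classes=vocab_size))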

Visualization

Complete list (4 notebooks)
  1. Attention heatmap on Bahdanau Attention
  2. Attention heatmap on Luong Attention
  3. BERT attention, https://github.com/hsm207/bert_attn_viz
  4. XLNET attention

Old-to-Young Vocoder

Trained on Toronto speech dataset.

Complete list (1 notebook)
  1. Dilated CNN

Attention

Complete list (8 notebooks)
  1. Bahdanau
  2. Luong
  3. Hierarchical
  4. Additive
  5. Soft
  6. Attention-over-Attention
  7. Bahdanau API
  8. Luong API
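
The two classic scoring functions behind most of these notebooks differ only in how the decoder query is compared with the encoder outputs. As a compact reminder (assumed shapes, independent of the notebook code):

    import tensorflow as tf

    def bahdanau_score(query, keys, units=128):
        # Additive attention: score = v^T tanh(W1 keys + W2 query).
        w_query = tf.layers.dense(tf.expand_dims(query, 1), units)    # [B, 1, units]
        w_keys = tf.layers.dense(keys, units)                         # [B, T, units]
        v = tf.get_variable('attention_v', [units])
        return tf.reduce_sum(v * tf.tanh(w_keys + w_query), axis=-1)  # [B, T]

    def luong_score(query, keys):
        # Multiplicative attention: score = keys W query.
        w_query = tf.layers.dense(query, keys.get_shape()[-1].value)  # [B, dim]
        return tf.squeeze(tf.matmul(keys, tf.expand_dims(w_query, -1)), -1)  # [B, T]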

Not-deep-learning

  1. Markov chatbot
  2. Decomposition summarization (3 notebooks)
Comments
  • Something wrong with the loss

    Hello, I have run your 'chatbot' code on a conversation dataset, but the loss seems abnormally low. The results on the DailyDialog dataset from the paper 'DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset' show a perplexity above 30 and a loss above 3, but the perplexity obtained by your code is lower than 3, which is absolutely wrong. Could you provide some advice? Thank you.

    opened by katherinelyx 6
  • missing embed_seq()

    https://github.com/huseinzol05/NLP-Models-Tensorflow/blob/master/entity-tagging/7.attention-is-all-you-need.ipynb

    def learned_position_encoding(inputs, mask, embed_dim):
        T = tf.shape(inputs)[1]
        outputs = tf.range(tf.shape(inputs)[1])               # (T_q)
        outputs = tf.expand_dims(outputs, 0)                  # (1, T_q)
        outputs = tf.tile(outputs, [tf.shape(inputs)[0], 1])  # (N, T_q)
        outputs = embed_seq(outputs, T, embed_dim, zero_pad=False, scale=False)
        return tf.expand_dims(tf.to_float(mask), -1) * outputs

    opened by ytianjao 2
  • bug in skip-thought.ipynb

    NLP-Models-Tensorflow/unsupervised-summarization/skip-thought.ipynb

    bugs:

    for i in range(5):
        pbar = tqdm(range(0, len(middle), batch_size), desc='train minibatch loop')
        for p in pbar:

    should be

    for k in range(5):
        pbar = tqdm(range(0, len(middle), batch_size), desc='train minibatch loop')
        for i in pbar:

    opened by guodata 2
  • Could you please provide the installation guide for augmentation in Speech2text parts?

    Hi, I am trying to run the examples in 'speech-to-text', but caching.ipynb needs the augmentation module. It does not seem to be the module installed by 'pip install augmentation', which lacks the 'change_pitch_speech', 'change_amplitude', ... methods. Could you please provide some info about the module? Many thanks.

    opened by iamweiweishi 1
  • Tensorflow 1.1 not compatible with cuda 9 or 10

    ImportError                               Traceback (most recent call last)
    ~/anaconda3/envs/research3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
         40 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
    ---> 41 from tensorflow.python.pywrap_tensorflow_internal import *
         42 from tensorflow.python.pywrap_tensorflow_internal import __version__

    [... frames through pywrap_tensorflow_internal.py and imp.py ...]

    ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred:

    ImportError                               Traceback (most recent call last)
    <ipython-input> in <module>()
          1 import json
          2 import numpy as np
    ----> 3 import tensorflow as tf
          4 import collections
          5 from sklearn.cross_validation import train_test_split

    [... frames through tensorflow/__init__.py, tensorflow/python/__init__.py and pywrap_tensorflow.py ...]

    ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory

    Failed to load the native TensorFlow runtime.

    See https://www.tensorflow.org/install/install_sources#common_installation_problems

    for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

    opened by makamkkumar 1
  • Could you please share example of code on how to use trained models for the text classification?

    Hi there,

    Good job on what you have done here. I would like to use the trained models from one of your classifiers (for example, GPT-2 or XL-net) on a new input. Do you have code that will help me use the trained models for inference?

    Thanks in advance.

    opened by negacy 1
  • [ASK]

    I tried using your code for lemmatization. My problem is that I can't save the model, so I have to retrain whenever I predict new data. Can you show us how to save the model? Thank you.

    opened by mfsatya 1
  • Could you code predict function transfer-learning-albert-base.ipynb

    Pardon me if my English is not good. I ran your code to train, but after that I can't write a predict function for new data. Could you help me? Thank you so much.

    opened by BinhMinhs10 0
  • 1.lstm-seq2seq-greedy.ipynb  In [17]  missing 1 required positional argument: 'maxlen'

    In [17]: def pad_sentence_batch(sentence_batch, pad_int, maxlen):

    In [18]: batch_x, _ = pad_sentence_batch(train_X[k: min(k+batch_size,len(train_X))], PAD)
             batch_y, _ = pad_sentence_batch(train_Y[k: min(k+batch_size,len(train_X))], PAD)

    error: TypeError: pad_sentence_batch() missing 1 required positional argument: 'maxlen'

    maybe:

    def pad_sentence_batch(sentence_batch, pad_int):
        padded_seqs = []
        seq_lens = []
        max_sentence_len = max([len(sentence) for sentence in sentence_batch])
        for sentence in sentence_batch:
            padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
            seq_lens.append(len(sentence))
        return padded_seqs, seq_lens

    opened by 425776024 0
  • Spelling Correction-  Shape must be rank 2 but is rank 3 for 'cls/predictions/MatMul' (op: 'MatMul') with input shapes: [?,?,768], [768,30522].

    @huseinzol05 Can you please help me solve the below error??

    Versions: Python 3.6.10, Tensorflow 1.13.1, Bert 2.2.0

    Code Source: https://github.com/huseinzol05/NLP-Models-Tensorflow/blob/master/spelling-correction/3.bert-base-fast.ipynb

    I am running the exact same code from the source link above, but I get the error below while running this chunk of code:

    Code:

    tf.reset_default_graph()
    sess = tf.InteractiveSession()
    model = Model()
    sess.run(tf.global_variables_initializer())
    var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')

    Error:

    InvalidArgumentError                      Traceback (most recent call last)
    ~/anaconda3/envs/projectenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
       1658   try:
    -> 1659     c_op = c_api.TF_FinishOperation(op_desc)
       1660   except errors.InvalidArgumentError as e:

    InvalidArgumentError: Shape must be rank 2 but is rank 3 for 'cls/predictions/MatMul' (op: 'MatMul') with input shapes: [?,?,768], [768,30522].

    During handling of the above exception, another exception occurred:

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>
          1 tf.reset_default_graph()
          2 sess = tf.InteractiveSession()
    ----> 3 model = Model()
          4
          5 sess.run(tf.global_variables_initializer())

    <ipython-input> in __init__(self)
         32     initializer = tf.zeros_initializer(),
         33 )
    ---> 34 logits = tf.matmul(input_tensor, tf.transpose(embedding))
         35 self.logits = tf.nn.bias_add(logits, output_bias)

    [... frames through math_ops.py, gen_math_ops.py, op_def_library.py and ops.py ...]

    ValueError: Shape must be rank 2 but is rank 3 for 'cls/predictions/MatMul' (op: 'MatMul') with input shapes: [?,?,768], [768,30522].


    opened by HarshithaMG 3
  • embedded for data?

    Hello, can you describe this data? I have data like {a, b, c}, where each element is an article, and I am told that the similarity of a and b is greater than that of a and c (distance(a, b) > distance(a, c)). I know I should use triplet loss, but I don't know how to map my data onto your positive and negative data.

    opened by huangxiancun 1
Owner
HUSEIN ZOLKEPLI
I really love to fart and pick my nose.