NLP and Text Generation Experiments in TensorFlow 2.x / 1.x

Overview
	The code has been run on Google Colab; thanks to Google for providing computational resources.

Contents


Text Classification

└── finch/tensorflow2/text_classification/imdb
	│
	├── data
	│   └── glove.840B.300d.txt          # pretrained embedding, download and put here
	│   └── make_data.ipynb              # step 1. make data and vocab: train.txt, test.txt, word.txt
	│   └── train.txt  		     # incomplete sample, format <label, text> separated by \t 
	│   └── test.txt   		     # incomplete sample, format <label, text> separated by \t
	│   └── train_bt_part1.txt  	     # (back-translated) incomplete sample, format <label, text> separated by \t
	│
	├── vocab
	│   └── word.txt                     # incomplete sample, list of words in vocabulary
	│	
	└── main
		└── sliced_rnn.ipynb         # step 2: train and evaluate model
		└── ...
└── finch/tensorflow2/text_classification/clue
	│
	├── data
	│   └── make_data.ipynb              # step 1. make data and vocab
	│   └── train.txt  		     # download from clue benchmark
	│   └── test.txt   		     # download from clue benchmark
	│
	├── vocab
	│   └── label.txt                    # list of emotion labels
	│	
	└── main
		└── bert_finetune.ipynb      # step 2: train and evaluate model
		└── ...
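
For the imdb task, train.txt and test.txt hold tab-separated <label, text> pairs. Below is a minimal sketch of streaming such a file into padded batches with tf.data; it is an illustration rather than the repo's exact input code, and the integer string labels and whitespace tokenisation are assumptions.

    import tensorflow as tf  # TensorFlow 2.x

    def make_dataset(data_path, vocab_path, batch_size=32):
        # word.txt lists one vocabulary word per line; unknown words share an OOV bucket
        table = tf.lookup.StaticVocabularyTable(
            tf.lookup.TextFileInitializer(
                vocab_path,
                key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
                value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER),
            num_oov_buckets=1)

        def parse(line):
            parts = tf.strings.split(line, '\t', maxsplit=1)   # "<label>\t<text>"
            label = tf.strings.to_number(parts[0], tf.int32)
            tokens = tf.strings.split(parts[1])                # whitespace tokenisation
            return table.lookup(tokens), label

        return (tf.data.TextLineDataset(data_path)
                .map(parse, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                .padded_batch(batch_size, padded_shapes=([None], []))
                .prefetch(tf.data.experimental.AUTOTUNE))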

Text Matching

└── finch/tensorflow2/text_matching/snli
	│
	├── data
	│   └── glove.840B.300d.txt       # pretrained embedding, download and put here
	│   └── download_data.ipynb       # step 1. run this to download snli dataset
	│   └── make_data.ipynb           # step 2. run this to generate train.txt, test.txt, word.txt 
	│   └── train.txt  		  # incomplete sample, format <label, text1, text2> separated by \t 
	│   └── test.txt   		  # incomplete sample, format <label, text1, text2> separated by \t
	│
	├── vocab
	│   └── word.txt                  # incomplete sample, list of words in vocabulary
	│	
	└── main              
		└── dam.ipynb      	  # step 3. train and evaluate model
		└── esim.ipynb      	  # step 3. train and evaluate model
		└── ......
└── finch/tensorflow2/text_matching/chinese
	│
	├── data
	│   └── make_data.ipynb           # step 1. run this to generate char.txt and char.npy
	│   └── train.csv  		  # incomplete sample, format <text1, text2, label> separated by comma 
	│   └── test.csv   		  # incomplete sample, format <text1, text2, label> separated by comma
	│
	├── vocab
	│   └── cc.zh.300.vec             # pretrained embedding, download and put here
	│   └── char.txt                  # incomplete sample, list of chinese characters
	│   └── char.npy                  # saved pretrained embedding matrix for this task
	│	
	└── main              
		└── pyramid.ipynb      	  # step 2. train and evaluate model
		└── esim.ipynb      	  # step 2. train and evaluate model
		└── ......
└── finch/tensorflow2/text_matching/ant
	│
	├── data
	│   └── make_data.ipynb           # step 1. run this to generate char.txt and char.npy
	│   └── train.json           	  # incomplete sample, json fields <text1, text2, label>
	│   └── dev.json   		  # incomplete sample, json fields <text1, text2, label>
	│
	├── vocab
	│   └── cc.zh.300.vec             # pretrained embedding, download and put here
	│   └── char.txt                  # incomplete sample, list of chinese characters
	│   └── char.npy                  # saved pretrained embedding matrix for this task
	│	
	└── main              
		└── pyramid.ipynb      	  # step 2. train and evaluate model
		└── bert.ipynb      	  # step 2. train and evaluate model
		└── ......
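
In both Chinese matching tasks, make_data.ipynb aligns the fastText vectors in cc.zh.300.vec with the characters listed in char.txt and caches the result as char.npy. A minimal sketch of that step is below; the notebook's exact logic, the file paths, and the random initialisation for characters missing from fastText are assumptions.

    import numpy as np

    def build_char_embedding(vec_path='../vocab/cc.zh.300.vec',
                             vocab_path='../vocab/char.txt',
                             out_path='../vocab/char.npy',
                             dim=300):
        vectors = {}
        with open(vec_path, encoding='utf-8') as f:
            next(f)                                    # skip the "<count> <dim>" header
            for line in f:
                fields = line.rstrip().split(' ')
                vectors[fields[0]] = np.asarray(fields[1:], dtype=np.float32)

        chars = [line.rstrip('\n') for line in open(vocab_path, encoding='utf-8')]
        matrix = np.stack([
            vectors.get(c, np.random.uniform(-0.05, 0.05, dim).astype(np.float32))
            for c in chars])
        np.save(out_path, matrix)                      # loaded later as the embedding init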

Intent Detection and Slot Filling

└── finch/tensorflow2/spoken_language_understanding/atis
	│
	├── data
	│   └── glove.840B.300d.txt           # pretrained embedding, download and put here
	│   └── make_data.ipynb               # step 1. run this to generate vocab: word.txt, intent.txt, slot.txt 
	│   └── atis.train.w-intent.iob       # incomplete sample, format <text, slot, intent>
	│   └── atis.test.w-intent.iob        # incomplete sample, format <text, slot, intent>
	│
	├── vocab
	│   └── word.txt                      # list of words in vocabulary
	│   └── intent.txt                    # list of intents in vocabulary
	│   └── slot.txt                      # list of slots in vocabulary
	│	
	└── main              
		└── bigru_clr.ipynb               # step 2. train and evaluate model
		└── ...
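
Each line of the .iob files pairs an utterance with its slot tags and intent. A hedged sketch of parsing one line, assuming the common ATIS release layout (words and tags tab-separated, BOS/EOS markers wrapping the utterance, and the final tag carrying the intent):

    def parse_iob_line(line):
        words, tags = line.rstrip('\n').split('\t')
        words = words.split()[1:-1]            # drop the BOS / EOS markers
        tags = tags.split()
        slots, intent = tags[1:-1], tags[-1]   # slot tags align with the words
        assert len(words) == len(slots)
        return words, slots, intent

    # parse_iob_line('BOS flights from boston EOS\tO O O B-fromloc.city_name atis_flight')
    # -> (['flights', 'from', 'boston'], ['O', 'O', 'B-fromloc.city_name'], 'atis_flight')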

Retrieval Dialog


Semantic Parsing

└── finch/tensorflow2/semantic_parsing/tree_slu
	│
	├── data
	│   └── glove.840B.300d.txt     	# pretrained embedding, download and put here
	│   └── make_data.ipynb           	# step 1. run this to generate vocab: word.txt, intent.txt, slot.txt 
	│   └── train.tsv   		  	# incomplete sample, format <text, tokenized_text, tree>
	│   └── test.tsv    		  	# incomplete sample, format <text, tokenized_text, tree>
	│
	├── vocab
	│   └── source.txt                	# list of words in vocabulary for source (of seq2seq)
	│   └── target.txt                	# list of words in vocabulary for target (of seq2seq)
	│	
	└── main
		└── lstm_seq2seq_tf_addons.ipynb           # step 2. train and evaluate model
		└── ......
		

Knowledge Graph Completion

└── finch/tensorflow2/knowledge_graph_completion/wn18
	│
	├── data
	│   └── download_data.ipynb       	# step 1. run this to download wn18 dataset
	│   └── make_data.ipynb           	# step 2. run this to generate vocabulary: entity.txt, relation.txt
	│   └── wn18  		          	# wn18 folder (will be auto created by download_data.ipynb)
	│   	└── train.txt  		  	# incomplete sample, format <entity1, relation, entity2> separated by \t
	│   	└── valid.txt  		  	# incomplete sample, format <entity1, relation, entity2> separated by \t 
	│   	└── test.txt   		  	# incomplete sample, format <entity1, relation, entity2> separated by \t
	│
	├── vocab
	│   └── entity.txt                  	# incomplete sample, list of entities in vocabulary
	│   └── relation.txt                	# incomplete sample, list of relations in vocabulary
	│	
	└── main              
		└── distmult_1-N.ipynb    	# step 3. train and evaluate model
		└── ...
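
distmult_1-N.ipynb names the approach: DistMult scoring, score(s, r, o) = sum_k e_s[k] * w_r[k] * e_o[k], trained with 1-N scoring so each (subject, relation) pair is scored against every entity at once. A minimal sketch of the idea; the hyper-parameters and loss details here are assumptions, not the notebook's exact choices.

    import tensorflow as tf  # TensorFlow 2.x

    class DistMult(tf.keras.Model):
        def __init__(self, n_entities, n_relations, dim=200):
            super().__init__()
            self.ent = tf.keras.layers.Embedding(n_entities, dim)
            self.rel = tf.keras.layers.Embedding(n_relations, dim)

        def call(self, subj, rel):
            sr = self.ent(subj) * self.rel(rel)        # (batch, dim)
            # score every entity as the object in one matmul: (batch, n_entities)
            return tf.matmul(sr, self.ent.embeddings, transpose_b=True)

    # 1-N training compares the logits with a multi-hot target over all entities:
    # loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=multi_hot, logits=model(s, r))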

Knowledge Base Question Answering


Multi-hop Question Answering

└── finch/tensorflow1/question_answering/babi
	│
	├── data
	│   └── make_data.ipynb           		# step 1. run this to generate vocabulary: word.txt 
	│   └── qa5_three-arg-relations_train.txt       # one complete example of babi dataset
	│   └── qa5_three-arg-relations_test.txt	# one complete example of babi dataset
	│
	├── vocab
	│   └── word.txt                  		# complete list of words in vocabulary
	│	
	└── main              
		└── dmn_train.ipynb
		└── dmn_serve.ipynb
		└── attn_gru_cell.py

Text Visualization


Recommender System

└── finch/tensorflow1/recommender/movielens
	│
	├── data
	│   └── make_data.ipynb           		# run this to generate vocabulary
	│
	├── vocab
	│   └── user_job.txt
	│   └── user_id.txt
	│   └── user_gender.txt
	│   └── user_age.txt
	│   └── movie_types.txt
	│   └── movie_title.txt
	│   └── movie_id.txt
	│	
	└── main              
		└── dnn_softmax.ipynb
		└── ......

Multi-turn Dialogue Rewriting

└── finch/tensorflow1/multi_turn_rewrite/chinese/
	│
	├── data
	│   └── make_data.ipynb         # run this to generate vocab, split train & test data, make pretrained embedding
	│   └── corpus.txt		# original data downloaded from an external source
	│   └── train_pos.txt		# processed positive training data after {make_data.ipynb}
	│   └── train_neg.txt		# processed negative training data after {make_data.ipynb}
	│   └── test_pos.txt		# processed positive testing data after {make_data.ipynb}
	│   └── test_neg.txt		# processed negative testing data after {make_data.ipynb}
	│
	├── vocab
	│   └── cc.zh.300.vec		# fastText pretrained embedding downloaded from an external source
	│   └── char.npy		# chinese characters and their embedding values (300 dim)	
	│   └── char.txt		# list of chinese characters used in this project 
	│	
	└── main              
		└── baseline_lstm_train.ipynb
		└── baseline_lstm_predict.ipynb
		└── ...

Generative Dialog

└── finch/tensorflow1/free_chat/chinese_lccc
	│
	├── data
	│   └── LCCC-base.json           	# raw data downloaded from an external source
	│   └── LCCC-base_test.json         # raw data downloaded from an external source
	│   └── make_data.ipynb           	# step 1. run this to generate vocab {char.txt} and data {train.txt & test.txt}
	│   └── train.txt           		# processed text file generated by {make_data.ipynb}
	│   └── test.txt           			# processed text file generated by {make_data.ipynb}
	│
	├── vocab
	│   └── char.txt                	# list of chars in vocabulary for chinese
	│   └── cc.zh.300.vec			# fastText pretrained embedding downloaded from an external source
	│   └── char.npy			# chinese characters and their embedding values (300 dim)	
	│	
	└── main
		└── lstm_seq2seq_train.ipynb    # step 2. train and evaluate model
		└── lstm_seq2seq_infer.ipynb    # step 4. model inference
		└── ...
  • Task: Large-scale Chinese Conversation Dataset

      Training data: 5,000,000 examples (sub-sampled due to limited memory); testing data: 19,008 examples
    
    • Data

    • Model

      Code         Model                                    Env   Test Case    Perplexity
      <Notebook>   Transformer Encoder + LSTM Generator     TF1   <Notebook>   42.465
      <Notebook>   LSTM Encoder + LSTM Generator            TF1   <Notebook>   41.250
      <Notebook>   LSTM Encoder + LSTM Pointer-Generator    TF1   <Notebook>   36.525
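
      The perplexity column is the usual exponential of the mean per-token cross-entropy on the test set; a mean cross-entropy of about 3.75 nats, for instance, gives exp(3.75) ≈ 42.5, the order of the first row. A hedged sketch of the computation with padding masked out (the names here are illustrative, not the notebooks' own):

          def perplexity(logits, targets, pad_id=0):   # logits: (batch, time, vocab)
              mask = tf.cast(tf.not_equal(targets, pad_id), tf.float32)
              xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
                  labels=targets, logits=logits)       # (batch, time)
              return tf.exp(tf.reduce_sum(xent * mask) / tf.reduce_sum(mask))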
    • If you want to deploy the model in Java production:

       └── FreeChatInference
       	│
       	├── data
       	│   └── transformer_export/
       	│   └── char.txt
       	│   └── libtensorflow-1.14.0.jar
       	│   └── tensorflow_jni.dll
       	│
       	└── src              
       		└── ModelInference.java
      
      • <Notebook> Java Inference

      • If you don't know the input and output node names needed on the Java side, you can display them with:

         !saved_model_cli show --dir ../model/xxx/1587959473/ --tag_set serve --signature_def serving_default
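
      The same information can be read from Python before wiring up the Java side; a hedged sketch using the TF 1.x contrib predictor (the path placeholder mirrors the command above):

          from tensorflow.contrib import predictor  # TF 1.x

          pred_fn = predictor.from_saved_model('../model/xxx/1587959473/')
          print(pred_fn.feed_tensors)   # input node names -> tensors
          print(pred_fn.fetch_tensors)  # output node names -> tensors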
        
    • Large Pre-trained GPT

      Test Case    Model            Env                Author
      <Notebook>   GPT LCCC base    TF1 + bert4keras   Tsinghua University
      <Notebook>   GPT LCCC large   TF1 + bert4keras   Tsinghua University
      <Notebook>   NEZHA LM         TF1 + bert4keras   Jianlin Su
    • Diverse Response Comparison

      Question / Model replies, in order: Our RNN, GPT base, GPT large, NEZHA
      Q: 你好 A: 你好 A: 你好 A: 你好,你好,我是大好人。 A: 您好
      Q: 早上好 A: 早上好 A: 早上好,好久不见 A: 早上好! A: 早啊早啊
      Q: 晚上好 A: 晚上好 A: 你好 A: 好久不见 A: 晚上好!
      Q: 再见 A: 再见 A: 不见 A: 你是不是不爱我了 A: 我要走了
      Q: 好久不见 A: 好久不见 A: 是呀,好久不见。 A: 是啊,好久不见。 A: 是啊,最近怎么样啊
      Q: 想死你了 A: 我也是 A: 我也想你, A: 我不信 A: 哈哈哈哈哈
      Q: 谢谢你 A: 不客气 A: 不用谢 A: 不用 A: 不谢不谢,
      Q: 爱你 A: 爱你 A: 你是爱我 A: 我也爱你, A: 我也爱你,你最美
      Q: 你叫什么名字 A: 你猜 A: 我也想知道 A: 叫你 A: 你是不是想说我的名字太俗了
      Q: 你几岁了 A: 问女孩子年龄可不是绅士所为哦 ~ A: 你猜 A: 你猜 A: 我不知道
      Q: 现在几点了 A: 11点 A: 你说呢? A: 十二点 A: 我也不知道啊
      Q: 今天天气怎么样 A: 下雨了 A: 今天很冷,但是我还是觉得很冷 A: 很好 A: 不错
      Q: 我们现在在哪里 A: 在这里 A: 我在这里等你呢 A: 我不知道你在说什么 A: 在我们宿舍楼下。。。
      Q: 你能给我讲个笑话吗 A: 不能 A: 讲个笑话 A: 讲给谁听 A: 不能
      Q: 你是男孩还是女孩呀 A: 我是萌妹纸 A: 你猜 A: 你猜呀 A: 我是萌妹纸
      Q: 你会几种语言呀 A: 你猜 A: 不会 A: 你猜呀~ A: 只会说一句
      Q: 你能陪我玩吗 A: 可以啊 A: 可以 A: 我可以啊 A: 你猜
      Q: 说话可以大声一点吗 A: 可以 A: 可以的 A: 不要 A: 我不
      Q: 天气真好 A: 是的 A: 是啊! A: 天气好是因为有你 A: 是啊。
      Q: 天气太糟糕了 A: 是啊 A: 是啊 A: 是啊 A: 是的
      Q: 下雨了 A: 是的 A: 你们那边也下雨了 A: 是呀 A: 我也是
      Q: 雨好大 A: 今天下雨了 A: 是啊 A: 是的,好大雨 A: 对呀,我们还穿短袖
      Q: 我讨厌艳阳天 A: 我也讨厌艳阳天 A: 我也是。 A: 我喜欢阴天 A: 我喜欢
      Q: 好晒啊 A: 是啊 A: 是啊 A: 是啊!! A: 是啊,不是很热的
      Q: 今天好冷 A: 多穿点衣服 A: 你们都不来 A: 你那儿也冷了 A: 是啊。
      Q: 今天好热 A: 今天热死了 A: 热死我了 A: 对啊 A: 对啊,我都热疯
      Q: 风好大 A: 是啊 A: 风太大了。。。 A: 你不是在上课嘛 A: 嗯嗯嗯,风好大,冷
      Q: 雾太大了看不清路 A: 雾太大了 A: 哈哈,你这是在逗我笑 A: 我也看不清 A: 你是不是傻啊
      Q: 打雷了好可怕 A: 可怕 A: 我们这边也是,我们这边下雨了,我们这边也没有下雨。 A: 是啊 A: 我们这边下雨了
      Q: 下雪了诶 A: 下雪了? A: 是滴! A: 嗯嗯嗯!!好大的雪!! A: 下了一天了
      Q: 好烦啊 A: 我也好烦 A: 我也是 A: 我好烦 A: 我也是
      Q: 好开心 A: 开心就好 A: 你是不是想我了 A: 是呀 A: 哈哈哈,我也是!
      Q: 太激动了 A: 激动什么 A: 我也激动 A: 哈哈,还是不要了 A: 哈哈
      Q: 我好难过 A: 怎么了? A: 我也是,我们都是 A: 不要难过,我陪你 A: 你怎么难过
      Q: 我想哭 A: 抱抱 A: 抱紧你 A: 我不想哭!你不要想太多! A: 哭吧哭吧,我陪你哭
      Q: 太好笑了 A: 笑死我了 A: 你也笑了? A: 我也觉得 A: 哈哈
      Q: 我好伤心 A: 怎么了 A: 你伤心什么 A: 我也是 A: 你是不是想笑死我!!下次再给你拍
      Q: 心好痛 A: 抱抱 A: 你心痛什么 A: 心碎一地了。 A: 怎么了
      Q: 好累啊 A: 加油 A: 你不会还是一个人在寝室吧 A: 好好休息 A: 累了就睡觉
      Q: 我好疲惫 A: 我也是 A: 我也差不多。。 A: 我好困 A: 你不是一直都很疲惫么
      Q: 我爱你 A: 我也爱你 A: 我也是 A: 我们都爱你 A: 爱我就来见面
      Q: 我讨厌你 A: 我也讨厌你 A: 你不是一直很喜欢我 A: 不是我的 A: 我也讨厌你
      Q: 你真是太棒啦 A: 谢谢 A: 你最棒 A: 你也棒棒哒 A: 谢谢!
      Q: 你好厉害啊 A: 哈哈 A: 是不是 A: 你也不差呀 A: 你也可以的
      Q: 吓死我了 A: 哈哈 A: 我都不敢出门了 A: 哈哈哈哈哈!!! A: 哈哈哈哈哈笑死我了
      Q: 我想回家 A: 我也想回家 A: 我也想!! A: 想家的时候回来,想家的时候离开。 A: 回来吧,家真的好啊!
      Q: 我想爸妈了 A: 我也想爸妈 A: 哈哈 A: 我也想 A: 想我吗
      Q: 不知道小孩在家有没有听话 A: 我也不知道 A: 没有 A: 听话的话肯定是会听话的。 A: 我也是听不懂啊
      Q: 想回家撸猫 A: 我也想回家 A: 你也想啊? A: 我们这也有一个 A: 回呀回呀
Comments
  • In the VAE code, how to use LSTMCell?

    When I change the GRUCell to LSTMCell, I get this error: ValueError: Shape must be rank 2 but is rank 3 for 'decoding/decoder/concat' (op: 'ConcatV2') with input shapes: [?,128], [2,?,20], [].

    Please help, thank you~

    opened by shaomai00 7
  • Is the function "add_encoder_layer" in "seq2seq_ultimate.py" correct?

    Hi, I have a question:

    A code snippet in seq2seq_ultimate.py (function "add_encoder_layer") may be in an incorrect position:

        bi_state_c = tf.concat((state_fw.c, state_bw.c), -1)
        bi_state_h = tf.concat((state_fw.h, state_bw.h), -1)
        bi_lstm_state = tf.nn.rnn_cell.LSTMStateTuple(c=bi_state_c, h=bi_state_h)
        self.encoder_state = tuple([bi_lstm_state] * self.n_layers)

    opened by cdj0311 4
  • CBOW code

    In the CBOW code, when I run

        estimator.train(tf.estimator.inputs.numpy_input_fn(
            x_train, np.expand_dims(y_train, -1),
            batch_size = PARAMS['batch_size'],
            num_epochs = PARAMS['n_epochs'],
            shuffle = True))

    it raises:

        Traceback (most recent call last):
          File "D:/pythonWorkSpace/AAB/tensorflow-CBOW.py", line 112, in <module>
            shuffle = True))
          File "E:\Anaconda\lib\site-packages\tensorflow\python\estimator\estimator.py", line 241, in train
            loss = self._train_model(input_fn=input_fn, hooks=hooks)
          File "E:\Anaconda\lib\site-packages\tensorflow\python\estimator\estimator.py", line 558, in _train_model
            features, labels = input_fn()
          File "E:\Anaconda\lib\site-packages\tensorflow\python\estimator\inputs\numpy_io.py", line 98, in input_fn
            raise TypeError('x must be dict; got {}'.format(type(x).__name__))
        TypeError: x must be dict; got ndarray

    Would you help me solve this problem? I have seen that you can run this code correctly.

    opened by ZuoxiYang 3
  • Two questions for "CLUE Emotion Analysis"

    Hi, I have two questions to ask you:

    1. In text = ['[CLS]'] + text + ['[SEP]'], why is the text not tokenized first, e.g. text = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']?
    2. In BertFinetune, why x = x[1]?
    opened by CoderBinGe 2
  • Using baseline_lstm_train_clr to predict raises an error; how to fix it?

    ValueError: Shape must be rank 2 but is rank 3 for 'Decoder/decoder/while/BeamSearchDecoderStep/tied_dense/MatMul' (op: 'MatMul') with input shapes: [?,10,300], [3853,300].

    opened by hbwzhsh 2
  • The reconstruction performance of "Learning to Reconstruct"

    Hello. You used a VAE to reconstruct sentences from IMDB. Judging from your results, I think the reconstruction performance is not good: the reconstructed sentences differ greatly from the original ones. I am new to NLP, so I want to know whether complete reconstruction can be achieved with existing models. Can you give me some hints about what causes the poor reconstruction? Is it due to the simple model, a lack of training, or something else? Thank you.

    opened by zyj008 2
  • Attention is all you need

    I am trying to modify your code to fit an English dataset. I modified DataLoader.py and added English word segmentation, but when I train the model, I get an error.

        INFO:tensorflow:loss = 7.306507, step = 0
        INFO:tensorflow:lr = 0.001
        ERROR:tensorflow:Model diverged with loss = NaN.
        Traceback (most recent call last):
          File "d:\CDisk\Documents\GitHub\finch\src_nlp\tensorflow\attn_is_all_u_need\train_dialog.py", line 36, in <module>
            main()
          File "d:\CDisk\Documents\GitHub\finch\src_nlp\tensorflow\attn_is_all_u_need\train_dialog.py", line 30, in main
            shuffle = True))
          ...
          File "C:\Users\89534\AppData\Local\conda\conda\envs\tf\lib\site-packages\tensorflow\python\training\basic_session_run_hooks.py", line 753, in after_run
            raise NanLossDuringTrainingError
        tensorflow.python.training.basic_session_run_hooks.NanLossDuringTrainingError: NaN loss during training.

    The code I am modifying may have a problem. How can I modify it so that it can train on the English corpus? Thanks. I am writing a chatbot for the first time.

    opened by Kiteflyingee 1