Neural network sequence labeling model

Overview

This is a neural network sequence labeling system. Given a sequence of tokens, it learns to assign a label to each token. It can be used for named entity recognition, POS tagging, error detection, chunking, CCG supertagging, etc.

The main model implements a bidirectional LSTM for sequence tagging. In addition, character-level information can be incorporated: either by concatenating a character-based representation with the word embedding, or by combining the two through an attention/gating mechanism.
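
As a rough illustration of the attention/gating option, here is a minimal TensorFlow sketch. It is not the repository's actual code: the variable names and sizes are made up, and a single dense layer stands in for the gating network of Rei et al. (2016).

    import tensorflow as tf

    # Shapes are (batch, time, dim); this assumes the character-based
    # representation has already been projected to the word embedding size.
    word_emb = tf.placeholder(tf.float32, [None, None, 300])
    char_repr = tf.placeholder(tf.float32, [None, None, 300])

    # A learned sigmoid gate decides, per dimension, how much to take
    # from the word embedding and how much from the character model.
    gate = tf.sigmoid(tf.layers.dense(tf.concat([word_emb, char_repr], axis=-1), 300))
    combined = gate * word_emb + (1.0 - gate) * char_repr

    # The "concat" integration option would instead be:
    # combined = tf.concat([word_emb, char_repr], axis=-1)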

Run with:

python experiment.py config.conf

Preferably run this with TensorFlow set up to use CUDA, so the process can train on a GPU. The script will train the model on the training data, evaluate it on the test data, and print various evaluation metrics.
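
For example, to restrict the run to a specific GPU (the environment variable is standard CUDA; the config filename is whatever you use):

    CUDA_VISIBLE_DEVICES=0 python experiment.py config.conf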

Note: The original sequence labeler was implemented in Theano, but since Theano development is being discontinued, I have reimplemented it in TensorFlow. I also took the chance to refactor the code a bit, and it should be better in every way. However, if you need the specific code used in previously published papers, you'll need to refer to the older commits.

Requirements

  • python (tested with 2.7.12 and 3.5.2)
  • numpy (tested with 1.13.3 and 1.14.0)
  • tensorflow (tested with 1.3.0 and 1.4.1)

Data format

The training and test data is expected in standard CoNLL-type tab-separated format: one word per line, with separate columns for the token and its label, and an empty line between sentences.

For error detection, this would be something like:

I       c
saws    i
the     c
show    c

The first column is assumed to be the token and the last column is the label. There can be other columns in the middle, which are currently not used. For example:

EU      NNP     I-NP    S-ORG
rejects VBZ     I-VP    O
German  JJ      I-NP    S-MISC
call    NN      I-NP    O
to      TO      I-VP    O
boycott VB      I-VP    O
British JJ      I-NP    S-MISC
lamb    NN      I-NP    O
.       .       O       O
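
Reading this format is straightforward. The following is a minimal sketch (not the repository's actual data loader) that yields each sentence as a list of (token, label) pairs:

    # Minimal CoNLL-style reader: first column is the token, last column
    # is the label, and empty lines separate sentences.
    def read_conll(path):
        sentence = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:                # empty line ends the current sentence
                    if sentence:
                        yield sentence
                        sentence = []
                else:
                    columns = line.split()
                    sentence.append((columns[0], columns[-1]))
        if sentence:                        # handle a missing trailing empty line
            yield sentence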

Configuration

Edit the values in config.conf as needed:

  • path_train - Path to the training data, in CoNLL tab-separated format. One word per line, first column is the word, last column is the label. Empty lines between sentences.
  • path_dev - Path to the development data, used for choosing the best epoch.
  • path_test - Path to the test file. Can contain multiple files, colon separated.
  • conll_eval - Whether the standard CoNLL NER evaluation should be run.
  • main_label - The output label for which precision/recall/F-measure are calculated. Does not affect accuracy or measures from the CoNLL eval.
  • model_selector - What is measured on the dev set for model selection: "dev_conll_f:high" for NER and chunking, "dev_acc:high" for POS-tagging, "dev_f05:high" for error detection.
  • preload_vectors - Path to the pretrained word embeddings, in word2vec plain text format. If your embeddings are in binary, you can use convertvec to convert them to plain text.
  • word_embedding_size - Size of the word embeddings used in the model.
  • crf_on_top - If True, use a CRF as the output layer. If False, use softmax instead.
  • emb_initial_zero - Whether word embeddings should have zero initialisation by default.
  • train_embeddings - Whether word embeddings should be updated during training.
  • char_embedding_size - Size of the character embeddings.
  • word_recurrent_size - Size of the word-level LSTM hidden layers.
  • char_recurrent_size - Size of the char-level LSTM hidden layers.
  • hidden_layer_size - Size of the extra hidden layer on top of the bi-LSTM.
  • char_hidden_layer_size - Size of the extra hidden layer on top of the character-based component.
  • lowercase - Whether words should be lowercased when mapping to word embeddings.
  • replace_digits - Whether all digits should be replaced by 0.
  • min_word_freq - Minimal frequency of words to be included in the vocabulary. Others will be considered OOV.
  • singletons_prob - The probability with which words that occur only once in the training data are mapped to OOV during training.
  • allowed_word_length - Maximum allowed word length; longer words are clipped. Can be necessary if the text contains unreasonably long tokens, e.g. URLs.
  • max_train_sent_length - Discard sentences longer than this limit when training.
  • vocab_include_devtest - Load words from dev and test sets also into the vocabulary. If they don't appear in the training set, they will have the default representations from the preloaded embeddings.
  • vocab_only_embedded - Whether the vocabulary should contain only words in the pretrained embedding set.
  • initializer - The method used to initialize weight matrices in the network.
  • opt_strategy - The method used for weight updates.
  • learningrate - Learning rate.
  • clip - Clip the gradient to a range.
  • batch_equal_size - Create batches of sentences with equal length.
  • epochs - Maximum number of epochs to run.
  • stop_if_no_improvement_for_epochs - Training will be stopped if there has been no improvement for n epochs.
  • learningrate_decay - If performance hasn't improved for 3 epochs, multiply the learning rate with this value.
  • dropout_input - The probability for applying dropout to the word representations. 0.0 means no dropout.
  • dropout_word_lstm - The probability for applying dropout to the LSTM outputs.
  • tf_per_process_gpu_memory_fraction - The fraction of GPU memory that the process can use.
  • tf_allow_growth - Whether the GPU memory usage can grow dynamically.
  • main_cost - Control the weight of the main labeling objective.
  • lmcost_max_vocab_size - Maximum vocabulary size for the language modeling loss. The remaining words are mapped to a single entry.
  • lmcost_hidden_layer_size - Hidden layer size for the language modeling loss.
  • lmcost_gamma - Weight for the language modeling loss.
  • char_integration_method - How character information is integrated. Options are: "none" (not integrated), "concat" (concatenated), "attention" (the method proposed in Rei et al. (2016)).
  • save - Path to save the model.
  • load - Path to load the model.
  • garbage_collection - Whether garbage collection is explicitly called. Makes things slower but can operate with bigger models.
  • lstm_use_peepholes - Whether to use the LSTM implementation with peepholes.
  • random_seed - Random seed for initialisation and data shuffling. This can affect results, so for robust conclusions I recommend running multiple experiments with different seeds and averaging the metrics.
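
For reference, the config file is a plain INI-style file with a [config] section (the section name is visible in the configparser traceback quoted in the comments below). The excerpt here is only illustrative: the option names are the ones listed above, but the paths and values are hypothetical.

    [config]
    path_train = data/train.conll
    path_dev = data/dev.conll
    path_test = data/test.conll
    main_label = i
    model_selector = dev_f05:high
    preload_vectors = embeddings/word2vec_vectors.txt
    word_embedding_size = 300
    char_embedding_size = 50
    word_recurrent_size = 300
    char_recurrent_size = 100
    crf_on_top = True
    lowercase = True
    replace_digits = True
    learningrate = 1.0
    epochs = 100
    random_seed = 42
    save = saved_model.model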

Printing output

There is now a separate script for loading a saved model and using it to print output for a given input file. Use the save option in the config file to save the model. The input file needs to be in the same format as the training data (one word per line, labels in a separate column). A label column is required even when only printing output; if you don't know the correct labels, just put any valid label in that field.

To print the output, run:

python print_output.py labels model_file input_file

This will print the input file to standard output, with an extra column at the end that shows the prediction.
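
For the error detection example above, the output would then look something like this (the predictions in the last column are of course model-dependent and only illustrative):

    I       c       c
    saws    i       i
    the     c       c
    show    c       c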

You can also use:

python print_output.py probs model_file input_file

This will print the individual probabilities for each of the possible labels. If the model uses a CRF, the probs option will output unnormalised state scores that do not take the transitions into account.

References

The main sequence labeling model is described here:

Compositional Sequence Labeling Models for Error Detection in Learner Writing
Marek Rei and Helen Yannakoudakis
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-2016)

The character-level component is described here:

Attending to characters in neural sequence labeling models
Marek Rei, Gamal K.O. Crichton and Sampo Pyysalo
In Proceedings of the 26th International Conference on Computational Linguistics (COLING-2016)

The language modeling objective is described here:

Semi-supervised Multitask Learning for Sequence Labeling
Marek Rei
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-2017)

The CRF implementation is based on:

Neural Architectures for Named Entity Recognition
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami and Chris Dyer
In Proceedings of NAACL-HLT 2016

The conlleval.py script is from: https://github.com/spyysalo/conlleval.py

License

The code is distributed under the Affero General Public License 3 (AGPL-3.0) by default. If you wish to use it under a different license, feel free to get in touch.

Copyright (c) 2018 Marek Rei

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

Comments
  • a problem about slicing the tensor

    Hi,

    I don't know why this bug happened; it seems like the tensor cannot be sliced by M[:, slice_num*l:(slice_num+1)*l]:

    File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/sequence_labeling_experiment.py", line 300, in run_experiment sequencelabeler = SequenceLabeler(config) File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/sequence_labeler.py", line 42, in init char_output_tensor = recurrence.create_birnn(char_input_tensor, config["char_embedding_size"], char_mask_reshaped, config["char_recurrent_size"], return_combined=True, fn_create_parameter_matrix=self.create_parameter_matrix, name="char_birnn") File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/recurrence.py", line 12, in create_birnn recurrent_size, only_return_final=return_combined, go_backwards=False, fn_create_parameter_matrix=fn_create_parameter_matrix, name=name + "_forward") File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/recurrence.py", line 74, in create_lstm go_backwards=go_backwards) File "/Users/Xin/anaconda/lib/python3.5/site-packages/theano/scan_module/scan.py", line 773, in scan condition, outputs, updates = scan_utils.get_updates_and_outputs(fn(*args)) File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/recurrence.py", line 34, in lstm_mask_step h_new, c_new = lstm_step(x, h_prev, c_prev, W_x, W_h, b, W_ci, W_cf, W_co) File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/recurrence.py", line 26, in lstm_step i = tensor.nnet.sigmoid(_slice(m_xhb, 0, 4) + c_prev * W_ci) File "/Users/Xin/Documents/17_Spring/sequence-labeler-master/recurrence.py", line 47, in _slice return M[:, slice_num*l:(slice_num+1)*l] File "/Users/Xin/anaconda/lib/python3.5/site-packages/theano/tensor/var.py", line 519, in getitem theano.tensor.subtensor.Subtensor.convert(arg) File "/Users/Xin/anaconda/lib/python3.5/site-packages/theano/tensor/subtensor.py", line 370, in convert slice_a = Subtensor.convert(a, False) File "/Users/Xin/anaconda/lib/python3.5/site-packages/theano/tensor/subtensor.py", line 349, in convert raise TypeError("Expected an integer") TypeError: Expected an integer

    opened by Moonet 13
  • Could you provide some parts of the data?

    @marekrei Hello, marekrei. It's very nice work, and I want to reproduce it. But I have some problems preparing the required data format, so could you please provide some parts of the data? Thank you very much.

    opened by GuangChen2016 5
  • Using model on new data

    Hello, @marekrei Great work! I'm trying to make use of the bidirectional LSTM for binary sequence labeling. Having trained your model on my data, I want to test it on a separate sentence, presumably not from the training set.

    In other words, how can I use the trained model to get output for one sentence at a time?

    opened by Slyfest 3
  • Possible wrong lowercase flag is passed?

    Hi,

    Did you mean to pass the lowercase_chars flag when building the character vocabulary here?

    https://github.com/marekrei/sequence-labeler/blob/484a6beb1e2a2cccaac74ce717b1ee30c79fc8d8/sequence_labeling_experiment.py#L285

    opened by kmkurn 1
  • Python3 update

    Yo.

    Here's a more up-to-date version of your code for Python3 (tested on 3.5), Theano (0.9.0) and Lasagne (0.2.dev1).

    I would NOT overwrite your existing version, however, since this version is incompatible with Python2. It's more work to make the code compatible with both versions of Python, so it might be best to simply let the user decide which they want.

    opened by chrisjbryant 0
  • configparser.NoSectionError: No section: 'config'

    Hi, I'm wondering if you have seen this problem. I set up the following environment and ran python experiment.py config.conf: python (3.5.2), numpy (1.14.0), tensorflow (1.4.1).

    The program stopped here and the error is shown below. Any thoughts or suggestions on this? Thank you.

    Traceback (most recent call last):
      File "experiment.py", line 242, in <module>
        run_experiment(sys.argv[1])
      File "experiment.py", line 153, in run_experiment
        config = parse_config("config", config_path)
      File "experiment.py", line 56, in parse_config
        for key, value in config_parser.items(config_section):
      File "/home/wtzhao/.conda/envs/seq_labeling/lib/python3.5/configparser.py", line 846, in items
        raise NoSectionError(section)
    configparser.NoSectionError: No section: 'config'

    opened by wentinghome 0
  • problem of reproducing your work

    hi, when I run this code on a sample dataset (only one sentence), the model cannot achieve an F value of 100. The config file is the same, except that I changed the learning rate to 0.01. The sample data is:

        I     c
        saws  i
        the   c
        show  c

    Do you have any idea about this?

    opened by shaoyn0817 1
  • Label probabilities for the CRF layer

    Hi, thanks for sharing this great implementation. I know it is possible to get the label probabilities using the forward-backward algorithm in CRFs, but I am having some difficulty implementing/modifying the default CRF implementation in TensorFlow. For the calculation of the partition function, they have only used the forward (message passing) algorithm. Do you have any experience or ideas about how the forward-backward algorithm could be implemented in TF?

    opened by ashim95 0
  • One Problem about backward mask in Language model cost function

    Hi,

    In labeler.py, line 243:

                    lmcost_bw_mask = tf.sequence_mask(sentence_lengths, maxlen=tf.shape(target_ids)[1])[:,:-1]
    

    The mask has an issue. For example:

        origin_seq:      1 2 3 4 0 0 0
        origin_mask:     1 1 1 1 0 0 0
        lmcost_bw_mask:  1 1 1 1 0 0

    The correct lmcost_bw_mask here should be: 1 1 1 0 0 0
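
    One way to build the mask the commenter describes (a sketch based on this reading of the issue, not code from the repository) would be to shorten both the lengths and the maximum length by one:

        # for a sentence of length 4 and maxlen 7 this yields 1 1 1 0 0 0,
        # matching the mask the commenter expects
        lmcost_bw_mask = tf.sequence_mask(sentence_lengths - 1, maxlen=tf.shape(target_ids)[1] - 1)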
    
    opened by wugh 0
  • theano.function()

    I want to try changing the code at line 116 in "sequence_labeler.py" from:

        input_vars_train = [word_ids, char_ids, char_mask, label_ids, learningrate]

    to:

        input_vars_train = [word_ids,
                            char_ids,
                            # myowntag_ids,
                            char_mask,
                            # myowntag_mask,
                            label_ids,
                            learningrate]

    and then I got this error:

        ValueError: dimension mismatch in args to gemm

    I don't know why this happens; can you help me understand why?

    Thank you very much! Best Regards!

    opened by ttslr 1