ON-LSTM
This repository contains the code used for the word-level language model and unsupervised parsing experiments in the paper Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks, originally forked from the LSTM and QRNN Language Model Toolkit for PyTorch. If you use this code or our results in your research, we'd appreciate it if you cite our paper as follows:
@article{shen2018ordered,
  title={Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks},
  author={Shen, Yikang and Tan, Shawn and Sordoni, Alessandro and Courville, Aaron},
  journal={arXiv preprint arXiv:1810.09536},
  year={2018}
}
Software Requirements
Python 3.6, NLTK and PyTorch 0.4 are required for the current codebase.
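A quick way to confirm that the environment matches these requirements is a small version check. The snippet below is only an illustrative sketch and is not part of the repository:

```python
# Sketch: confirm that the installed PyTorch and NLTK versions match the
# requirements listed above (PyTorch 0.4, any recent NLTK).
import nltk
import torch

print("PyTorch:", torch.__version__)
print("NLTK:", nltk.__version__)

assert torch.__version__.startswith("0.4"), "this codebase targets PyTorch 0.4"
```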
Steps
- Install PyTorch 0.4 and NLTK
- Download PTB data. Note that the two tasks, i.e., language modeling and unsupervised parsing, share the same model structure but require different formats of the PTB data. For language modeling we need the standard 10,000-word Penn Treebank corpus data, and for parsing we need the Penn Treebank Parsed data. (A sketch that sanity-checks the language-modeling data layout follows after this list.)
- Scripts and commands
  - Train Language Modeling
    python main.py --batch_size 20 --dropout 0.45 --dropouth 0.3 --dropouti 0.5 --wdrop 0.45 --chunk_size 10 --seed 141 --epoch 1000 --data /path/to/your/data
  - Test Unsupervised Parsing
    python test_phrase_grammar.py --cuda
The default setting in main.py achieves a perplexity of approximately 56.17 on the PTB test set and an unlabeled F1 of approximately 47.7 on the WSJ test set.
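For the language-modeling data in step 2, the toolkit this repository was forked from expects a directory of plain-text splits passed via --data. The snippet below is a sketch under that assumption; the directory name data/penn and the file names train.txt, valid.txt, and test.txt are illustrative and not confirmed by this README:

```python
# Sketch: sanity-check an assumed PTB directory layout before training.
# The path "data/penn" and the split file names are assumptions.
import os

data_dir = "data/penn"
for split in ("train.txt", "valid.txt", "test.txt"):
    path = os.path.join(data_dir, split)
    assert os.path.exists(path), "missing " + path

# The standard LM version of PTB has a vocabulary of roughly 10,000 word types.
with open(os.path.join(data_dir, "train.txt")) as f:
    vocab = {word for line in f for word in line.split()}
print(len(vocab), "word types in the training split")
```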
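The unlabeled F1 reported above compares the brackets of the induced trees against gold Penn Treebank parses. As a rough illustration of what this metric measures (not the evaluation code used in test_phrase_grammar.py, and with the caveat that evaluation scripts differ on details such as counting the root span or single-word spans), the following sketch computes unlabeled bracketing precision, recall, and F1 from two nltk.Tree objects:

```python
# Sketch: unlabeled bracketing F1 between a predicted and a gold constituency
# tree, both given as nltk.Tree objects over the same token sequence.
from nltk import Tree

def spans(tree):
    """Collect the (start, end) spans of all multi-word constituents."""
    result = set()

    def walk(node, start):
        if isinstance(node, str):      # leaf token
            return start + 1
        end = start
        for child in node:
            end = walk(child, end)
        if end - start > 1:            # ignore single-word spans
            result.add((start, end))
        return end

    walk(tree, 0)
    return result

gold = Tree.fromstring("(S (NP the cat) (VP sat (PP on (NP the mat))))")
pred = Tree.fromstring("(X (X the cat) (X (X sat on) (X the mat)))")

g, p = spans(gold), spans(pred)
overlap = len(g & p)
precision, recall = overlap / len(p), overlap / len(g)
f1 = 2 * precision * recall / (precision + recall)
print("precision=%.2f recall=%.2f F1=%.2f" % (precision, recall, f1))
```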