Fake Shakespearean Text Generator
This project contains an implementation of a stateful Char-RNN model that generates fake Shakespearean text.
Files and folders of the project.
models folder
This folder contains two zip files, one for the stateful model and the other for the stateless model (these files are fully saved model architectures, not just weights).
As its name implies, this zip file contains the model's weights in checkpoint format (see TensorFlow model save formats).
This file is a saved instance of the TensorFlow Tokenizer, trained on the dataset and used at inference time (a short loading sketch is shown after this file listing).
This file is the dataset, composed of plain text (see below for what it looks like).
First Citizen:
Before we proceed any further, hear me speak.
All:
Speak, speak.
Contains the code for training.
Contains the code for inference.
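To make the relation between these artifacts concrete, here is a minimal loading sketch. It assumes a whole-model save, a checkpoint file and a pickled tokenizer; all file names below are placeholders, not the actual names in the repository.

```python
import pickle
from tensorflow import keras

# A whole-model save (architecture + weights) can be reloaded directly.
model = keras.models.load_model("stateless_model")        # placeholder path

# The checkpoint-format file only holds weights; it has to be loaded into
# a model with the same architecture via model.load_weights(...).

# The trained tokenizer is needed at inference time to encode and decode characters.
with open("tokenizer.pickle", "rb") as f:                  # placeholder path
    tokenizer = pickle.load(f)
```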
How to Train the Model
A more in-depth look into the train.py file
First, it downloads the dataset from the specified url (line 11). Then it reads the dataset and fits the tokenizer object mentioned above on it (line 18). After training the tokenizer, it encodes the dataset (line 24). Since this is a stateful model, every sequence in a batch must start exactly where the sequence at the same index in the previous batch left off. Say a batch consists of 32 sequences: the 33rd sequence (i.e. the first sequence of the second batch) must start exactly where the 1st sequence (i.e. the first sequence of the first batch) ended, the second sequence of the second batch must start where the second sequence of the first batch ended, and so on. The code between lines 28 and 48 does this and produces the dataset. The code between lines 53 and 57 creates the stateful model. Note that to be able to adjust the recurrent_dropout hyperparameter you have to train the model on a GPU. After the model is created, a callback that resets the RNN states at the beginning of each epoch is defined. Training then starts with a call to the fit method, and afterwards the model (see TensorFlow's whole-model saving), the model's weights and the tokenizer are saved.
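To make the steps above concrete, here is a minimal sketch of such a stateful training pipeline in the style of the classic char-RNN setup. The URL, layer sizes, batch size, epoch count and file names below are assumptions for illustration; the actual values and line numbers live in train.py.

```python
import pickle
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Download the dataset and read it as one long string.
shakespeare_url = "https://homl.info/shakespeare"                    # assumed URL
filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url)
with open(filepath) as f:
    shakespeare_text = f.read()

# Fit a character-level tokenizer on the whole text.
tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts([shakespeare_text])
max_id = len(tokenizer.word_index)                                   # number of distinct characters
encoded = np.array(tokenizer.texts_to_sequences([shakespeare_text])[0]) - 1

# Split the text into batch_size contiguous streams so that each sequence in a
# batch continues exactly where the same-index sequence of the previous batch stopped.
batch_size = 32
n_steps = 100
window_length = n_steps + 1                                          # target = input shifted by one char
encoded_parts = np.array_split(encoded, batch_size)

datasets = []
for part in encoded_parts:
    ds = tf.data.Dataset.from_tensor_slices(part)
    ds = ds.window(window_length, shift=n_steps, drop_remainder=True)
    ds = ds.flat_map(lambda window: window.batch(window_length))
    datasets.append(ds)
dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))
dataset = dataset.map(lambda batch: (batch[:, :-1], batch[:, 1:]))
dataset = dataset.map(lambda X, Y: (tf.one_hot(X, depth=max_id), Y)).prefetch(1)

# Stateful model: the fixed batch_input_shape lets Keras carry states across batches.
model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, stateful=True,
                     dropout=0.2, recurrent_dropout=0.2,
                     batch_input_shape=[batch_size, None, max_id]),
    keras.layers.GRU(128, return_sequences=True, stateful=True,
                     dropout=0.2, recurrent_dropout=0.2),
    keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")),
])

class ResetStatesCallback(keras.callbacks.Callback):
    """Reset the RNN states at the beginning of every epoch."""
    def on_epoch_begin(self, epoch, logs=None):
        self.model.reset_states()

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(dataset, epochs=10, callbacks=[ResetStatesCallback()])

# Save the whole model, the checkpoint-format weights and the tokenizer (placeholder names).
model.save("stateful_model")
model.save_weights("stateful_weights.ckpt")
with open("tokenizer.pickle", "wb") as f:
    pickle.dump(tokenizer, f)
```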
Usage of the Model
Where the magic happens (inference.py file)
To be able to use the model, it first has to be converted to a stateless model, because a stateful model expects a batch of inputs instead of a single input. To do this, a stateless model with the same architecture as the stateful model is created; the code between lines 44 and 49 does this. To load the weights the model first has to be built; after building, the weights are loaded into the stateless model. The model uses the character predicted at time step t as the input at the next time step to predict the character after it, and this keeps going until the last character is predicted (in this case 100 characters, but you can change it to whatever you want; note that longer sequences end up with more inaccurate results). To predict the next characters, the provided initial character first has to be tokenized; the preprocess function does this. To prevent repeated characters from showing up in the generated text, the next character is selected randomly from the candidate characters; the next_char function does this. The randomness can be controlled with the temperature parameter (to learn how to use it, check the comment at line 30). The complete_text function takes a character as an argument, predicts the next character via the next_char function and concatenates it to the text, repeating the process until n_chars is reached. Finally, the stateless model is saved as well.
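As an illustration of that flow, here is a minimal sketch of the stateless conversion and the three helper functions described above. It assumes the same two-GRU architecture as in the training sketch; file names, layer sizes and the default n_chars are assumptions, and the exact code lives in inference.py.

```python
import pickle
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Reload the trained tokenizer (placeholder path).
with open("tokenizer.pickle", "rb") as f:
    tokenizer = pickle.load(f)
max_id = len(tokenizer.word_index)

# Stateless copy of the trained architecture: no fixed batch size, no carried state.
stateless_model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True),
    keras.layers.GRU(128, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")),
])
stateless_model.build(tf.TensorShape([None, None, max_id]))   # build before loading weights
stateless_model.load_weights("stateful_weights.ckpt")          # placeholder path

def preprocess(texts):
    """Tokenize the given text(s) and one-hot encode them."""
    X = np.array(tokenizer.texts_to_sequences(texts)) - 1
    return tf.one_hot(X, max_id)

def next_char(text, temperature=1.0):
    """Sample the next character; higher temperature means more randomness."""
    X_new = preprocess([text])
    y_proba = stateless_model.predict(X_new, verbose=0)[0, -1:, :]
    rescaled_logits = tf.math.log(y_proba) / temperature
    char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1
    return tokenizer.sequences_to_texts(char_id.numpy())[0]

def complete_text(text, n_chars=100, temperature=1.0):
    """Predict the next character repeatedly and append it to the text."""
    for _ in range(n_chars):
        text += next_char(text, temperature)
    return text

print(complete_text("a", temperature=0.5))
```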
Results
Effects of the magic
print(complete_text("a"))
arpet:
like revenge borning and vinged him not.
lady good:
then to know to creat it; his best,--lord
print(complete_text("k"))
ken countents.
we are for free!
first man:
his honour'd in the days ere in any since
and all this ma
print(complete_text("f"))
ford:
hold! we must percy and he was were good.
gabes:
by fair lord, my courters,
sir.
nurse:
well
print(complete_text("h"))
holdred?
what she pass myself in some a queen
and fair little heartom in this trumpet our hands?
the