Natural Language Processing Specialization
- This folder contains my projects and notes from the DeepLearning.AI Natural Language Processing Specialization.
WHAT I LEARNED
- Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies (sketched below), and translate words.
- Use dynamic programming, hidden Markov models, and word embeddings to implement autocorrect and autocomplete, and to identify part-of-speech tags for words.
- Use recurrent neural networks, LSTMs, GRUs, and Siamese networks in Trax for sentiment analysis, text generation, and named entity recognition.
- Use encoder-decoder, causal, and self-attention to translate complete sentences, summarize text, build chatbots, and perform question answering.
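As a taste of the word-vector material, here is a minimal sketch of analogy completion via vector arithmetic. The 3-dimensional embeddings and the `complete_analogy` helper are made up for illustration; real embeddings (e.g. GloVe or word2vec vectors) have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: how aligned two word vectors are.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical toy embeddings, chosen so the analogy works out.
embeddings = {
    "king":  np.array([0.8, 0.65, 0.10]),
    "queen": np.array([0.8, 0.05, 0.90]),
    "man":   np.array([0.1, 0.90, 0.05]),
    "woman": np.array([0.1, 0.30, 0.85]),
}

def complete_analogy(a, b, c, embeddings):
    """Solve 'a is to b as c is to ?' by ranking words near b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine_similarity(candidates[w], target))

print(complete_analogy("man", "king", "woman", embeddings))  # -> queen
```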
There are 4 courses in this Specialization.
Course 1 - Natural Language Processing with Classification and Vector Spaces
In the first course of the Natural Language Processing Specialization:
- I performed sentiment analysis of tweets using logistic regression and then naïve Bayes (a minimal sketch follows the project list below),
- I used vector space models to discover relationships between words and used PCA to reduce the dimensionality of the vector space and visualize those relationships, and
- I wrote a simple English-to-French translation algorithm using pre-computed word embeddings and locality-sensitive hashing to relate words via approximate k-nearest-neighbor search.
Projects
- Sentiment Analysis of Tweets using Logistic Regression
- Sentiment Analysis of Tweets using Naïve Bayes
- Vector Space Models
- Naive Machine Translation and LSH
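To illustrate, here is a minimal sketch of naïve Bayes sentiment classification with Laplacian (add-one) smoothing. The toy tweets and helper names are hypothetical, not the course's dataset or starter code.

```python
from collections import Counter
import math

# Tiny hypothetical corpus of pre-tokenized tweets.
pos_tweets = [["great", "movie"], ["happy", "great", "day"]]
neg_tweets = [["sad", "day"], ["bad", "sad", "movie"]]

pos_counts = Counter(w for t in pos_tweets for w in t)
neg_counts = Counter(w for t in neg_tweets for w in t)
vocab = set(pos_counts) | set(neg_counts)
n_pos, n_neg = sum(pos_counts.values()), sum(neg_counts.values())

def log_likelihood(word):
    # Laplacian smoothing keeps unseen word/class pairs from zeroing out.
    p_w_pos = (pos_counts[word] + 1) / (n_pos + len(vocab))
    p_w_neg = (neg_counts[word] + 1) / (n_neg + len(vocab))
    return math.log(p_w_pos / p_w_neg)

def predict(tweet):
    log_prior = math.log(len(pos_tweets) / len(neg_tweets))
    score = log_prior + sum(log_likelihood(w) for w in tweet if w in vocab)
    return "positive" if score > 0 else "negative"

print(predict(["great", "day"]))  # -> positive
```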
Course 2 - Natural Language Processing with Probabilistic Models
In the second course of the Natural Language Processing Specialization:
- I wrote a simple autocorrect algorithm using minimum edit distance and dynamic programming (sketched below),
- I applied the Viterbi algorithm for part-of-speech (POS) tagging, which is vital for computational linguistics,
- I wrote a better autocomplete algorithm using an N-gram language model, and
- I wrote my own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words (CBOW) model.
Projects
- Autocorrect algorithm using minimum edit distance and dynamic programming
- Part-of-Speech (POS) Tagging
- Autocomplete algorithm using an N-gram language model
- Word Embeddings: Continuous Bag-of-Words (CBOW) model
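To make the autocorrect idea concrete, here is a minimal sketch of minimum edit distance via dynamic programming. The insert = 1, delete = 1, replace = 2 costs are an assumption of this sketch (a common autocorrect convention) and can be adjusted.

```python
def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
    m, n = len(source), len(target)
    # D[i][j] = minimum cost of turning source[:i] into target[:j].
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * del_cost
    for j in range(1, n + 1):
        D[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            r = 0 if source[i - 1] == target[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,   # delete from source
                          D[i][j - 1] + ins_cost,   # insert into source
                          D[i - 1][j - 1] + r)      # replace (or keep)
    return D[m][n]

print(min_edit_distance("play", "stay"))  # -> 4 (two replacements at cost 2)
```

An autocorrect system would compute this distance from a misspelled word to each candidate in a vocabulary and suggest the closest candidates.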
Course 3 - Natural Language Processing with Sequence Models
In the third course of the Natural Language Processing Specialization:
- I trained a neural network with GloVe word embeddings to perform sentiment analysis of tweets,
- I generated synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model,
- I trained a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers, and
- I used Siamese LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning (the core Siamese idea is sketched below).
Projects
- Sentiment Analysis with Deep Neural Networks
- Synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model
- Named Entity Recognition (NER) using LSTM
- Question duplicates using Siamese LSTM models
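The duplicate-question project rests on one idea worth sketching: a single shared encoder maps both questions to vectors, and their cosine similarity decides whether they mean the same thing. The stand-in encoder below just mean-pools random toy embeddings; in the course it is a trained LSTM, and the embeddings are learned with a triplet loss.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["how", "old", "are", "you", "what", "is", "your", "age"]
# Hypothetical untrained embeddings; real ones are learned with triplet loss.
emb = {w: rng.normal(size=8) for w in vocab}

def encode(question):
    """Shared encoder: mean-pool word vectors and L2-normalize."""
    v = np.mean([emb[w] for w in question.split() if w in emb], axis=0)
    return v / np.linalg.norm(v)

def is_duplicate(q1, q2, threshold=0.7):
    # Both questions pass through the SAME encoder (the Siamese property);
    # for unit vectors, cosine similarity is just a dot product.
    return float(encode(q1) @ encode(q2)) > threshold

print(is_duplicate("how old are you", "what is your age"))
```

With untrained random embeddings the answer is meaningless; training is what pulls paraphrases together and pushes unrelated questions apart.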
Course 4 - Natural Language Processing with Attention Models
In the fourth course of the Natural Language Processing Specialization:
- I translated complete English sentences into German using an encoder-decoder attention model (the attention operation these models share is sketched below),
- I built a Transformer model to summarize text,
- I used T5 and BERT models to perform question answering, and
- I built a chatbot using a Reformer model.
Projects
- Neural Machine Translation using an Encoder-Decoder Attention Model
- Transformer Summarizer
- Question Answering using T5 (Text-to-Text Transfer Transformer) and BERT models
- Chatbot using a Reformer model
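All four projects in this course build on the same primitive, so here is a minimal NumPy sketch of scaled dot-product attention, including the causal mask used for autoregressive decoding. The shapes and data are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # masked positions get ~zero weight
    return softmax(scores) @ V

# Toy self-attention over 4 positions with a causal (lower-triangular) mask,
# so each position attends only to itself and earlier positions.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
causal_mask = np.tril(np.ones((4, 4), dtype=bool))
out = scaled_dot_product_attention(X, X, X, mask=causal_mask)
print(out.shape)  # (4, 8)
```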
Disclaimer
- DeepLearning.AI makes course notes available for educational purposes.
- Project solutions are for educational purposes only. I highly recommend attempting the project/programming assignments on your own first.
All the best!