Deep Learning - All You Need to Know
Sponsorship
To support the maintenance and improvement of this project, please consider sponsoring the project developer.
Any level of support is a valuable contribution.
Download Free Python Machine Learning Book
Slack Group
Table of Contents
Introduction
The purpose of this project is to give developers and researchers a shortcut for finding useful resources about Deep Learning.
Motivation
There are different motivations for this open source project.
What's the point of this open source project?
There are other repositories similar to this one that are very comprehensive and useful, and, to be honest, they made me wonder whether this repository was necessary at all.
The point of this repository is that its resources are targeted: they are organized so that users can easily find what they are looking for. We divided the resources into a large number of categories, which may feel overwhelming at first. However, if you know what you are looking for, it is very easy to find the most relevant resources, and even if you do not, general resources are provided as a starting point.
Papers
This chapter covers papers published on deep learning.
Models
Convolutional Networks
ImageNet classification with deep convolutional neural networks : [Paper][Code]
Convolutional Neural Networks for Sentence Classification : [Paper][Code]
Large-scale Video Classification with Convolutional Neural Networks : [Paper][Project Page]
Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks : [Paper]
Deep convolutional neural networks for LVCSR : [Paper]
Face recognition: a convolutional neural-network approach : [Paper]
Recurrent Networks
An empirical exploration of recurrent network architectures : [Paper][Code]
LSTM: A search space odyssey : [Paper][Code]
On the difficulty of training recurrent neural networks : [Paper][Code]
Learning to forget: Continual prediction with LSTM : [Paper]
Autoencoders
Extracting and composing robust features with denoising autoencoders : [Paper]
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion : [Paper][Code]
Adversarial Autoencoders : [Paper][Code]
Autoencoders, Unsupervised Learning, and Deep Architectures : [Paper]
Reducing the Dimensionality of Data with Neural Networks : [Paper][Code]
Generative Models
Exploiting generative models in discriminative classifiers : [Paper]
Semi-supervised Learning with Deep Generative Models : [Paper][Code]
Generative Adversarial Nets : [Paper][Code]
Generalized Denoising Auto-Encoders as Generative Models : [Paper]
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks : [Paper][Code]
Probabilistic Models
Stochastic Backpropagation and Approximate Inference in Deep Generative Models : [Paper]
Probabilistic models of cognition: exploring representations and inductive biases : [Paper]
On deep generative models with applications to recognition : [Paper]
Core
Optimization
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift : [Paper]
Dropout: A Simple Way to Prevent Neural Networks from Overfitting : [Paper]
Training Very Deep Networks : [Paper]
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification : [Paper]
Large Scale Distributed Deep Networks : [Paper]
Representation Learning
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks : [Paper][Code]
Representation Learning: A Review and New Perspectives : [Paper]
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets : [Paper][Code]
Understanding and Transfer Learning
Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks : [Paper]
Distilling the Knowledge in a Neural Network : [Paper]
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition : [Paper]
How transferable are features in deep neural networks? : [Paper][Code]
Reinforcement Learning
Human-level control through deep reinforcement learning : [Paper][Code]
Playing Atari with Deep Reinforcement Learning : [Paper][Code]
Continuous control with deep reinforcement learning : [Paper][Code]
Deep Reinforcement Learning with Double Q-Learning : [Paper][Code]
Dueling Network Architectures for Deep Reinforcement Learning : [Paper][Code]
Applications
Image Recognition
Deep Residual Learning for Image Recognition : [Paper][Code]
Very Deep Convolutional Networks for Large-Scale Image Recognition : [Paper]
Multi-column Deep Neural Networks for Image Classification : [Paper]
DeepID3: Face Recognition with Very Deep Neural Networks : [Paper]
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps : [Paper][Code]
Deep Image: Scaling up Image Recognition : [Paper]
Long-Term Recurrent Convolutional Networks for Visual Recognition and Description : [Paper][Code]
3D Convolutional Neural Networks for Cross Audio-Visual Matching Recognition : [Paper][Code]
Object Recognition
ImageNet Classification with Deep Convolutional Neural Networks : [Paper]
Learning Deep Features for Scene Recognition using Places Database : [Paper]
Scalable Object Detection using Deep Neural Networks : [Paper]
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks : [Paper][Code]
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks : [Paper][Code]
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition : [Paper]
What is the best multi-stage architecture for object recognition? : [Paper]
Action Recognition
Long-Term Recurrent Convolutional Networks for Visual Recognition and Description : [Paper]
Learning Spatiotemporal Features With 3D Convolutional Networks : [Paper][Code]
Describing Videos by Exploiting Temporal Structure : [Paper][Code]
Convolutional Two-Stream Network Fusion for Video Action Recognition : [Paper][Code]
Temporal segment networks: Towards good practices for deep action recognition : [Paper][Code]
Caption Generation
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention : [Paper][Code]
Mind's Eye: A Recurrent Visual Representation for Image Caption Generation : [Paper]
Generative Adversarial Text to Image Synthesis : [Paper][Code]
Deep Visual-Semantic Alignments for Generating Image Descriptions : [Paper][Code]
Show and Tell: A Neural Image Caption Generator : [Paper][Code]
Natural Language Processing
Distributed Representations of Words and Phrases and their Compositionality : [Paper][Code]
Efficient Estimation of Word Representations in Vector Space : [Paper][Code]
Sequence to Sequence Learning with Neural Networks : [Paper][Code]
Neural Machine Translation by Jointly Learning to Align and Translate : [Paper][Code]
Get To The Point: Summarization with Pointer-Generator Networks : [Paper][Code]
Attention Is All You Need : [Paper][Code]
Convolutional Neural Networks for Sentence Classification : [Paper][Code]
Speech Technology
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups : [Paper]
Towards End-to-End Speech Recognition with Recurrent Neural Networks : [Paper]
Speech recognition with deep recurrent neural networks : [Paper]
Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition : [Paper]
Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin : [Paper][Code]
A novel scheme for speaker recognition using a phonetically-aware deep neural network : [Paper]
Text-Independent Speaker Verification Using 3D Convolutional Neural Networks : [Paper][Code]
Datasets
Image
General
- MNIST Handwritten digits: [Link]
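As a quick-start illustration (not part of the linked dataset page), MNIST can also be loaded directly through common framework utilities. The sketch below assumes TensorFlow/Keras is installed:

```python
# Minimal sketch: load MNIST via the Keras datasets utility (assumes TensorFlow is installed).
from tensorflow.keras.datasets import mnist

# 60,000 training and 10,000 test images of 28x28 grayscale handwritten digits.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
```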
Face
- Face Recognition Technology (FERET) The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties: [Link]
- The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces A database of 41,368 images of 68 people, collected between October and December 2000: [Link]
- YouTube Faces DB The data set contains 3,425 videos of 1,595 different people. All the videos were downloaded from YouTube. An average of 2.15 videos are available for each subject: [Link]
- Grammatical Facial Expressions Data Set Developed to assist the automated analysis of facial expressions: [Link]
- FaceScrub A Dataset With Over 100,000 Face Images of 530 People: [Link]
- IMDB-WIKI 500k+ face images with age and gender labels: [Link]
- FDDB Face Detection Data Set and Benchmark (FDDB): [Link]
Object Recognition
- COCO Microsoft COCO: Common Objects in Context: [Link]
- ImageNet The famous ImageNet dataset: [Link]
- Open Images Dataset Open Images is a dataset of ~9 million images that have been annotated with image-level labels and object bounding boxes: [Link]
- Caltech-256 Object Category Dataset A large dataset for object classification: [Link]
- Pascal VOC dataset A large dataset for classification tasks: [Link]
- CIFAR 10 / CIFAR 100 The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes. CIFAR-100 is similar to CIFAR-10 but it has 100 classes containing 600 images each: [Link]
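To make the numbers quoted above concrete, here is a minimal, illustrative sketch (assuming torchvision is installed) that downloads CIFAR-10 and checks its size:

```python
# Minimal sketch: download CIFAR-10 with torchvision and verify the split
# (50,000 training + 10,000 test images of 32x32 colour pixels, 10 classes).
from torchvision import datasets

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
test_set = datasets.CIFAR10(root="./data", train=False, download=True)

print(len(train_set), len(test_set))  # 50000 10000
image, label = train_set[0]           # a PIL image and an integer class label in 0..9
print(image.size, label)              # (32, 32) and the label of the first sample
```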
Action recognition
- HMDB a large human motion database: [Link]
- MHAD Berkeley Multimodal Human Action Database: [Link]
- UCF101 - Action Recognition Data Set UCF101 is an action recognition data set of realistic action videos, collected from YouTube, with 101 action categories. It is an extension of the UCF50 data set, which has 50 action categories: [Link]
- THUMOS Dataset A large dataset for action classification: [Link]
- ActivityNet A Large-Scale Video Benchmark for Human Activity Understanding: [Link]
Text and Natural Language Processing
General
- 1 Billion Word Language Model Benchmark: The purpose of the project is to make available a standard training and test setup for language modeling experiments: [Link]
- Common Crawl: The Common Crawl corpus contains petabytes of data collected over the last 7 years. It contains raw web page data, extracted metadata and text extractions: [Link]
- Yelp Open Dataset: A subset of Yelp's businesses, reviews, and user data for use in personal, educational, and academic purposes: [Link]
Text classification
- 20 newsgroups The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups: [Link]
- Broadcast News The 1996 Broadcast News Speech Corpus contains a total of 104 hours of broadcasts from ABC, CNN and CSPAN television networks and NPR and PRI radio networks with corresponding transcripts: [Link]
- The WikiText long-term dependency language modeling dataset: A collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia: [Link]
Question Answering
- Question Answering Corpus by DeepMind and Oxford: two corpora of roughly a million news stories with associated queries from the CNN and Daily Mail websites: [Link]
- Stanford Question Answering Dataset (SQuAD) consisting of questions posed by crowdworkers on a set of Wikipedia articles: [Link]
- Amazon question/answer data contains Question and Answer data from Amazon, totaling around 1.4 million answered questions: [Link]
Sentiment Analysis
- Multi-Domain Sentiment Dataset The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains): [Link]
- Stanford Sentiment Treebank Dataset The Stanford Sentiment Treebank is the first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language: [Link]
- Large Movie Review Dataset: This is a dataset for binary sentiment classification: [Link]
Machine Translation
- Aligned Hansards of the 36th Parliament of Canada dataset contains 1.3 million pairs of aligned text chunks: [Link]
- Europarl: A Parallel Corpus for Statistical Machine Translation, a dataset extracted from the proceedings of the European Parliament: [Link]
Summarization
- Legal Case Reports Data Set A textual corpus of 4,000 legal cases for automatic summarization and citation analysis: [Link]
Speech Technology
- TIMIT Acoustic-Phonetic Continuous Speech Corpus The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems: [Link]
- LibriSpeech LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey: [Link]
- VoxCeleb A large scale audio-visual dataset: [Link]
- NIST Speaker Recognition: [Link]
Courses
- Machine Learning by Stanford on Coursera : [Link]
- Neural Networks and Deep Learning Specialization by Coursera: [Link]
- Intro to Deep Learning by Google: [Link]
- Introduction to Deep Learning by CMU: [Link]
- NVIDIA Deep Learning Institute by NVIDIA: [Link]
- Convolutional Neural Networks for Visual Recognition by Stanford: [Link]
- Deep Learning for Natural Language Processing by Stanford: [Link]
- Deep Learning by fast.ai: [Link]
- Course on Deep Learning for Visual Computing by IITKGP: [Link]
Books
- Deep Learning by Ian Goodfellow: [Link]
- Neural Networks and Deep Learning : [Link]
- Deep Learning with Python: [Link]
- Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems: [Link]
Blogs
- Colah's blog: [Link]
- Andrej Karpathy blog: [Link]
- The Spectator Shakir's Machine Learning Blog: [Link]
- WILDML: [Link]
- Distill More of a journal than a blog, as it has a peer-review process and only accepted articles are published: [Link]
- BAIR Berkeley Artificial Intelligence Research: [Link]
- Sebastian Ruder's blog: [Link]
- inFERENCe: [Link]
- i am trask A Machine Learning Craftsmanship Blog: [Link]
Tutorials
- Deep Learning Tutorials: [Link]
- Deep Learning for NLP with PyTorch by PyTorch: [Link]
- Deep Learning for Natural Language Processing: Tutorials with Jupyter Notebooks by Jon Krohn: [Link]
Frameworks
- TensorFlow: [Link]
- PyTorch: [Link]
- CNTK: [Link]
- MatConvNet: [Link]
- Keras: [Link]
- Caffe: [Link]
- Theano: [Link]
- cuDNN: [Link]
- Torch: [Link]
- Deeplearning4j: [Link]
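As a small taste of what working with these frameworks looks like, here is a minimal, illustrative sketch in PyTorch (one of the frameworks listed above). It is only a toy example, not a recommendation of any particular framework:

```python
# Minimal sketch: define a tiny fully connected classifier in PyTorch and run a
# forward pass on a random batch (illustrative only).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),              # (batch, 1, 28, 28) -> (batch, 784)
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),        # 10 output classes
        )

    def forward(self, x):
        return self.net(x)

model = TinyNet()
dummy = torch.randn(4, 1, 28, 28)      # a fake batch of four 28x28 images
logits = model(dummy)
print(logits.shape)                    # torch.Size([4, 10])
```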
Contributing
For typos, unless they involve significant changes, please do not create a pull request; instead, report them in an issue or email the repository owner. Please note that we have a code of conduct; please follow it in all your interactions with the project.
Pull Request Process
Please consider the following criteria to help us review your contribution:
- The pull request is mainly expected to be a link suggestion.
- Please make sure your suggested resources are not obsolete or broken.
- Ensure any install or build dependencies are removed before the end of the layer when doing a build and creating a pull request.
- Add comments with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters.
- You may merge the pull request once you have the sign-off of at least one other developer; if you do not have permission to do that, you may ask the owner to merge it for you once you believe all checks have passed.
Final Note
We look forward to your kind feedback. Please help us improve this open source project and make our work better. To contribute, please create a pull request and we will review it promptly. Once again, we appreciate your kind feedback and support.