Overview

machine_learning_examples

A collection of machine learning examples and tutorials.

Find associated tutorials at https://lazyprogrammer.me

Find associated courses at https://deeplearningcourses.com

Please note that not all code from all courses will be found in this repository. Some newer code examples (e.g. most of Tensorflow 2.0) were done in Google Colab. Therefore, you should check the instructions given in the lectures for the course you are taking.

How do I find the code for a particular course?

The code for each course is separated by folder. You can determine which folder corresponds with which course by watching the "Where to get the code" lecture inside the course (usually Lecture 2 or 3).

Remember: one folder = one course.

Why you should not fork this repo

I've noticed that many people have out-of-date forks. Thus, I recommend not forking this repository if you take one of my courses. I am constantly updating my courses, and your fork will soon become out-of-date. You should clone the repository instead to make it easy to get updates (i.e. just "git pull" randomly and frequently).

Where is the code for your latest courses?

Beginning with Tensorflow 2, I started to use Google Colab. For those courses, unless otherwise noted, the code will be on Google Colab. Links to the notebooks are provided in the course. See the lecture "Where to get the code" for further details.

VIP Course Links

*** Note: if any of these coupons becomes out of date, check my website (https://lazyprogrammer.me) for the latest version. I will probably just keep incrementing them numerically, e.g. FINANCEVIP2, FINANCEVIP3, etc.

Time Series Analysis, Forecasting, and Machine Learning

https://www.udemy.com/course/time-series-analysis/?couponCode=TIMEVIP4

Financial Engineering and Artificial Intelligence in Python

https://www.udemy.com/course/ai-finance/?couponCode=FINANCEVIP13

PyTorch: Deep Learning and Artificial Intelligence

https://www.udemy.com/course/pytorch-deep-learning/?couponCode=PYTORCHVIP18

Tensorflow 2.0: Deep Learning and Artificial Intelligence (VIP Version)

https://deeplearningcourses.com/c/deep-learning-tensorflow-2

Deep Learning Courses Exclusives

Classical Statistical Inference and A/B Testing in Python https://deeplearningcourses.com/c/statistical-inference-in-python

Linear Programming for Linear Regression in Python https://deeplearningcourses.com/c/linear-programming-python

MATLAB for Students, Engineers, and Professionals in STEM https://deeplearningcourses.com/c/matlab

Other Course Links

Tensorflow 2.0: Deep Learning and Artificial Intelligence (non-VIP version) https://www.udemy.com/course/deep-learning-tensorflow-2/?referralCode=E10B72D3848AB70FE1B8

Cutting-Edge AI: Deep Reinforcement Learning in Python https://deeplearningcourses.com/c/cutting-edge-artificial-intelligence

Recommender Systems and Deep Learning in Python https://deeplearningcourses.com/c/recommender-systems

Machine Learning and AI: Support Vector Machines in Python https://deeplearningcourses.com/c/support-vector-machines-in-python

Deep Learning: Advanced Computer Vision https://deeplearningcourses.com/c/advanced-computer-vision

Deep Learning: Advanced NLP and RNNs https://deeplearningcourses.com/c/deep-learning-advanced-nlp

Deep Learning: GANs and Variational Autoencoders https://deeplearningcourses.com/c/deep-learning-gans-and-variational-autoencoders

Advanced AI: Deep Reinforcement Learning in Python https://deeplearningcourses.com/c/deep-reinforcement-learning-in-python

Artificial Intelligence: Reinforcement Learning in Python https://deeplearningcourses.com/c/artificial-intelligence-reinforcement-learning-in-python

Natural Language Processing with Deep Learning in Python https://deeplearningcourses.com/c/natural-language-processing-with-deep-learning-in-python

Deep Learning: Recurrent Neural Networks in Python https://deeplearningcourses.com/c/deep-learning-recurrent-neural-networks-in-python

Unsupervised Machine Learning: Hidden Markov Models in Python https://deeplearningcourses.com/c/unsupervised-machine-learning-hidden-markov-models-in-python

Deep Learning Prerequisites: The Numpy Stack in Python https://deeplearningcourses.com/c/deep-learning-prerequisites-the-numpy-stack-in-python

Deep Learning Prerequisites: Linear Regression in Python https://deeplearningcourses.com/c/data-science-linear-regression-in-python

Deep Learning Prerequisites: Logistic Regression in Python https://deeplearningcourses.com/c/data-science-logistic-regression-in-python

Deep Learning in Python https://deeplearningcourses.com/c/data-science-deep-learning-in-python

Cluster Analysis and Unsupervised Machine Learning in Python https://deeplearningcourses.com/c/cluster-analysis-unsupervised-machine-learning-python

Data Science: Supervised Machine Learning in Python https://deeplearningcourses.com/c/data-science-supervised-machine-learning-in-python

Bayesian Machine Learning in Python: A/B Testing https://deeplearningcourses.com/c/bayesian-machine-learning-in-python-ab-testing

Easy Natural Language Processing in Python https://deeplearningcourses.com/c/data-science-natural-language-processing-in-python

Practical Deep Learning in Theano and TensorFlow https://deeplearningcourses.com/c/data-science-deep-learning-in-theano-tensorflow

Ensemble Machine Learning in Python: Random Forest and AdaBoost https://deeplearningcourses.com/c/machine-learning-in-python-random-forest-adaboost

Deep Learning: Convolutional Neural Networks in Python https://deeplearningcourses.com/c/deep-learning-convolutional-neural-networks-theano-tensorflow

Unsupervised Deep Learning in Python https://deeplearningcourses.com/c/unsupervised-deep-learning-in-python

Comments
  • Add requirements

    Add requirements

    Hello,

    It would be great if you could add the project requirements (a requirements.txt, or in some other form). Installing the required libraries is not enough on its own, since we can end up without the correct library versions for running the code.

    Thanks

    opened by joao-faria 5
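    Until a requirements file is added, one workaround is to record the versions installed in your own environment and compare them with what appears in the lectures. A minimal sketch (the package list here is illustrative, not an official requirements list):

    # print installed versions of a few packages in a requirements.txt-like format
    # (package list is illustrative, not an official requirements list)
    from importlib.metadata import version, PackageNotFoundError

    for pkg in ["numpy", "scipy", "pandas", "matplotlib", "scikit-learn", "tensorflow"]:
        try:
            print("{}=={}".format(pkg, version(pkg)))
        except PackageNotFoundError:
            print("{} is not installed".format(pkg))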
  • error

    error

    I have installed the rnn and tensorflow packages, but the following error shows up:

    from rnn_class.brown import get_sentences_with_word2idx_limit_vocab, get_sentences_with_word2idx
    ModuleNotFoundError: No module named 'rnn_class'

    opened by amitchandnia 4
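    A likely cause, assuming the script lives in one of the course folders and imports from the sibling rnn_class folder: Python only finds the rnn_class package if the repository root is on sys.path, so either run the script from the repository root or add the parent directory manually. A minimal sketch (the import line should match whatever the course file actually uses):

    # place this above the rnn_class import, assuming the script sits one level
    # below the repository root (so ".." is the repo root containing rnn_class/)
    import os
    import sys
    sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

    from rnn_class.brown import get_sentences_with_word2idx_limit_vocab, get_sentences_with_word2idx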
  • Error in linear_rl_trader.py

    Error in linear_rl_trader.py

    There is some error in the code. Kindly check it and resolve it as soon as possible. My macOS terminal says:

    line 349
        with open(f'{models_folder}/scaler.pkl', 'rb') as f:
                                                            ^
    SyntaxError: invalid syntax

    opened by look4yasharth 2
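    For reference, this exact SyntaxError usually means the script is being run with a Python version older than 3.6, since f-strings were added in 3.6 (the python that ships with macOS is older). A quick check, plus a stop-gap rewrite of that line for older interpreters; the models_folder value below is an assumption, so use whatever the script actually defines:

    import pickle
    import sys

    print(sys.version)  # f-strings need Python >= 3.6, so run the script with a recent python3

    models_folder = 'linear_rl_trader_models'  # assumed value; use the folder the script defines

    # equivalent of: with open(f'{models_folder}/scaler.pkl', 'rb') as f:
    with open('{}/scaler.pkl'.format(models_folder), 'rb') as f:
        scaler = pickle.load(f)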
  • Simple Question

    Simple Question

    Hello.

    I bought "Convolutional Neural Networks in Python", and cnn_tf.py (Chapter 5: Sample Code in Tensorflow) works.

    Now I have a simple question: how should I evaluate it? Please give me some advice if you don't mind.

    thanks!!

    opened by C4A-yosy 2
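    One generic way to evaluate a trained classifier, whatever the framework, is to compare its predictions on held-out test data with the true labels. A toy sketch of that idea (the arrays below are dummies standing in for the real test labels and model predictions):

    import numpy as np

    # dummy stand-ins for the real test labels and the model's predicted classes
    Ytest = np.array([0, 1, 1, 0, 2])
    predictions = np.array([0, 1, 0, 0, 2])

    accuracy = np.mean(predictions == Ytest)  # fraction of correct predictions
    print("test accuracy:", accuracy, "error rate:", 1 - accuracy)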
  • question in grid_world.py

    question in grid_world.py

    Hi, @lazyprogrammer ,

    Is a line "(1, 1): step_cost," missing under line 104 in grid_world.py, given that there are only two terminal states, (0, 3) and (1, 3)?

    opened by amiltonwong 2
  • CSV file error/mismatch?

    CSV file error/mismatch?

    opened by rohit-kumar-j 1
  • rl/monte_carlo.py - "iterative_policy_evaluation" doesn't exist!

    rl/monte_carlo.py - "iterative_policy_evaluation" doesn't exist!

    "iterative_policy_evaluation" in the mentioned file must be changed to "iterative_policy_evaluation_deterministic" (or probabilistic).

    opened by MJamshidnejad 1
  • How to determine num_words variable when creating embedding matrix?

    How to determine num_words variable when creating embedding matrix?

    I have been following the poetry generation notebook, and I am at the point where we have to create an embedding matrix using the following code:

    # prepare embedding matrix
    num_words = min(MAX_VOCAB_SIZE, len(word2idx) + 1)
    embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
    for word, i in word2idx.items():
        if i < MAX_VOCAB_SIZE:
            embedding_vector = word2vec.get(word)
            if embedding_vector is not None:
                # words not found in embedding index will be all zeros
                embedding_matrix[i] = embedding_vector
    

    In the tutorial, MAX_VOCAB_SIZE = 3000 and len(word2idx) + 1 = 4615. Shouldn't the second line be num_words = max(MAX_VOCAB_SIZE, len(word2idx) + 1)? If we take the min, we're essentially dropping some words from the embedding matrix, when we should be considering every word in word2idx. What is the point of num_words here if we can just create an embedding matrix of size (len(word2idx) + 1) x EMBEDDING_DIM?

    opened by Nishant-Pall 1
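    For context on why min() is used there: Keras' Tokenizer keeps every word in word_index even when num_words is set, but texts_to_sequences only emits indices below num_words, so embedding rows at or above MAX_VOCAB_SIZE would never be looked up. A small sketch of that behaviour, assuming the tutorial used Tokenizer(num_words=MAX_VOCAB_SIZE):

    from tensorflow.keras.preprocessing.text import Tokenizer

    MAX_VOCAB_SIZE = 5
    tokenizer = Tokenizer(num_words=MAX_VOCAB_SIZE)
    tokenizer.fit_on_texts(["the cat sat on the mat", "the dog ate my homework"])

    print(len(tokenizer.word_index))                      # counts every distinct word seen
    print(tokenizer.texts_to_sequences(["the dog sat"]))  # only indices < MAX_VOCAB_SIZE survive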
  • Udemy Course

    Udemy Course

    I just want to point out that your course "PyTorch: Deep Learning and Artificial Intelligence" is very expensive on Udemy. I like your lectures and have learned a lot from your previous courses. Thank you!!

    opened by shaziaj 1
  • Error: 'the path of my train.npy file' is not UTF-8 format.

    Error: 'the path of my train.npy file' is not UTF-8 format.

    I am trying to run your linear_rl_trader.py file, but there is a problem with the train.npy file. The program runs, but the result cannot be saved.

    Error: 'the path of my train.npy file' is not UTF-8 format

    Do you know how I can solve this problem? I added encoding='utf-8' in the following line, but it still didn't solve it.

    np.save(f'{rewards_folder}/{args.mode}.npy', portfolio_value)

    opened by ghost 1
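    For reference, np.save does not take an encoding argument, so adding encoding='utf-8' has no effect there. A more common cause of the save step failing is that the rewards folder does not exist yet, or the path contains characters the filesystem cannot handle. A minimal sketch that creates the folder first and keeps the path plain ASCII (the folder name and data below are assumptions, not the script's real values):

    import os
    import numpy as np

    rewards_folder = 'linear_rl_trader_rewards'       # assumed name; use whatever the script defines
    mode = 'train'                                    # stands in for args.mode
    portfolio_value = np.array([100.0, 101.5, 99.8])  # dummy data standing in for the real list

    os.makedirs(rewards_folder, exist_ok=True)        # make sure the folder exists before saving
    np.save('{}/{}.npy'.format(rewards_folder, mode), portfolio_value)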
  • ann_class2/tensorflow1.py - migrate from tensorflow version 1 to version 2

    ann_class2/tensorflow1.py - migrate from tensorflow version 1 to version 2

    I'm following the "how to install" guide of the data-science-linear-regression-in-python course.

    The code for verifying that tensorflow is working correctly didn't work for me, so I investigated how to fix it. Here is the error I was experiencing:

    PROMPT> python3 tensorflow1.py 
    Traceback (most recent call last):
      File "tensorflow1.py", line 20, in <module>
        A = tf.placeholder(tf.float32, shape=(5, 5), name='A')
    AttributeError: module 'tensorflow' has no attribute 'placeholder'
    

    On TF's website there is a migration guide.

    Change from

    import tensorflow as tf
    

    Change to

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
    

    This solved it for me.

    opened by neoneye 1
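    The compat.v1 shim above works; alternatively, if the goal is just to verify that TensorFlow 2 is installed and working, the same kind of matrix computation can be run eagerly with no placeholders or sessions. A rough TF2-native sketch (assuming the original check multiplies a 5x5 matrix by a 5x1 vector; this is not the course's official test script):

    import numpy as np
    import tensorflow as tf

    # eager TF2 equivalent of the placeholder/session check
    A = tf.constant(np.random.randn(5, 5), dtype=tf.float32, name='A')
    v = tf.constant(np.random.randn(5, 1), dtype=tf.float32, name='v')

    w = tf.matmul(A, v)   # matrix-vector product
    print(w.numpy())      # should print a (5, 1) array if TensorFlow is working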
  • Why do we need a manual for loop when implementing Attention?

    Why do we need a manual for loop when implementing Attention?

    Please bear with me here.

    This might be confusing for some, so I'm adding pseudocode to illustrate what's unclear to me. I've been following a tutorial, and it was mentioned that we need to define a for loop over the target sequence, given that we're doing machine translation with an attention mechanism built on LSTMs.

    I've written it as something that looks roughly like Keras.

    This is the pseudocode:

    
    h = encoder(input)        # encoder outputs, used for calculating attention weights and context vectors
    decoder_hidden_state = 0  # hidden state
    decoder_cell_state = 0    # cell state
    outputs = []

    for t in range(Ty):  # Ty being the length of the target sequence
        context = calc_attention(decoder_hidden_state, h)  # uses decoder_hidden_state(t-1) and h(1), ..., h(Tx)
        decoder_output, decoder_hidden_state, decoder_cell_state = decoder_lstm(context, init=[decoder_hidden_state, decoder_cell_state])
        probabilities = dense_layer(decoder_output)
        outputs.append(probabilities)

    model = Model(input, outputs)
    

    The thing that is unclear to me is why we are using a for loop. It was said that "In a regular seq2seq, we pass in the entire target input sequence all at once because the output was calculated all at once. But we need a loop over Ty steps since each context depends on the previous state."

    But I think the same can be done in the case of attention if I just remove the for loop, just like in the code below, which is the decoder part of a normal seq2seq:

    from tensorflow.keras.layers import Input, Embedding, LSTM  # imports assumed; match your course file

    decoder_inputs_placeholder = Input(shape=(max_len_target,))
    decoder_embedding = Embedding(num_words_output, EMBEDDING_DIM)
    decoder_inputs_x = decoder_embedding(decoder_inputs_placeholder)
    decoder_lstm = LSTM(
        LATENT_DIM,
        return_sequences=True,
        return_state=True,
    )
    

    If I want to add attention, can't I just define the states here and call the calc_attention function, which would return the context for a particular timestep while decoding and could be passed into the LSTM call, just as done in the pseudocode above?

    
    decoder_outputs, decoder_hidden_state, decoder_cell_state = decoder_lstm(
       decoder_inputs_x,
       initial_state=[decoder_hidden_state, decoder_cell_state]
    )
    decoder_outputs = decoder_dense(decoder_outputs)
    
    opened by Nishant-Pall 1
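    For what it's worth, the reason the loop cannot simply be removed is a data dependency: the context at step t is computed from the decoder state produced at step t-1, so the Ty contexts cannot all be formed up front the way a plain seq2seq decoder consumes its whole target input at once. A toy numpy sketch of that dependency chain (dummy functions, not the real attention layer):

    import numpy as np

    Tx, Ty, d = 4, 3, 5
    h = np.random.randn(Tx, d)   # encoder outputs h(1), ..., h(Tx)
    s = np.zeros(d)              # decoder hidden state s(t-1), starts at zero

    def calc_attention(s_prev, h):
        # dummy attention: the weights depend on the *previous* decoder state
        scores = h @ s_prev
        weights = np.exp(scores) / np.sum(np.exp(scores))
        return weights @ h       # context vector for this step

    contexts = []
    for t in range(Ty):
        context = calc_attention(s, h)  # needs the s produced in the previous iteration
        s = np.tanh(context + s)        # dummy decoder step producing the next state
        contexts.append(context)

    Because each iteration reads the state written by the one before it, the Ty context vectors cannot be computed in a single call, which is exactly the constraint the quoted explanation refers to.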
  • docs: fix simple typo, distrbution -> distribution

    docs: fix simple typo, distrbution -> distribution

    There is a small typo in svm_class/svm_gradient.py.

    Should read distribution rather than distrbution.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • warning with saving the model

    warning with saving the model

    It appears that the accuracy of the model is very good, but after saving the model and loading it again, it gives unreasonable results. The model is not being saved correctly.

    opened by HoudaKilani 0
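    A common cause of this symptom (good accuracy before saving, poor results after loading), assuming a Keras model, is saving only the weights or applying preprocessing differently at load time; any scaler or tokenizer fitted during training has to be saved and reused as well. A minimal sketch checking that a full save/load round-trip reproduces the same predictions (toy model, not the course's):

    import numpy as np
    import tensorflow as tf

    # toy model standing in for whichever model is being saved
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='adam', loss='mse')

    X = np.random.randn(8, 4).astype('float32')
    model.fit(X, X.sum(axis=1), epochs=1, verbose=0)

    model.save('my_model.h5')  # saves architecture + weights + optimizer state in one file
    restored = tf.keras.models.load_model('my_model.h5')

    # if this prints True, the save/load step itself is fine and the issue is elsewhere
    print(np.allclose(model.predict(X), restored.predict(X)))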