Machine Learning From Scratch. Bare-bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.

Overview

Machine Learning From Scratch

About

Python implementations of some of the fundamental Machine Learning models and algorithms from scratch.

The purpose of this project is not to produce algorithms that are as optimized and computationally efficient as possible, but rather to present their inner workings in a transparent and accessible way.

Table of Contents

Installation

$ git clone https://github.com/eriklindernoren/ML-From-Scratch
$ cd ML-From-Scratch
$ python setup.py install

Examples

Polynomial Regression

$ python mlfromscratch/examples/polynomial_regression.py

Figure: Training progress of a regularized polynomial regression model fitting
temperature data measured in Linköping, Sweden 2016.
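
For readers who just want the gist, here is a minimal sketch of regularized polynomial regression in plain NumPy. The degree, regularization factor, and names are illustrative assumptions, not the example's exact code:

    import numpy as np

    def polynomial_features(X, degree):
        # Expand a 1-D input into the powers x^0 ... x^degree
        return np.column_stack([X ** d for d in range(degree + 1)])

    def fit_polynomial_ridge(X, y, degree=15, reg_factor=0.1):
        # Closed-form ridge solution: w = (Phi^T Phi + lambda * I)^-1 Phi^T y
        Phi = polynomial_features(X, degree)
        identity = np.eye(Phi.shape[1])
        return np.linalg.solve(Phi.T @ Phi + reg_factor * identity, Phi.T @ y)

    # Predict with: y_pred = polynomial_features(X_new, degree) @ w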

Classification With CNN

$ python mlfromscratch/examples/convolutional_neural_network.py

+---------+
| ConvNet |
+---------+
Input Shape: (1, 8, 8)
+----------------------+------------+--------------+
| Layer Type           | Parameters | Output Shape |
+----------------------+------------+--------------+
| Conv2D               | 160        | (16, 8, 8)   |
| Activation (ReLU)    | 0          | (16, 8, 8)   |
| Dropout              | 0          | (16, 8, 8)   |
| BatchNormalization   | 2048       | (16, 8, 8)   |
| Conv2D               | 4640       | (32, 8, 8)   |
| Activation (ReLU)    | 0          | (32, 8, 8)   |
| Dropout              | 0          | (32, 8, 8)   |
| BatchNormalization   | 4096       | (32, 8, 8)   |
| Flatten              | 0          | (2048,)      |
| Dense                | 524544     | (256,)       |
| Activation (ReLU)    | 0          | (256,)       |
| Dropout              | 0          | (256,)       |
| BatchNormalization   | 512        | (256,)       |
| Dense                | 2570       | (10,)        |
| Activation (Softmax) | 0          | (10,)        |
+----------------------+------------+--------------+
Total Parameters: 538570

Training: 100% [------------------------------------------------------------------------] Time: 0:01:55
Accuracy: 0.987465181058

Figure: Classification of the digit dataset using a CNN.
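
For reference, a sketch of how a network like the one summarized above might be assembled with the repo's layer classes. The constructor arguments are assumptions inferred from the summary table, not verified signatures:

    from mlfromscratch.deep_learning import NeuralNetwork
    from mlfromscratch.deep_learning.layers import (Conv2D, Activation, Dropout,
                                                    BatchNormalization, Flatten, Dense)
    from mlfromscratch.deep_learning.optimizers import Adam
    from mlfromscratch.deep_learning.loss_functions import CrossEntropy

    clf = NeuralNetwork(optimizer=Adam(), loss=CrossEntropy)
    clf.add(Conv2D(16, filter_shape=(3, 3), input_shape=(1, 8, 8), padding='same'))
    clf.add(Activation('relu'))
    clf.add(Dropout(0.25))
    clf.add(BatchNormalization())
    clf.add(Conv2D(32, filter_shape=(3, 3), padding='same'))
    clf.add(Activation('relu'))
    clf.add(Dropout(0.25))
    clf.add(BatchNormalization())
    clf.add(Flatten())
    clf.add(Dense(256))
    clf.add(Activation('relu'))
    clf.add(Dropout(0.4))
    clf.add(BatchNormalization())
    clf.add(Dense(10))
    clf.add(Activation('softmax'))
    clf.summary(name="ConvNet")   # prints a table like the one above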

Density-Based Clustering

$ python mlfromscratch/examples/dbscan.py

Figure: Clustering of the moons dataset using DBSCAN.

Generating Handwritten Digits

$ python mlfromscratch/unsupervised_learning/generative_adversarial_network.py

+-----------+
| Generator |
+-----------+
Input Shape: (100,)
+------------------------+------------+--------------+
| Layer Type             | Parameters | Output Shape |
+------------------------+------------+--------------+
| Dense                  | 25856      | (256,)       |
| Activation (LeakyReLU) | 0          | (256,)       |
| BatchNormalization     | 512        | (256,)       |
| Dense                  | 131584     | (512,)       |
| Activation (LeakyReLU) | 0          | (512,)       |
| BatchNormalization     | 1024       | (512,)       |
| Dense                  | 525312     | (1024,)      |
| Activation (LeakyReLU) | 0          | (1024,)      |
| BatchNormalization     | 2048       | (1024,)      |
| Dense                  | 803600     | (784,)       |
| Activation (TanH)      | 0          | (784,)       |
+------------------------+------------+--------------+
Total Parameters: 1489936

+---------------+
| Discriminator |
+---------------+
Input Shape: (784,)
+------------------------+------------+--------------+
| Layer Type             | Parameters | Output Shape |
+------------------------+------------+--------------+
| Dense                  | 401920     | (512,)       |
| Activation (LeakyReLU) | 0          | (512,)       |
| Dropout                | 0          | (512,)       |
| Dense                  | 131328     | (256,)       |
| Activation (LeakyReLU) | 0          | (256,)       |
| Dropout                | 0          | (256,)       |
| Dense                  | 514        | (2,)         |
| Activation (Softmax)   | 0          | (2,)         |
+------------------------+------------+--------------+
Total Parameters: 533762

Figure: Training progress of a Generative Adversarial Network generating
handwritten digits.
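
The heart of the example is the adversarial training step. A condensed sketch, with method names taken from the summaries above but signatures and label encoding assumed:

    import numpy as np

    def train_step(gan, X_train, batch_size=64, latent_dim=100):
        # Train the discriminator on half real, half generated images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        noise = np.random.normal(0, 1, (batch_size, latent_dim))
        fake = gan.generator.predict(noise)
        valid = np.ones((batch_size, 1))
        gan.discriminator.train_on_batch(X_train[idx], valid)
        gan.discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))
        # Train the generator to make the (frozen) discriminator say "valid"
        gan.combined.train_on_batch(noise, valid)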

Deep Reinforcement Learning

$ python mlfromscratch/examples/deep_q_network.py

+----------------+
| Deep Q-Network |
+----------------+
Input Shape: (4,)
+-------------------+------------+--------------+
| Layer Type        | Parameters | Output Shape |
+-------------------+------------+--------------+
| Dense             | 320        | (64,)        |
| Activation (ReLU) | 0          | (64,)        |
| Dense             | 130        | (2,)         |
+-------------------+------------+--------------+
Total Parameters: 450

Figure: Deep Q-Network solution to the CartPole-v1 environment in OpenAI gym.
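
The learning rule behind the example is the standard Q-learning regression target. A minimal sketch under assumed names:

    import numpy as np

    def q_targets(model, states, actions, rewards, next_states, dones, discount=0.9):
        # Regress Q(s, a) toward r + gamma * max_a' Q(s', a') for each transition
        targets = model.predict(states)                      # (batch, n_actions)
        next_q = np.max(model.predict(next_states), axis=1)  # bootstrap values
        for i, a in enumerate(actions):
            # Terminal transitions take only the immediate reward
            targets[i, a] = rewards[i] + (0.0 if dones[i] else discount * next_q[i])
        return targets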

Image Reconstruction With RBM

$ python mlfromscratch/examples/restricted_boltzmann_machine.py

Figure: The network's reconstructions of the digit 2 from the MNIST dataset
improve over the course of training.
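
The training rule the example relies on is contrastive divergence. A bias-free CD-1 sketch (variable names are assumptions):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(W, v0, learning_rate=0.1):
        # Positive phase: infer hidden units from the data
        h0 = sigmoid(v0 @ W)
        h0_sample = (np.random.rand(*h0.shape) < h0).astype(float)
        # Negative phase: reconstruct the visible units, then re-infer hidden
        v1 = sigmoid(h0_sample @ W.T)
        h1 = sigmoid(v1 @ W)
        # Update weights with positive minus negative statistics
        W += learning_rate * (v0.T @ h0 - v1.T @ h1)
        return W, v1   # v1 is the reconstruction shown in the figure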

Evolutionarily Evolved Neural Network

$ python mlfromscratch/examples/neuroevolution.py

+---------------+
| Model Summary |
+---------------+
Input Shape: (64,)
+----------------------+------------+--------------+
| Layer Type           | Parameters | Output Shape |
+----------------------+------------+--------------+
| Dense                | 1040       | (16,)        |
| Activation (ReLU)    | 0          | (16,)        |
| Dense                | 170        | (10,)        |
| Activation (Softmax) | 0          | (10,)        |
+----------------------+------------+--------------+
Total Parameters: 1210

Population Size: 100
Generations: 3000
Mutation Rate: 0.01

[0 Best Individual - Fitness: 3.08301, Accuracy: 10.5%]
[1 Best Individual - Fitness: 3.08746, Accuracy: 12.0%]
...
[2999 Best Individual - Fitness: 94.08513, Accuracy: 98.5%]
Test set accuracy: 96.7%

Figure: Classification of the digit dataset by a neural network evolved
through neuroevolution.
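
One generation of the loop, sketched over flattened weight vectors; names and proportions are illustrative assumptions, not the example's exact code:

    import numpy as np

    def evolve(population, fitnesses, mutation_rate=0.01):
        order = np.argsort(fitnesses)[::-1]                    # fittest first
        parents = [population[i] for i in order[:len(population) // 2]]
        children = []
        while len(children) < len(population):
            i, j = np.random.choice(len(parents), 2, replace=False)
            cut = np.random.randint(1, parents[i].size)        # single-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            mask = np.random.rand(child.size) < mutation_rate  # per-gene mutation
            child[mask] += np.random.normal(0, 1, mask.sum())
            children.append(child)
        return children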

Genetic Algorithm

$ python mlfromscratch/examples/genetic_algorithm.py

+--------+
|   GA   |
+--------+
Description: Implementation of a Genetic Algorithm which aims to produce
the user-specified target string. This implementation calculates each
candidate's fitness based on the alphabetical distance between the candidate
and the target. A candidate is selected as a parent with a probability proportional
to its fitness. Reproduction is implemented as single-point
crossover between pairs of parents. Mutation is done by randomly assigning
new characters with uniform probability.

Parameters
----------
Target String: 'Genetic Algorithm'
Population Size: 100
Mutation Rate: 0.05

[0 Closest Candidate: 'CJqlJguPlqzvpoJmb', Fitness: 0.00]
[1 Closest Candidate: 'MCxZxdr nlfiwwGEk', Fitness: 0.01]
[2 Closest Candidate: 'MCxZxdm nlfiwwGcx', Fitness: 0.01]
[3 Closest Candidate: 'SmdsAklMHn kBIwKn', Fitness: 0.01]
[4 Closest Candidate: '  lotneaJOasWfu Z', Fitness: 0.01]
...
[292 Closest Candidate: 'GeneticaAlgorithm', Fitness: 1.00]
[293 Closest Candidate: 'GeneticaAlgorithm', Fitness: 1.00]
[294 Answer: 'Genetic Algorithm']
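
The three operators from the description, condensed into a sketch (the example's exact fitness scaling may differ):

    import random
    import string

    TARGET = 'Genetic Algorithm'
    LETTERS = string.ascii_letters + ' '

    def fitness(candidate):
        # Alphabetical distance to the target; closer strings score higher
        distance = sum(abs(ord(a) - ord(b)) for a, b in zip(candidate, TARGET))
        return 1.0 / (distance + 1)

    def crossover(parent1, parent2):
        # Single-point crossover between a pair of parents
        cut = random.randint(0, len(TARGET))
        return parent1[:cut] + parent2[cut:]

    def mutate(candidate, rate=0.05):
        # Each character is replaced by a uniformly random one with probability rate
        return ''.join(random.choice(LETTERS) if random.random() < rate else c
                       for c in candidate)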

Association Analysis

$ python mlfromscratch/examples/apriori.py
+-------------+
|   Apriori   |
+-------------+
Minimum Support: 0.25
Minimum Confidence: 0.8
Transactions:
    [1, 2, 3, 4]
    [1, 2, 4]
    [1, 2]
    [2, 3, 4]
    [2, 3]
    [3, 4]
    [2, 4]
Frequent Itemsets:
    [1, 2, 3, 4, [1, 2], [1, 4], [2, 3], [2, 4], [3, 4], [1, 2, 4], [2, 3, 4]]
Rules:
    1 -> 2 (support: 0.43, confidence: 1.0)
    4 -> 2 (support: 0.57, confidence: 0.8)
    [1, 4] -> 2 (support: 0.29, confidence: 1.0)
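
The printed numbers follow directly from the definitions of support and confidence. A minimal check of the first rule:

    transactions = [[1, 2, 3, 4], [1, 2, 4], [1, 2], [2, 3, 4], [2, 3], [3, 4], [2, 4]]

    def support(itemset):
        # Fraction of transactions that contain every item in the itemset
        return sum(all(i in t for i in itemset) for t in transactions) / len(transactions)

    sup = support([1, 2])            # 3/7 ~ 0.43
    conf = sup / support([1])        # sup({1, 2}) / sup({1}) = 1.0
    print("1 -> 2 (support: %.2f, confidence: %.1f)" % (sup, conf))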

Implementations

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Deep Learning

Contact

If there's some implementation you would like to see here or if you're just feeling social, feel free to email me or connect with me on LinkedIn.

Comments
  • Genetic algorithm mutation rate

    Issue with the Genetic Algorithm commit under unsupervised learning.

    The approach taken to triggering mutation is a bit unrealistic: the mutation rate provided by the user isn't treated as a rate in its original sense, but is instead compared directly with NumPy's random value, which makes it ineffective even when a high mutation rate is set.
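
    To make the concern concrete, a hypothetical contrast (all names assumed; this is not the repo's code): the rate should act as a per-gene probability rather than one comparison per individual.

        import numpy as np

        genome = np.random.rand(17)
        mutation_rate = 0.05
        # Per-gene: each position mutates independently with probability = rate
        mask = np.random.rand(genome.size) < mutation_rate
        genome[mask] = np.random.rand(mask.sum())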

    opened by 0xskywalker 7
  • ImportError: when trying example

    Terminal Session:

    $ python mlfromscratch/examples/polynomial_regression.py
    Traceback (most recent call last):
      File "mlfromscratch/examples/polynomial_regression.py", line 6, in <module>
        from mlfromscratch.supervised_learning import PolynomialRidgeRegression
      File "C:\users\aashutosh rathi\appdata\local\programs\python\python36\lib\site-packages\mlfromscratch-0.0.4-py3.6.egg\mlfromscratch\supervised_learning\__init__.py", line 14, in <module>
      File "C:\users\aashutosh rathi\appdata\local\programs\python\python36\lib\site-packages\mlfromscratch-0.0.4-py3.6.egg\mlfromscratch\supervised_learning\support_vector_machine.py", line 4, in <module>
    
      File "C:\users\aashutosh rathi\appdata\local\programs\python\python36\lib\site-packages\cvxopt-1.2.0-py3.6-win-amd64.egg\cvxopt\__init__.py", line 50, in <module>
        import cvxopt.base
    ImportError: DLL load failed: The specified module could not be found.
    
    opened by aashutoshrathi 5
  • Why are the classifiers the same in AdaBoost?

    I have the following questions about the AdaBoost implementation:

    • Why are the classifiers the same in the end?
    • Why concatenate y_pred in the predict function?

    Though the classification accuracy is fine, I think the implementation is wrong.

    opened by whcn 5
  • Demo.py AttributeError: no attribute 'solvers'

    Traceback (most recent call last):
      File "demo.py", line 23, in <module>
        from support_vector_machine import SupportVectorMachine
      File "/media/test/V2/lab/hack/ML-From-Scratch/supervised_learning/support_vector_machine.py", line 20, in <module>
        cvxopt.solvers.options['show_progress'] = False
    AttributeError: 'module' object has no attribute 'solvers'
    
    opened by indrajithi 3
  • Missing random feature selection in Random Forest construction.

    As far as I understand, when constructing each tree in a random forest it is necessary to choose a random subset of features at "each" node of the tree, so a plain DecisionTree class can't be reused.

    1. Sample a random subset of the training data. (bagging)
    2. At each node of the tree sample a random subset of the original features and select the best.

    What I observed in this implementation is that you are reusing the DecisionTree class and randomly sampling the features only once for the whole tree:

    1. Sample a random subset of the training data. (bagging)
    2. Sample a random subset of the features of the sampled data.
    3. Feed the data into a DecisionTree.

    I would like to know whether this is a legitimate implementation and where I could find more information about this implementation option.

    By the way, your work is awesome and is helping students like me deeply understand machine learning algorithms.
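
    For reference, per-node feature subsampling might look like this inside the split search (a sketch under assumed names, not the repo's code):

        import numpy as np

        def variance_reduction(X, y, feature_i, threshold):
            left, right = y[X[:, feature_i] < threshold], y[X[:, feature_i] >= threshold]
            if len(left) == 0 or len(right) == 0:
                return -np.inf
            # Weighted variance reduction, a simple regression criterion
            return np.var(y) - (len(left) * np.var(left) + len(right) * np.var(right)) / len(y)

        def best_split(X, y, max_features):
            # Resample the candidate features at *every* node, not once per tree
            candidates = np.random.choice(X.shape[1], max_features, replace=False)
            best = None
            for fi in candidates:
                for threshold in np.unique(X[:, fi]):
                    score = variance_reduction(X, y, fi, threshold)
                    if best is None or score > best[0]:
                        best = (score, fi, threshold)
            return best   # (score, feature index, threshold)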

    opened by cesar0205 2
  • An error is raised when running dcgan.py

    I ran python mlfromscratch/unsupervised_learning/dcgan.py, and the output looks like this:

    33 [D loss: 0.029268, acc: 100.00%] [G loss: 5.641379, acc: 0.00%]
    34 [D loss: 0.024022, acc: 100.00%] [G loss: 5.165160, acc: 0.00%]
    35 [D loss: 0.043149, acc: 96.88%] [G loss: 5.387241, acc: 0.00%]
    36 [D loss: 0.017758, acc: 100.00%] [G loss: 5.255211, acc: 0.00%]
    python2.7/lib/python2.7/site-packages/mlfromscratch-0.0.4-py2.7.egg/mlfromscratch/utils/layers.py:486: RuntimeWarning: invalid value encountered in multiply
    python2.7/lib/python2.7/site-packages/mlfromscratch-0.0.4-py2.7.egg/mlfromscratch/utils/activation_functions.py:54: RuntimeWarning: invalid value encountered in greater_equal
    python2.7/lib/python2.7/site-packages/mlfromscratch-0.0.4-py2.7.egg/mlfromscratch/utils/activation_functions.py:57: RuntimeWarning: invalid value encountered in greater_equal
    37 [D loss: 0.020248, acc: 100.00%] [G loss: nan, acc: 100.00%]
    38 [D loss: nan, acc: 50.00%] [G loss: nan, acc: 100.00%]
    39 [D loss: nan, acc: 50.00%] [G loss: nan, acc: 100.00%]
    40 [D loss: nan, acc: 50.00%] [G loss: nan, acc: 100.00%]
    ...
    

    Thank you.

    opened by ljch2018 2
  • Apriori - Zero Division Error

    After adding the s to subsets (see the previous issue) I get the list of frequent itemsets but no rules. I intentionally set my support very low, roughly 2%, to see if I could get a 2-itemset antecedent. This is not an issue with greater support.

    ZeroDivisionError                         Traceback (most recent call last)
    in <module>()
         17
         18 # Get and print the rules
    ---> 19 rules = apriori.generate_rules(transactions)
         20 print ("Rules:")
         21 for rule in rules:

    in generate_rules(self, transactions)
        167     rules = []
        168     for itemset in frequent_itemsets:
    --> 169         rules += self._rules_from_itemset(itemset, itemset)
        170     return rules

    in _rules_from_itemset(self, initial_itemset, itemset)
        156     # recursively add rules from subsets
        157     if k - 1 > 1:
    --> 158         rules.append(self._rules_from_itemset(initial_itemset, subsets))
        159     return rules
        160

    in _rules_from_itemset(self, initial_itemset, itemset)
        136     # Calculate the confidence as sup(A and B) / sup(B), if antecedent
        137     # is B in an itemset of A and B
    --> 138     confidence = float("{0:.2f}".format(support / antecedent_support))
        139     if confidence >= self.min_conf:
        140         # The concequent is the initial_itemset except for antecedent

    ZeroDivisionError: float division by zero
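
    A hypothetical guard for the failing line (variable names taken from the traceback above, not a verified patch):

        # Skip antecedents that never occur instead of dividing by zero
        if antecedent_support == 0:
            continue
        confidence = float("{0:.2f}".format(support / antecedent_support))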

    opened by zpencerguy 2
  • Apriori - Subset name error

    Is the name subset at line 158 missing an s (subsets)?

    NameError                                 Traceback (most recent call last)
    in <module>()
         17
         18 # Get and print the rules
    ---> 19 rules = apriori.generate_rules(transactions)
         20 print ("Rules:")
         21 for rule in rules:

    in generate_rules(self, transactions)
        167     rules = []
        168     for itemset in frequent_itemsets:
    --> 169         rules += self._rules_from_itemset(itemset, itemset)
        170     return rules

    in _rules_from_itemset(self, initial_itemset, itemset)
        156     # recursively add rules from subsets
        157     if k - 1 > 1:
    --> 158         rules.append(self._rules_from_itemset(initial_itemset, subset))
        159     return rules

    NameError: name 'subset' is not defined

    opened by zpencerguy 2
  • transposed V from svd in LinearRegression

    There's a mistake in the LinearRegression module.

    You use the Moore-Penrose pseudoinverse on the values returned from np.linalg.svd().

    However, the values returned by NumPy's svd are not the ones produced by the traditional SVD.

    The classic factorization is A = U @ S @ V.T, but NumPy returns matrices such that A = U @ S @ V.

    Basically, the V returned by NumPy's SVD is actually V.T.

    However, when you implement the pseudoinverse you use this line:

    X_sq_reg_inv = V.dot(np.linalg.pinv(S)).dot(U.T)

    That's the textbook form of the SVD-based pseudoinverse, but you need to transpose V here to get correct results.

    I can provide more detail to demonstrate this, but as it stands your LinearRegression isn't correct.
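
    The reported fix in isolation: np.linalg.svd already returns V transposed, so the pseudoinverse needs the extra transpose.

        import numpy as np

        X = np.random.rand(6, 3)
        U, s, V = np.linalg.svd(X, full_matrices=False)   # V here is really V.T
        X_pinv = V.T @ np.diag(1 / s) @ U.T
        assert np.allclose(X_pinv, np.linalg.pinv(X))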

    opened by JonathanBechtel 1
  • Init using np.zeros instead of np.empty to avoid overflow

    • Using np.zeros instead of np.empty for initialization
    • Resolve random overflow error:
    Python/2.7/lib/python/site-packages/ipykernel_launcher.py:5: RuntimeWarning: overflow encountered in square
    
    opened by DandilionLau 1
  • The gradient in gradient descent is not correct.

    Since the gradient was derived from the MSE, it should be divided by the number of training examples; however, this factor is omitted in the code. That makes the learning rate dependent on the number of training examples: the more training examples you have, the bigger the gradient will be. The learning rate should not depend on the number of training examples.

    https://github.com/eriklindernoren/ML-From-Scratch/blob/f078fc384e3188922431e6747eefaa1561f361c4/mlfromscratch/supervised_learning/regression.py#L76
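
    The distinction in isolation: for MSE = (1/n) * sum((y - Xw)^2), the gradient has to carry the 1/n factor, otherwise it grows with the dataset:

        import numpy as np

        def mse_gradient(X, y, w):
            n_samples = X.shape[0]
            return -2.0 / n_samples * X.T @ (y - X @ w)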

    opened by dschaehi 1
  • Bug fix, use L1-norm instead of L2-norm for L1 regularization

    The default norm for np.linalg.norm is the Frobenius norm or L2-norm. This needs to change to the L1 norm (Manhattan) to be correct.

    PS. This project seems somewhat abandoned, but I think it makes sense to have a PR up for other people looking for things that might be incorrect. I spent some time trying to figure this one out :P
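
    The one-line distinction, for anyone verifying the fix:

        import numpy as np

        w = np.array([3.0, -4.0])
        np.linalg.norm(w)       # 5.0 -> L2 (Euclidean) norm, the default
        np.linalg.norm(w, 1)    # 7.0 -> L1 (Manhattan) norm, what L1 regularization needs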

    opened by klintan 0
  • Project dependencies may have API risk issues

    Hi. In ML-From-Scratch, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using:

    matplotlib
    numpy
    sklearn
    pandas
    cvxopt
    scipy
    progressbar33
    terminaltables
    gym
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. The constraints "no upper bound" and * introduce a risk of missing-API errors, because the latest versions of the dependencies may remove some APIs.

    After further analysis, the version constraint of the dependency numpy can be changed to >=1.8.0,<=1.23.0rc3, and that of pandas to >=0.4.0,<=1.2.5.

    The above modifications reduce dependency conflicts as much as possible while introducing the latest versions that do not raise errors in the project.

    The invocation of the current project includes the following methods.

    The calling methods from the numpy
    numpy.linalg.eigh
    numpy.linalg.eig
    numpy.linalg.svd
    numpy.linalg.norm
    numpy.linalg.det
    numpy.linalg.inv
    numpy.linalg.pinv
    
    The calling methods from the pandas
    pandas.read_csv
    
    The calling methods from all the other methods
    (a long machine-generated list of several hundred internal calls, omitted)
    

    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • (dbscan) Use valid index for neighbors

    The DBSCAN implementation was producing garbage results for me because the _get_neighbors function was returning the wrong indices for valid neighbors.

    When comparing against a sample with a higher index than the argument, the returned indices were off by one.

    This change fixes it.
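
    For context, an index-safe neighbor lookup might look like this (a sketch under assumed names, not the PR's exact diff): iterate over the full dataset and skip the query sample, so returned indices always refer to the original array.

        import numpy as np

        def get_neighbors(X, sample_i, eps):
            neighbors = []
            for i in range(X.shape[0]):
                if i != sample_i and np.linalg.norm(X[i] - X[sample_i]) < eps:
                    neighbors.append(i)   # index into X itself, never shifted
            return np.array(neighbors)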

    PS thanks for this implementation! It's a better fit for my use-case, where speed is not important but avoiding scipy is very convenient.

    opened by ezheidtmann 0
  • No lambda, gamma for XGBoostRegressionTree?

    I am new to XGBoost and was trying to study it through your code. I found that lambda and gamma (min_split_loss) are missing; these values should be considered in the functions _gain_by_taylor and _approximate_update. I also think some important features, such as pruning, are missing. I didn't look at the code thoroughly, so I could be wrong.
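
    For reference, the regularized split gain from the XGBoost paper that the issue refers to, where G and H are the sums of first- and second-order gradients on each side of the split:

        def split_gain(G_l, H_l, G_r, H_r, lambd=1.0, gamma=0.0):
            def score(G, H):
                return G ** 2 / (H + lambd)
            # Gain of the split minus the complexity cost gamma of the new leaf
            return 0.5 * (score(G_l, H_l) + score(G_r, H_r)
                          - score(G_l + G_r, H_l + H_r)) - gamma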

    opened by jinwoolim8180 0