Machine learning, in numpy

Overview

numpy-ml

Ever wish you had an inefficient but somewhat legible collection of machine learning algorithms implemented exclusively in NumPy? No?

Installation

For rapid experimentation

To use this code as a starting point for ML prototyping / experimentation, just clone the repository, create a new virtualenv, and start hacking:

$ git clone https://github.com/ddbourgin/numpy-ml.git
$ cd numpy-ml && virtualenv npml && source npml/bin/activate
$ pip3 install -r requirements-dev.txt

As a package

If you don't plan to modify the source, you can also install numpy-ml as a Python package: pip3 install -U numpy_ml.

The reinforcement learning agents train on environments defined in the OpenAI gym. To install these alongside numpy-ml, you can use pip3 install -U 'numpy_ml[rl]'.

Documentation

For more details on the available models, see the project documentation.

Available models

  1. Gaussian mixture model

    • EM training
  2. Hidden Markov model

    • Viterbi decoding
    • Likelihood computation
    • MLE parameter estimation via Baum-Welch/forward-backward algorithm
  3. Latent Dirichlet allocation (topic model)

    • Standard model with MLE parameter estimation via variational EM
    • Smoothed model with MAP parameter estimation via MCMC
  4. Neural networks

    • Layers / Layer-wise ops
      • Add
      • Flatten
      • Multiply
      • Softmax
      • Fully-connected/Dense
      • Sparse evolutionary connections
      • LSTM
      • Elman-style RNN
      • Max + average pooling
      • Dot-product attention
      • Embedding layer
      • Restricted Boltzmann machine (w. CD-n training)
      • 2D deconvolution (w. padding and stride)
      • 2D convolution (w. padding, dilation, and stride)
      • 1D convolution (w. padding, dilation, stride, and causality)
    • Modules
      • Bidirectional LSTM
      • ResNet-style residual blocks (identity and convolution)
      • WaveNet-style residual blocks with dilated causal convolutions
      • Transformer-style multi-headed scaled dot product attention
    • Regularizers
      • Dropout
    • Normalization
      • Batch normalization (spatial and temporal)
      • Layer normalization (spatial and temporal)
    • Optimizers
      • SGD w/ momentum
      • AdaGrad
      • RMSProp
      • Adam
    • Learning Rate Schedulers
      • Constant
      • Exponential
      • Noam/Transformer
      • Dlib scheduler
    • Weight Initializers
      • Glorot/Xavier uniform and normal
      • He/Kaiming uniform and normal
      • Standard and truncated normal
    • Losses
      • Cross entropy
      • Squared error
      • Bernoulli VAE loss
      • Wasserstein loss with gradient penalty
      • Noise contrastive estimation loss
    • Activations
      • ReLU
      • Tanh
      • Affine
      • Sigmoid
      • Leaky ReLU
      • ELU
      • SELU
      • Exponential
      • Hard Sigmoid
      • Softplus
    • Models
      • Bernoulli variational autoencoder
      • Wasserstein GAN with gradient penalty
      • word2vec encoder with skip-gram and CBOW architectures
    • Utilities
      • col2im (MATLAB port)
      • im2col (MATLAB port)
      • conv1D
      • conv2D
      • deconv2D
      • minibatch
  5. Tree-based models

    • Decision trees (CART)
    • [Bagging] Random forests
    • [Boosting] Gradient-boosted decision trees
  6. Linear models

    • Ridge regression
    • Logistic regression
    • Ordinary least squares
    • Bayesian linear regression w/ conjugate priors
      • Unknown mean, known variance (Gaussian prior)
      • Unknown mean, unknown variance (Normal-Gamma / Normal-Inverse-Wishart prior)
  7. n-Gram sequence models

    • Maximum likelihood scores
    • Additive/Lidstone smoothing
    • Simple Good-Turing smoothing
  8. Multi-armed bandit models

    • UCB1
    • LinUCB
    • Epsilon-greedy
    • Thompson sampling w/ conjugate priors
      • Beta-Bernoulli sampler
  9. Reinforcement learning models

    • Cross-entropy method agent
    • First visit on-policy Monte Carlo agent
    • Weighted incremental importance sampling Monte Carlo agent
    • Expected SARSA agent
    • TD-0 Q-learning agent
    • Dyna-Q / Dyna-Q+ with prioritized sweeping
  10. Nonparametric models

    • Nadaraya-Watson kernel regression
    • k-Nearest neighbors classification and regression
    • Gaussian process regression
  11. Matrix factorization

    • Regularized alternating least-squares
    • Non-negative matrix factorization
  12. Preprocessing

    • Discrete Fourier transform (1D signals)
    • Discrete cosine transform (type-II) (1D signals)
    • Bilinear interpolation (2D signals)
    • Nearest neighbor interpolation (1D and 2D signals)
    • Autocorrelation (1D signals)
    • Signal windowing
    • Text tokenization
    • Feature hashing
    • Feature standardization
    • One-hot encoding / decoding
    • Huffman coding / decoding
    • Term frequency-inverse document frequency (TF-IDF) encoding
    • MFCC encoding
  13. Utilities

    • Similarity kernels
    • Distance metrics
    • Priority queue
    • Ball tree
    • Discrete sampler
    • Graph processing and generators
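
A minimal usage sketch for one of the models above, assuming the import path shown in the documentation (check the docs for exact class and argument names):

    import numpy as np
    from numpy_ml.linear_models import LinearRegression

    # Fit ordinary least squares on synthetic data and predict
    X = np.random.rand(100, 5)
    y = X @ np.arange(1.0, 6.0) + 0.1 * np.random.randn(100)

    model = LinearRegression(fit_intercept=True)
    model.fit(X, y)
    y_pred = model.predict(X)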

Contributing

Am I missing your favorite model? Is there something that could be cleaner / less confusing? Did I mess something up? Submit a PR! The only requirement is that your models are written with just the Python standard library and NumPy. The SciPy library is also permitted under special circumstances ;)

See full contributing guidelines here.

Comments
  • A little bug!

    Hi, I think there is a little bug in numpy-ml/numpy_ml/neural_nets/activations/activations.py at line 64.

    Your code:

    fn_x = self.fn_x

    but self.fn_x is never defined; presumably it should be fn_x = self.fn(x).

    bug 
    opened by real-zhangzhe 8
  • Feature Request: Online Linear Regression

    Most linear regression implementations operate in batch mode, which means we have to retrain from scratch to obtain updated parameters after each new data point. This limits the usefulness of linear regression and makes real-time work difficult. I will implement an online formulation using ideas from the paper attached to this issue, based on the recursive least-squares (RLS) formulation (a sketch of the update appears below). I have implemented it in one of my past projects; if I get approval, I will clean it up and create a new PR.

    rls.pdf
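
    For reference, a minimal NumPy sketch of the matrix-inversion-lemma (Sherman-Morrison) RLS update described above; the class name is hypothetical, and regularization and intercept handling are omitted:

    import numpy as np

    class OnlineLinearRegression:
        def fit(self, X, y):
            # Batch OLS; P caches (X^T X)^{-1} for later O(d^2) updates.
            # Assumes X^T X is invertible.
            self.P = np.linalg.inv(X.T @ X)
            self.beta = self.P @ X.T @ y

        def update(self, x, y):
            # Sherman-Morrison rank-1 update for a single new sample (x, y)
            x = np.asarray(x).reshape(-1)
            Px = self.P @ x
            k = Px / (1.0 + x @ Px)              # gain vector
            self.beta = self.beta + k * (y - x @ self.beta)
            self.P = self.P - np.outer(k, Px)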

    model request 
    opened by kenluck2001 7
  • Naive Bayes

    • What I did

    Implemented a Gaussian naive Bayes class and basic documentation.

    • How I did it

    Followed the formula for Gaussian naive Bayes from the notes.

    • How to verify it

    Compare the predictions with GaussianNB from scikit-learn (a sketch of such a test follows below).
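
    For instance, a comparison test might look like this (the numpy-ml class name and import path are assumptions about this PR, not confirmed API):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    # Hypothetical import path / class name for the model added in this PR
    from numpy_ml.linear_models import GaussianNBClassifier

    X, y = load_iris(return_X_y=True)

    sk_preds = GaussianNB().fit(X, y).predict(X)

    np_model = GaussianNBClassifier()
    np_model.fit(X, y)
    np_preds = np_model.predict(X)

    # The two implementations should agree on (nearly) every point
    print(np.mean(np_preds == sk_preds))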

    opened by sfsf9797 6
  • Add the alpha dropout method

    Dear Dr. Bourgin, I wanted to contribute a polished implementation of this method, but unfortunately it may not work correctly (I have not tested it). May I ask for your help in finishing it, and earn the honor of being a contributor to this great project (5k+ stars)? Thank you! (A rough sketch of the method follows below.)
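
    For reference, a minimal NumPy sketch of alpha dropout, assuming the SELU constants from Klambauer et al. (2017); this is untested against the project's Dropout layer API:

    import numpy as np

    def alpha_dropout(X, drop_p=0.05, training=True):
        # Drops units to the SELU negative saturation value alpha' = -lambda * alpha,
        # then applies an affine correction so mean and variance are preserved.
        if not training or drop_p == 0:
            return X
        q = 1.0 - drop_p                      # keep probability
        alpha_p = -1.7580993408473766         # -lambda * alpha for SELU
        a = (q + alpha_p ** 2 * q * (1 - q)) ** -0.5
        b = -a * alpha_p * (1 - q)
        mask = np.random.rand(*X.shape) < q   # True = keep
        return a * np.where(mask, X, alpha_p) + b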

    opened by BoltzmannZhaung 6
  • [Question] Why does `gamma * beta` stand for the L2 penalty in `LogisticRegression._NLL_grad`?

    Hello, this is a great project. I am learning how to implement models without sklearn/TensorFlow, and it really helps me a lot.

    I have a question on https://github.com/ddbourgin/numpy-ml/blob/4f37707c6c7c390645dec5a503c12a48e624b249/numpy_ml/linear_models/lm.py#L252

    Since the p-norm is defined as ||beta||_p = (sum_i |beta_i|^p)^(1/p), and l1norms(self.beta) means the sum of the absolute values of the elements of self.beta, I don't quite understand why the simple gamma * beta stands for the L2 penalty?

    PS: May I ask what IDE and code-documentation plugin you are using? I see some annotations that don't render beyond raw LaTeX; it would be nicer to see beautiful math symbols than raw LaTeX :)
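
    For what it's worth, if the penalty term in the NLL is (gamma / 2) * ||beta||_2^2 (a common convention; an assumption about lm.py here), then its gradient with respect to beta is exactly gamma * beta. A quick numerical check:

    import numpy as np

    gamma = 0.1
    beta = np.random.randn(5)
    penalty = lambda b: 0.5 * gamma * np.sum(b ** 2)

    # Central-difference gradient of the penalty matches gamma * beta
    eps = 1e-6
    num_grad = np.array([
        (penalty(beta + eps * e) - penalty(beta - eps * e)) / (2 * eps)
        for e in np.eye(5)
    ])
    assert np.allclose(num_grad, gamma * beta, atol=1e-6)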

    bug 
    opened by eromoe 5
  • neural_nets/utils/utils.py line 797 has a bug!

    Hi, I think neural_nets/utils/utils.py line 797 has a bug!

    Your code:

    i0, i1 = i * s, (i * s) + fr * (d + 1) - d
    j0, j1 = j * s, (j * s) + fc * (d + 1) - d

    Correct code:

    i0, i1 = i * s, (i * s) + fr
    j0, j1 = j * s, (j * s) + fc

    because fr and fc are already the dilated sizes!
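
    For context, a quick check of the convention at issue (assuming d counts the zeros inserted between adjacent filter taps): a raw fr-tap filter dilated by d spans fr * (d + 1) - d rows, so the extra factor is only correct if fr is the raw (undilated) size.

    # Dilated filter extent: taps at offsets 0, d+1, 2*(d+1), ...
    fr, d = 3, 1
    span = fr * (d + 1) - d        # taps at rows 0, 2, 4 -> span of 5
    assert span == fr + (fr - 1) * d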

    bug 
    opened by real-zhangzhe 5
  • Added support for Online Linear Regression

    This work creates an online version of linear regression, as described in issue https://github.com/ddbourgin/numpy-ml/issues/70. It implements an online version of ordinary least squares using the matrix-inversion-lemma RLS, version 1 (https://github.com/ddbourgin/numpy-ml/files/6536069/rls.pdf).

    Beware that there is some performance trade-off due to the noisy nature of training on one sample at a time rather than on a batch. On average, this should not be too significant.

    To Do

    Add unit test

    • [x] Is the code you are submitting your own work? Yes
    • [x] Did you properly attribute the authors of any code you referenced? Yes
    • [x] Did you write unit tests for your new model?
    • [x] Does your submission pass the unit tests?
    • [ ] Did you write documentation for your new model?
    • [ ] Have you formatted your code using the black defaults?

    A sample API call:

    olr = LinearRegression()
    olr.fit(X, y)
    olr.predict(X_rest)
    # On new data, just call update() to modify the beta field and update the model online
    olr.update(x_new, y_new)
    

    Note: no modification has been made to the existing implementation; the new code is only active when the update method is called.

    opened by kenluck2001 4
  • activations.py optimizations

    Taking a quick look, some of the grad and grad2 functions might benefit from some optimizations. Here's one example:

    https://github.com/ddbourgin/numpy-ml/blob/fce2acfd7c370f55373bdc6dff1761a8258bfe27/numpy_ml/neural_nets/activations/activations.py#L38-L39

    Here the function could be changed such that fn(x) is only computed once:

    def grad(self, x):
        fn_x = self.fn(x)
        return fn_x * (1 - fn_x)
    

    The extra memory used to store the calculation should be collected immediately after the function ends, so that shouldn't be a problem. Would love a second opinion @ddbourgin before making a PR with the necessary changes.
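
    For what it's worth, the same caching applies to grad2, assuming it computes the second derivative of the logistic sigmoid:

    def grad2(self, x):
        # sigma''(x) = sigma(x) * (1 - sigma(x)) * (1 - 2 * sigma(x)), reusing fn(x)
        fn_x = self.fn(x)
        return fn_x * (1 - fn_x) * (1 - 2 * fn_x)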

    enhancement 
    opened by jaymody 4
  • fix: multi dimension update for covariance in gmm

    - What bug I fixed

    According to @jjjjohnson in #16, we can apply a multi-dimensional covariance in the GMM.

    This pull request fixes #16.

    - How I fixed it

    1. Change the dimension from 2 to self.d.

    - How you can verify it

    The tests did not pass, with the output below (they did pass before the change). We can ask @jjjjohnson to take a look.

    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:66: RuntimeWarning: invalid value encountered in double_scalars
      if np.isnan(vlb) or np.abs((vlb - prev_vlb) / prev_vlb) <= tol:
    Singular matrix: components collapsed
    Components collapsed; Refitting
    [... the preceding two lines repeat 40 times in total ...]
    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:116: RuntimeWarning: invalid value encountered in true_divide
      self.mu[ix, :] = num / den
    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:43: RuntimeWarning: divide by zero encountered in log
      log_pi_k = np.log(pi_k)
    /usr/local/lib/python3.7/site-packages/numpy/linalg/linalg.py:1817: RuntimeWarning: invalid value encountered in slogdet
      sign, logdet = _umath_linalg.slogdet(a, signature=signature)
    Singular matrix: components collapsed
    Components collapsed; Refitting
    Traceback (most recent call last):
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 111, in <module>
        plot()
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 100, in plot
        ax = plot_clusters(G, X, ax)
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 52, in plot_clusters
        rv = multivariate_normal(model.mu[c], model.sigma[c], allow_singular=True)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 363, in __call__
        seed=seed)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 736, in __init__
        self.cov_info = _PSD(self.cov, allow_singular=allow_singular)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 156, in __init__
        s, u = scipy.linalg.eigh(M, lower=lower, check_finite=check_finite)
      File "/usr/local/lib/python3.7/site-packages/scipy/linalg/decomp.py", line 374, in eigh
        a1 = _asarray_validated(a, check_finite=check_finite)
      File "/usr/local/lib/python3.7/site-packages/scipy/_lib/_util.py", line 239, in _asarray_validated
        a = toarray(a)
      File "/usr/local/lib/python3.7/site-packages/numpy/lib/function_base.py", line 1233, in asarray_chkfinite
        "array must not contain infs or NaNs")
    ValueError: array must not contain infs or NaNs
    
    opened by WuZhuoran 4
  • Naive Bayes

    Hi, I am thinking about implementing naive Bayes methods and making a pull request, but I am unsure about the unit-testing part. Should I compare the performance with the scikit-learn library?

    model request 
    opened by sfsf9797 3
  • Installation not documented; couldn't find PyPI package or run tests

    This is an awesome library, thanks @ddbourgin!!

    Users might not know the best way to install this package and try it out. (I didn't, so I eventually just copied the source files.) Neither the README nor readthedocs has install instructions.

    I couldn't find it on PyPI or Anaconda, and there doesn't appear to be a pyproject.toml, setup.cfg, setup.py, or conda recipe.

    Moreover, the tests aren't in a standard path like tests/. This is uncommon and therefore confusing, and it makes them harder to run. Edit: I wasn't expecting them under the source, so I initially wrote that I couldn't find them.

    I think it would be great to document how to install numpy-ml and how to run its tests; seeing the tests run would help clarify the behavior of some of the functions.

    There are some great build and CI tools for Python available, which I recently learned how to use effectively. I'm happy to make a pull request if it would be helpful.

    opened by dmyersturnbull 3
  • Update README.md

    All Submissions

    • [x] Is the code you are submitting your own work?
    • [ ] Have you followed the contributing guidelines?
    • [ ] Have you checked to ensure there aren't other open Pull Requests for the same update/change?

    New Model Submissions

    • [x] Is the code you are submitting your own work?
    • [ ] Did you properly attribute the authors of any code you referenced?
    • [ ] Did you write unit tests for your new model?
    • [ ] Does your submission pass the unit tests?
    • [ ] Did you write documentation for your new model?
    • [ ] Have you formatted your code using the black defaults?

    Changes to Existing Models

    • [ ] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [ ] Have you written new tests for your changes, as applicable?
    • [ ] Have you successfully run tests with your changes locally?
    opened by Velcon-Zheng 0
  • Import of collections.Hashable fails in Python 3.10

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 12.6
    • Python version: 3.10.7
    • NumPy version: 1.23.3

    Describe the current behavior: importing numpy_ml fails due to an ImportError from the collections module.

    Describe the expected behavior: importing the module should not raise an ImportError.

    Code to reproduce the issue

    # Python 3.10 or newer
    import numpy_ml
    

    Other info / logs: in Python 3.10 the deprecated aliases were removed.

    Remove deprecated aliases to Collections Abstract Base Classes from the collections module. (Contributed by Victor Stinner in bpo-37324.)

    from What’s New In Python 3.10

    Fix: to fix the bug, in numpy_ml/utils/data_structures.py change

    from collections import Hashable
    

    to

    from collections.abc import Hashable
    
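    Alternatively, a version-agnostic import would keep older interpreters working (a sketch, not part of the reported fix):

    try:
        from collections.abc import Hashable  # Python >= 3.3
    except ImportError:
        from collections import Hashable      # alias removed in Python 3.10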
    opened by PTriebold 0
  • neural nets optimizer shape mismatch during backward pass

    @ddbourgin I have an issue where gradient updates cannot be performed because shapes conflict during backprop, specifically in the optimizer file.

    Error reads:

    C[param_name]["mean"] = d1 * mean + (1 - d1) * param_grad
    ValueError: operands could not be broadcast together with shapes (100,10) (3072,100) 
    

    Model architecture is as follows:

    Input -> (n_samples, 3072)
    FC1   -> (3072, 100)
    FC2   -> (100, 10)

    The model code is as follows:

    def _build_model(self):
        self.model = OrderedDict()
        self.model['fc1'] = FullyConnected(n_out=self.layers[0],
                                           act_fn=ReLU(),
                                           init=self.initializer,
                                           optimizer=self.optimizer)
    
    
        self.model['fc2'] = FullyConnected(n_out=self.layers[1],
                                           act_fn=Affine(slope=1, intercept=0),
                                           init=self.initializer,
                                           optimizer=self.optimizer)
    
    
        self.model['out'] = Softmax(dim=-1,
                                    optimizer=self.optimizer)
    
    @property
    def parameters(self):
        return {k: v.parameters for k, v in self.model.items()}
    
    @property
    def hyperparameters(self):
        return {k: v.hyperparameters for k, v in self.model.items()}
    
    @property
    def derived_variables(self):
        return {k: v.derived_variables for k, v in self.model.items()}
    
    @property
    def gradients(self):
        return {k: v.gradients for k, v in self.model.items()}
    
    def forward(self, x):
        out = x
        for k, v in self.model.items():
            out = v.forward(out)
        return out
    
    def backward(self, y, y_pred):
        """Compute dLdy and then backprop through the layers in self.model"""
        dY_pred = self.loss.grad(y, y_pred)
        for k, v in reversed(list(self.model.items())):
            dY_pred = v.backward(dY_pred)
            self._dv['d' + k] = dY_pred
        return dY_pred
    
    def update(self, cur_loss):
        """Perform gradient updates"""
        for k, v in reversed(list(self.model.items())):
            v.update(cur_loss)
        self.flush_gradients()
    

    Hoping we can fix this and also create an example for people to follow. Thanks

    opened by srs3 0
  • `numpy_ml.linear_model.LinearRegression.predict()` generates `ValueError` when used with copy-pasted code, but the pip-installed version works as expected

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
    • Python version: 3.7.12
    • NumPy version: 1.21.5 (environment is Google Colab on 20-Mar, 2022.)

    Describe the current behavior: I copy-pasted the code for numpy_ml.linear_model.LinearRegression from GitHub and ran .fit() and .predict() on some dummy data. I got a ValueError on .predict(), like this:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-10-4be896198177> in <module>()
    ----> 1 npml_lin_reg2_preds = npml_lin_reg2.predict(X_val)
          2 npml_lin_reg2_preds[:10]
    
    <ipython-input-8-fc521849e158> in predict(self, X)
        206         if self.fit_intercept:
        207             X = np.c_[np.ones(X.shape[0]), X]
    --> 208         return X @ self.beta
    
    ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 11)
    

    Describe the expected behavior: .predict() should not raise a ValueError.

    Code to reproduce the issue: no code; here is the link to the notebook: https://colab.research.google.com/drive/12q9r2j4-UpUrPnzvMiPC6rxafa73cY5L?usp=sharing

    Other info / logs

    bug 
    opened by naveen-marthala 1
  • Best choice for my use case?

    G'day, how's it going?

    I've just started looking into machine learning stuff, and stumbled upon this, looks awesome!

    I just want to know what kind of methods I should use for the following:

    • Text identification (Spam checker for example)
    • Image analysis (Detects whether the image given after training is a male or female)

    Kind regards,

    Machine-Learning newbie, Mitch!

    question 
    opened by Mitch0S 1
  • In response to Issue #67: Naive Bayes classifier added along with a unit-testing file; also included a test_use_cases.py file comparing the accuracy of naive Bayes models from numpy-ml and scikit-learn on the wine.data dataset

    All Submissions

    • [Yes] Is the code you are submitting your own work?
    • [Yes] Have you followed the contributing guidelines?
    • [Yes] Have you checked to ensure there aren't other open Pull Requests for the same update/change?

    New Model Submissions

    • [Yes] Is the code you are submitting your own work?
    • [Yes] Did you properly attribute the authors of any code you referenced?
    • [Yes] Did you write unit tests for your new model?
    • [Yes] Does your submission pass the unit tests?
    • [No] Did you write documentation for your new model?
    • [No] Have you formatted your code using the black defaults?

    Changes to Existing Models

    • [Yes] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [Yes] Have you written new tests for your changes, as applicable?
    • [Yes] Have you successfully run tests with your changes locally?
    opened by rishabh-jain14 2