Machine learning, in numpy

Overview

numpy-ml

Ever wish you had an inefficient but somewhat legible collection of machine learning algorithms implemented exclusively in NumPy? No?

Installation

For rapid experimentation

To use this code as a starting point for ML prototyping / experimentation, just clone the repository, create a new virtualenv, and start hacking:

$ git clone https://github.com/ddbourgin/numpy-ml.git
$ cd numpy-ml && virtualenv npml && source npml/bin/activate
$ pip3 install -r requirements-dev.txt

As a package

If you don't plan to modify the source, you can also install numpy-ml as a Python package: pip3 install -U numpy_ml.

The reinforcement learning agents train on environments defined in the OpenAI gym. To install these alongside numpy-ml, you can use pip3 install -U 'numpy_ml[rl]'.

Documentation

For more details on the available models, see the project documentation.

Available models

  1. Gaussian mixture model

    • EM training
  2. Hidden Markov model

    • Viterbi decoding
    • Likelihood computation
    • MLE parameter estimation via Baum-Welch/forward-backward algorithm
  3. Latent Dirichlet allocation (topic model)

    • Standard model with MLE parameter estimation via variational EM
    • Smoothed model with MAP parameter estimation via MCMC
  4. Neural networks

    • Layers / Layer-wise ops
      • Add
      • Flatten
      • Multiply
      • Softmax
      • Fully-connected/Dense
      • Sparse evolutionary connections
      • LSTM
      • Elman-style RNN
      • Max + average pooling
      • Dot-product attention
      • Embedding layer
      • Restricted Boltzmann machine (w. CD-n training)
      • 2D deconvolution (w. padding and stride)
      • 2D convolution (w. padding, dilation, and stride)
      • 1D convolution (w. padding, dilation, stride, and causality)
    • Modules
      • Bidirectional LSTM
      • ResNet-style residual blocks (identity and convolution)
      • WaveNet-style residual blocks with dilated causal convolutions
      • Transformer-style multi-headed scaled dot product attention
    • Regularizers
      • Dropout
    • Normalization
      • Batch normalization (spatial and temporal)
      • Layer normalization (spatial and temporal)
    • Optimizers
      • SGD w/ momentum
      • AdaGrad
      • RMSProp
      • Adam
    • Learning Rate Schedulers
      • Constant
      • Exponential
      • Noam/Transformer
      • Dlib scheduler
    • Weight Initializers
      • Glorot/Xavier uniform and normal
      • He/Kaiming uniform and normal
      • Standard and truncated normal
    • Losses
      • Cross entropy
      • Squared error
      • Bernoulli VAE loss
      • Wasserstein loss with gradient penalty
      • Noise contrastive estimation loss
    • Activations
      • ReLU
      • Tanh
      • Affine
      • Sigmoid
      • Leaky ReLU
      • ELU
      • SELU
      • Exponential
      • Hard Sigmoid
      • Softplus
    • Models
      • Bernoulli variational autoencoder
      • Wasserstein GAN with gradient penalty
      • word2vec encoder with skip-gram and CBOW architectures
    • Utilities
      • col2im (MATLAB port)
      • im2col (MATLAB port)
      • conv1D
      • conv2D
      • deconv2D
      • minibatch
  5. Tree-based models

    • Decision trees (CART)
    • [Bagging] Random forests
    • [Boosting] Gradient-boosted decision trees
  6. Linear models

    • Ridge regression
    • Logistic regression
    • Ordinary least squares
    • Bayesian linear regression w/ conjugate priors
      • Unknown mean, known variance (Gaussian prior)
      • Unknown mean, unknown variance (Normal-Gamma / Normal-Inverse-Wishart prior)
  7. n-Gram sequence models

    • Maximum likelihood scores
    • Additive/Lidstone smoothing
    • Simple Good-Turing smoothing
  8. Multi-armed bandit models

    • UCB1
    • LinUCB
    • Epsilon-greedy
    • Thompson sampling w/ conjugate priors
      • Beta-Bernoulli sampler
  9. Reinforcement learning models

    • Cross-entropy method agent
    • First visit on-policy Monte Carlo agent
    • Weighted incremental importance sampling Monte Carlo agent
    • Expected SARSA agent
    • TD(0) Q-learning agent
    • Dyna-Q / Dyna-Q+ with prioritized sweeping
  10. Nonparametric models

    • Nadaraya-Watson kernel regression
    • k-Nearest neighbors classification and regression
    • Gaussian process regression
  11. Matrix factorization

    • Regularized alternating least-squares
    • Non-negative matrix factorization
  12. Preprocessing

    • Discrete Fourier transform (1D signals)
    • Discrete cosine transform (type-II) (1D signals)
    • Bilinear interpolation (2D signals)
    • Nearest neighbor interpolation (1D and 2D signals)
    • Autocorrelation (1D signals)
    • Signal windowing
    • Text tokenization
    • Feature hashing
    • Feature standardization
    • One-hot encoding / decoding
    • Huffman coding / decoding
    • Term frequency-inverse document frequency (TF-IDF) encoding
    • MFCC encoding
  13. Utilities

    • Similarity kernels
    • Distance metrics
    • Priority queue
    • Ball tree
    • Discrete sampler
    • Graph processing and generators
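
As a quick illustration of the package-level API, here is a hedged sketch; the import path and constructor arguments are inferred from the model list above and from the DecisionTree example quoted in the issues below, not a definitive reference:

import numpy as np
from numpy_ml.trees import DecisionTree  # assumed import path

# toy binary classification data
X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

tree = DecisionTree(max_depth=4, seed=42)  # arguments mirror the issue report below
tree.fit(X, y)
preds = tree.predict(X)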

Contributing

Am I missing your favorite model? Is there something that could be cleaner / less confusing? Did I mess something up? Submit a PR! The only requirement is that your models are written with just the Python standard library and NumPy. The SciPy library is also permitted under special circumstances ;)

See full contributing guidelines here.

Issues
  • Feature: add more activation functions

    Fixes part of #7.

    • [x] Linear
    • [x] Softmax
    • [x] Hard Sigmoid
    • [x] Exponential
    • [x] SELU
    • [x] SELU Test
    • [x] LeakyRelu Test

    Plots of the new activations are attached; a quick SELU sketch also follows below.

    (plot image)
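
    A minimal NumPy sketch of SELU, one of the activations checked off above, using the constants from the original SELU paper (an illustration, not this PR's code):

    import numpy as np

    def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
        # scale * x for x > 0; scale * alpha * (exp(x) - 1) otherwise
        return scale * np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0)))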

    opened by WuZhuoran 15
  • Can I write a K-means model and then open a pull request?

    I can't find a K-means model, so I think I can code one. Thanks!

    question 
    opened by daidai21 8
  • A little bug!

    Hi, I think there is a little bug on line 64 of numpy_ml/neural_nets/activations/activations.py.

    Your code:

    fn_x = self.fn_x
    

    but

    self.fn_x is never defined.

    bug 
    opened by real-zhangzhe 8
  • Feature Request: Online Linear Regression

    Most linear regression implementations operate in batch mode, which means the model must be retrained from scratch whenever a new data point arrives. This limits the usefulness of linear regression and makes working in real time difficult. I will implement an online formulation, based on recursive least squares, using ideas from the paper attached to this issue (see the sketch after the attachment). I have already implemented it in a past project; if I get approval, I will clean it up and create a new PR.

    rls.pdf
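
    For context, a hedged sketch of the core recursive-least-squares update (the Sherman-Morrison rank-1 form; variable names are illustrative, not the eventual PR's API):

    import numpy as np

    def rls_update(beta, P, x, y):
        """One online step; beta is (d, 1) and P tracks the running inverse of X^T X."""
        x = x.reshape(-1, 1)
        Px = P @ x
        k = Px / (1.0 + (x.T @ Px).item())   # gain vector
        err = y - (x.T @ beta).item()        # prediction error on the new point
        beta = beta + k * err
        P = P - k @ Px.T
        return beta, P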

    model request 
    opened by kenluck2001 7
  • Add the alpha dropout method

    Dear Dr. Bourgin, I want to contribute a complete implementation of this method; unfortunately, it may not work correctly yet (I have not tested it). May I ask for your help to finish it (a reference sketch follows below), and thereby earn the honor of being a contributor to this great project (5k+ stars)? Thank you!
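
    For reference, a hedged sketch of alpha dropout as described in the SELU paper (the constants and the mean/variance-preserving affine correction come from that paper; this is not the PR's code):

    import numpy as np

    def alpha_dropout(x, drop_p=0.1):
        alpha_p = -1.7580993408473766        # -scale * alpha from the SELU paper
        q = 1.0 - drop_p                     # keep probability
        mask = np.random.rand(*x.shape) < q
        x = np.where(mask, x, alpha_p)       # dropped units saturate at alpha'
        a = (q + alpha_p ** 2 * q * (1 - q)) ** -0.5
        b = -a * (1 - q) * alpha_p
        return a * x + b                     # affine correction preserves mean/variance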

    opened by BoltzmannZhaung 6
  • Naive Bayes

    • What I did

    Implemented a Gaussian naive Bayes class and basic documentation.

    • How I did it

    Referred to the formula for Gaussian naive Bayes from the notes.

    • How to verify it

    Compared the performance with GaussianNB from scikit-learn (a minimal sketch follows this entry).
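
    For reference, a minimal Gaussian naive Bayes of the kind described, for illustration only (not this PR's code):

    import numpy as np

    class GaussianNBSketch:
        def fit(self, X, y):
            self.classes = np.unique(y)
            self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
            self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
            self.log_prior = np.log([np.mean(y == c) for c in self.classes])
            return self

        def predict(self, X):
            # per-class Gaussian log-likelihood, summed over (assumed independent) features
            ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                         + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
            return self.classes[np.argmax(ll + self.log_prior, axis=1)]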

    opened by sfsf9797 6
  • neural_nets/utils/utils.py line 797 has a bug!

    Hi, I think neural_nets/utils/utils.py line 797 has a bug!

    Your code:

    i0, i1 = i * s, (i * s) + fr * (d + 1) - d
    j0, j1 = j * s, (j * s) + fc * (d + 1) - d

    Correct code:

    i0, i1 = i * s, (i * s) + fr
    j0, j1 = j * s, (j * s) + fc

    because fr and fc are already the dilated sizes! (See the illustration after this entry.)
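
    For context, a small illustration of the two conventions, assuming d counts the zeros inserted between kernel taps:

    fr, d, i, s = 3, 1, 0, 1       # raw kernel rows, dilation, row index, stride
    eff = fr * (d + 1) - d         # dilated extent: fr + d * (fr - 1) = 5
    i0, i1 = i * s, (i * s) + eff  # correct when fr is the *raw* kernel size
    # if fr already stores the dilated extent, the window is simply:
    # i0, i1 = i * s, (i * s) + fr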

    bug 
    opened by real-zhangzhe 5
  • [Question] Why does `gamma * beta` stand for the L2 term in LogisticRegression._NLL_grad?

    Hello, this is a great project. I am learning how to implement models without sklearn/tensorflow, and it helps me a lot.

    I have a question on https://github.com/ddbourgin/numpy-ml/blob/4f37707c6c7c390645dec5a503c12a48e624b249/numpy_ml/linear_models/lm.py#L252

    The p-norm is defined as ||x||_p = (sum_i |x_i|^p)^(1/p) (see the attached image).

    l1norms(self.beta) gives the sum of the absolute values of the elements of self.beta. I don't quite understand why the simple gamma * beta stands for the L2 term.

    PS: May I ask what IDE and code-documentation plugin you are using? Some annotations don't render beyond raw LaTeX; it would be nice to see rendered math symbols rather than raw LaTeX :)
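
    If the penalty term is (gamma / 2) * ||beta||_2^2 (a common convention, and an assumption about this code), its gradient with respect to beta is exactly gamma * beta. A quick numeric check:

    import numpy as np

    gamma, beta = 0.1, np.random.randn(5)
    penalty = lambda b: 0.5 * gamma * np.sum(b ** 2)
    eps = 1e-6
    num_grad = np.array([(penalty(beta + eps * e) - penalty(beta - eps * e)) / (2 * eps)
                         for e in np.eye(5)])
    assert np.allclose(num_grad, gamma * beta, atol=1e-5)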

    bug 
    opened by eromoe 5
  • fix: multi-dimension update for covariance in GMM

    - What bug I fixed

    According to @jjjjohnson in #16, we can apply a multi-dimensional covariance in the GMM.

    This pull request fixes #16.

    - How I fixed it

    1. Changed the dimension from 2 to self.d.

    - How you can verify it

    The tests passed before the change but fail after it, with the output below; we can ask @jjjjohnson to take a look. (A sketch of the generalized covariance update follows this entry.)

    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:66: RuntimeWarning: invalid value encountered in double_scalars
      if np.isnan(vlb) or np.abs((vlb - prev_vlb) / prev_vlb) <= tol:
    Singular matrix: components collapsed
    Components collapsed; Refitting
    (the preceding two lines repeat roughly 40 more times)
    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:116: RuntimeWarning: invalid value encountered in true_divide
      self.mu[ix, :] = num / den
    /Users/zhuoran/Documents/git/numpy-ml/gmm/gmm.py:43: RuntimeWarning: divide by zero encountered in log
      log_pi_k = np.log(pi_k)
    /usr/local/lib/python3.7/site-packages/numpy/linalg/linalg.py:1817: RuntimeWarning: invalid value encountered in slogdet
      sign, logdet = _umath_linalg.slogdet(a, signature=signature)
    Singular matrix: components collapsed
    Components collapsed; Refitting
    Traceback (most recent call last):
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 111, in <module>
        plot()
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 100, in plot
        ax = plot_clusters(G, X, ax)
      File "/Users/zhuoran/Documents/git/numpy-ml/gmm/tests.py", line 52, in plot_clusters
        rv = multivariate_normal(model.mu[c], model.sigma[c], allow_singular=True)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 363, in __call__
        seed=seed)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 736, in __init__
        self.cov_info = _PSD(self.cov, allow_singular=allow_singular)
      File "/usr/local/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 156, in __init__
        s, u = scipy.linalg.eigh(M, lower=lower, check_finite=check_finite)
      File "/usr/local/lib/python3.7/site-packages/scipy/linalg/decomp.py", line 374, in eigh
        a1 = _asarray_validated(a, check_finite=check_finite)
      File "/usr/local/lib/python3.7/site-packages/scipy/_lib/_util.py", line 239, in _asarray_validated
        a = toarray(a)
      File "/usr/local/lib/python3.7/site-packages/numpy/lib/function_base.py", line 1233, in asarray_chkfinite
        "array must not contain infs or NaNs")
    ValueError: array must not contain infs or NaNs
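
    For reference, a hedged sketch of the full-covariance M-step this PR generalizes (variable names are illustrative, not the repo's exact code):

    import numpy as np

    def m_step_covariance(X, resp, mu):
        """X: (N, d) data, resp: (N, K) responsibilities, mu: (K, d) means."""
        K, d = resp.shape[1], X.shape[1]
        sigma = np.empty((K, d, d))
        for k in range(K):
            diff = X - mu[k]            # (N, d)
            w = resp[:, k][:, None]     # (N, 1)
            sigma[k] = (w * diff).T @ diff / resp[:, k].sum()
        return sigma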
    
    opened by WuZhuoran 4
  • Added support for Online Linear Regression

    This work creates an online version of linear regression, as described in issue https://github.com/ddbourgin/numpy-ml/issues/70. It implements the online version of ordinary least squares using the matrix-inversion-lemma RLS, version 1 (https://github.com/ddbourgin/numpy-ml/files/6536069/rls.pdf).

    Note that there is a performance trade-off due to the noisy nature of training on one sample at a time rather than on a batch; on average, this should not be significant.

    To Do

    Add unit test

    • [x] Is the code you are submitting your own work? Yes
    • [x] Did you properly attribute the authors of any code you referenced? Yes
    • [x] Did you write unit tests for your new model?
    • [x] Does your submission pass the unit tests?
    • [ ] Did you write documentation for your new model?
    • [ ] Have you formatted your code using the black defaults?

    A sample for the API call is

    olr = LinearRegression()
    olr.fit(X, y)                  # initial batch fit
    olr.predict(X_rest)            # predict on held-out data
    # on new data, just call update() to modify the beta field and update the model online
    olr.update(x_new, y_new)
    

    Note: the existing implementation is unmodified; the new behavior is only active when the update method is called.

    opened by kenluck2001 4
  • In response to Issue #67: added a Naive Bayes classifier along with a unit-testing file; also included a test_use_cases.py file comparing the accuracy of the naive Bayes models from numpy-ml and scikit-learn on the dataset in wine.data

    All Submissions

    • [Yes] Is the code you are submitting your own work?
    • [Yes] Have you followed the contributing guidelines?
    • [Yes] Have you checked to ensure there aren't other open Pull Requests for the same update/change?

    New Model Submissions

    • [Yes] Is the code you are submitting your own work?
    • [Yes] Did you properly attribute the authors of any code you referenced?
    • [Yes] Did you write unit tests for your new model?
    • [Yes] Does your submission pass the unit tests?
    • [No] Did you write documentation for your new model?
    • [No] Have you formatted your code using the black defaults?

    Changes to Existing Models

    • [Yes] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [Yes] Have you written new tests for your changes, as applicable?
    • [Yes] Have you successfully run tests with your changes locally?
    opened by rishabh-jain14 2
  • Added hard and soft Kmeans clustering with tests

    This submission addresses the issue tracked in https://github.com/ddbourgin/numpy-ml/issues/69. We have implemented soft and hard versions of k-means clustering. The work can be summarized as follows:

    1. Hard k-means clustering, with fixed assignment of each data point to exactly one cluster at a time.
    2. Soft k-means clustering, with probabilistic assignment of data points: each data point has a membership degree in each cluster. The most probable cluster can then be taken as the data point's cluster index; alternatively, the full distribution can be used directly, since it captures the uncertainty of the clustering routine. (A sketch of the soft-assignment step follows this entry.)
    • [x] Is the code you are submitting your own work? Yes, it is my original work.
    • [x] Did you properly attribute the authors of any code you referenced? Yes, I did.
    • [x] Did you write unit tests for your new model? Yes, I added unit tests comparing the algorithms to the baseline implementations in scikit-learn.
    • [x] Does your submission pass the unit tests? Yes, it does.
    • [ ] Did you write documentation for your new model? For now, only the README and code documentation.
    • [ ] Have you formatted your code using the black defaults? No, I only followed the numpy style guide.
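
    For reference, a hedged sketch of the soft-assignment step described above; beta is a stiffness parameter, and this is an illustration rather than the PR's code:

    import numpy as np

    def soft_assignments(X, centers, beta=1.0):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # squared distances, (N, K)
        logits = -beta * d2
        logits -= logits.max(axis=1, keepdims=True)           # numerical stability
        p = np.exp(logits)
        return p / p.sum(axis=1, keepdims=True)               # membership degrees
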
    opened by kenluck2001 1
  • Feature Request: Clustering Kmeans (hard and soft version)

    There is no clustering in the project apart from the EM algorithm for Gaussian mixtures. Hence, I would like to implement the k-means algorithm: both the common hard-clustering version and the soft-clustering derivation. Once I get a go-ahead, I will proceed to raise a PR within the next few days.

    The hard version of k-means will follow the implementation in this slide (image).

    The soft version of k-means will also follow the implementation in this slide (image).

    I had already written efficient implementations of both before checking the contribution guide, which specifies that an issue must be opened first. Please give your approval and I will raise the PR right away.

    model request 
    opened by kenluck2001 2
  • Update LICENSE

    opened by Aditi014 0
  • error in DecisionTree

    data = pd.read_csv('Data/Bankloan.csv', sep=';')
    for i in ['debtinc', 'creddebt', 'othdebt']:
        data[i] = data[i].str.replace(',', '.').astype('float')
    train, test, y_train, y_test = train_test_split(
        data.drop('default', axis=1), data['default'],
        test_size=0.3, stratify=data['default'], random_state=42
    )
    X_train = pd.get_dummies(train)
    X_test = pd.get_dummies(test)
    tree = DecisionTree(seed=42, max_depth=4, n_feats=2)
    tree.fit(X_train.values, y_train.values)


    ValueError                                Traceback (most recent call last)
    in <module>
          1 tree = DecisionTree(seed=42, max_depth=4, n_feats=2)
    ----> 2 tree.fit(X_train.values, y_train.values)

    in fit(self, X, Y)
         78     self.n_classes = max(Y) + 1 if self.classifier else None
         79     self.n_feats = X.shape[1] if not self.n_feats else min(self.n_feats, X.shape[1])
    ---> 80     self.root = self._grow(X, Y)
         81
         82     def predict(self, X):

    in _grow(self, X, Y, cur_depth)
        138
        139     # grow the children that result from the split
    --> 140     left = self._grow(X[l, :], Y[l], cur_depth)
        141     right = self._grow(X[r, :], Y[r], cur_depth)
        142     return Node(left, right, (feat, thresh))

    in _grow(self, X, Y, cur_depth)
        139     # grow the children that result from the split
        140     left = self._grow(X[l, :], Y[l], cur_depth)
    --> 141     right = self._grow(X[r, :], Y[r], cur_depth)
        142     return Node(left, right, (feat, thresh))
        143

    in _grow(self, X, Y, cur_depth)
        139     # grow the children that result from the split
        140     left = self._grow(X[l, :], Y[l], cur_depth)
    --> 141     right = self._grow(X[r, :], Y[r], cur_depth)
        142     return Node(left, right, (feat, thresh))
        143

    in _grow(self, X, Y, cur_depth)
        133
        134     # greedily select the best split according to criterion
    --> 135     feat, thresh = self._segment(X, Y, feat_idxs)
        136     l = np.argwhere(X[:, feat] <= thresh).flatten()
        137     r = np.argwhere(X[:, feat] > thresh).flatten()

    in _segment(self, X, Y, feat_idxs)
        155     gains = np.array([self._impurity_gain(Y, t, vals) for t in thresholds])
        156
    --> 157     if gains.max() > best_gain:
        158         split_idx = i
        159         best_gain = gains.max()

    /anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where)
         28 def _amax(a, axis=None, out=None, keepdims=False,
         29           initial=_NoValue, where=True):
    ---> 30     return umr_maximum(a, axis, None, out, keepdims, initial, where)
         31
         32 def _amin(a, axis=None, out=None, keepdims=False,

    ValueError: zero-size array to reduction operation maximum which has no identity

    Link to dataset https://drive.google.com/file/d/1lj7qUyG7BOV6cAGm8-tDNUqS62IEgk5p/view?usp=sharing
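
    For what it's worth, the traceback suggests that for some feature the list of candidate thresholds is empty, so gains.max() reduces over a zero-size array. A hypothetical guard inside DecisionTree._segment (an assumption about the fix, not a confirmed patch):

    # skip features that yield no candidate thresholds
    gains = np.array([self._impurity_gain(Y, t, vals) for t in thresholds])
    if gains.size == 0:
        continue
    if gains.max() > best_gain:
        split_idx = i
        best_gain = gains.max()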

    bug 
    opened by Gewissta 0
  • Using numpy.tensordot for Conv2D

    From this link: https://stackoverflow.com/questions/56085669/convolutional-layer-in-python-using-numpy

    and

    https://numpy.org/doc/stable/reference/generated/numpy.tensordot.html

    Z = np.tensordot(X_pad, weights, axes=3) + self.bias

    Is this approach more appropriate than using im2col? (A sketch follows.)
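
    For a stride-1 "valid" convolution, one way to make tensordot work is to materialize the sliding windows first. A minimal sketch, assuming NHWC layout and numpy >= 1.20 for sliding_window_view:

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def conv2d_tensordot(X, W, b):
        """X: (n, h, w, c_in), W: (fh, fw, c_in, c_out), b: (c_out,)."""
        windows = sliding_window_view(X, W.shape[:3], axis=(1, 2, 3))
        # windows: (n, h', w', 1, fh, fw, c_in); contract the last three axes with W
        Z = np.tensordot(windows, W, axes=([4, 5, 6], [0, 1, 2]))
        return Z.squeeze(3) + b               # (n, h', w', c_out)

    Both approaches materialize essentially the same window tensor, so the main differences are memory layout and readability rather than asymptotic cost.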

    opened by tetrahydra 1
  • Why is there no CRF here?

    model request 
    opened by yishen-zhao 1
  • Check if trainable

    If the conv layer is not trainable, the update step should check this and skip updating W (a sketch of the guard follows the link below).

    https://github.com/ddbourgin/numpy-ml/blob/4f37707c6c7c390645dec5a503c12a48e624b249/numpy_ml/neural_nets/layers/layers.py#L2488
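
    A hedged sketch of the requested guard (attribute and variable names are assumptions, not the repo's exact code):

    def apply_update(layer, lr, dW):
        # only update W when the layer is trainable; frozen layers keep their weights
        if getattr(layer, "trainable", True):
            layer.W = layer.W - lr * dW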

    opened by vahmelk99 0
  • numpy-ml

    opened by yang-chenyu104 0
  • update forks


    opened by bruce1408 1