50% faster, 50% less RAM machine learning. Numba-rewritten Sklearn. SVD, NNMF, PCA, LinearReg, RidgeReg, Randomized & Truncated SVD/PCA, CSR matrices: all 50%+ faster

Overview

[Due to time taken up by uni, work + hell breaking loose in my life, commits slowed; since things have calmed down a bit, I will continue committing!!!] [By the way, I'm still looking for new contributors! Please help make HyperLearn no. 1!!]


HyperLearn is what drives Umbra's AI engines. It is open source to everyone, everywhere, and we hope humanity can rise to the stars.

[Notice - I will be updating the package monthly or bi-weekly due to other commitments]


Documentation: https://hyperlearn.readthedocs.io/en/latest/index.html

Faster, Leaner GPU Sklearn, Statsmodels written in PyTorch


A 50%+ faster, 50%+ leaner, GPU-supported rewrite of Sklearn + Statsmodels, combined with new novel algorithms.

HyperLearn is written completely in PyTorch, NoGil Numba, NumPy, Pandas, SciPy & LAPACK, and mostly mirrors Scikit-Learn. HyperLearn also has statistical inference measures embedded, callable with Scikit-Learn-style syntax (e.g. model.confidence_interval_). Ongoing documentation: https://hyperlearn.readthedocs.io/en/latest/index.html
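As a quick sketch of the intended call pattern (the import path below is an assumption for illustration; only the Scikit-Learn-style syntax and the confidence_interval_ attribute are described above):

    import numpy as np
    # Hypothetical import path -- the exact module layout is an
    # assumption, not HyperLearn's confirmed API.
    from hyperlearn.linear_model import LinearRegression

    X = np.random.randn(1000, 10)
    y = X @ np.random.randn(10) + 0.1 * np.random.randn(1000)

    model = LinearRegression()
    model.fit(X, y)                  # Scikit-Learn style fit
    preds = model.predict(X)         # Scikit-Learn style predict
    ci = model.confidence_interval_  # embedded statistical inference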

I'm also writing a mini book! A sneak peek: [preview image]


Comparison of Speed / Memory

Algorithm         n          p    Sklearn Time(s)  HyperLearn Time(s)  Sklearn RAM(MB)  HyperLearn RAM(MB)  Notes
QDA (Quad Dis A)  1,000,000  100  54.2             22.25               2,700            1,200               Now parallelized
LinearRegression  1,000,000  100  5.81             0.381               700              10                  Guaranteed stable & fast

Time(s) is Fit + Predict. RAM(MB) = max(RAM(Fit), RAM(Predict)).

I've also added some preliminary results for N = 5,000, P = 6,000.

Since some of these timings are not good, I have submitted 2 bug reports to Scipy + PyTorch:

  1. EIGH is very, very slow --> suggested an easy fix: #9212 https://github.com/scipy/scipy/issues/9212
  2. SVD is very, very slow and GELS gives NaNs, -inf: #11174 https://github.com/pytorch/pytorch/issues/11174

Help is really needed! Message me!


Key Methodologies and Aims

1. Embarrassingly Parallel For Loops

2. 50%+ Faster, 50%+ Leaner

3. Why is Statsmodels sometimes unbearably slow?

4. Deep Learning Drop In Modules with PyTorch

5. 20%+ Less Code, Cleaner & Clearer Code

6. Accessing Old and Exciting New Algorithms


1. Embarrassingly Parallel For Loops

  • Including Memory Sharing, Memory Management
  • CUDA Parallelism through PyTorch & Numba
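For example, an embarrassingly parallel loop in NoGil Numba might look like this (a minimal sketch of the technique, not HyperLearn's actual code):

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True, nogil=True, cache=True)
    def row_sq_norms(X):
        # Each row is independent, so prange can split the
        # iterations across all cores with no locking (NoGil).
        n = X.shape[0]
        out = np.empty(n)
        for i in prange(n):
            s = 0.0
            for j in range(X.shape[1]):
                s += X[i, j] ** 2
            out[i] = s
        return out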

2. 50%+ Faster, 50%+ Leaner

3. Why is Statsmodels sometimes unbearably slow?

  • Confidence, Prediction Intervals, Hypothesis Tests & Goodness of Fit tests for linear models are optimized.
  • Using Einstein Notation & Hadamard Products where possible.
  • Computing only what is necessary (e.g. the diagonal of a matrix, not the entire matrix).
  • Fixing the flaws of Statsmodels on notation, speed, memory issues and storage of variables.
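To make the diagonal-only point concrete: the hat-matrix diagonal used for prediction intervals, diag(X (XTX)^-1 X^T), never requires forming the full n x n matrix. A sketch using the standard OLS formulas (my own illustration, not HyperLearn's exact code):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def hat_diagonal(X):
        # Compute diag(X @ inv(X.T @ X) @ X.T) without ever
        # materialising the n x n hat matrix: one Cholesky solve,
        # then an einsum contraction that keeps only the diagonal.
        XtX = X.T @ X
        B = cho_solve(cho_factor(XtX), X.T)   # shape (p, n)
        return np.einsum('ij,ji->i', X, B)    # length-n diagonal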

4. Deep Learning Drop In Modules with PyTorch

  • Using PyTorch to create Scikit-Learn like drop in replacements.

5. 20%+ Less Code, Cleaner & Clearer Code

  • Using Decorators & Functions where possible.
  • Intuitive middle-level function names like isTensor, isIterable.
  • Handles parallelism easily through hyperlearn.multiprocessing.
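The sort of middle-level helper meant here is deliberately tiny and readable (a sketch; the real signatures in hyperlearn may differ):

    import torch

    def isTensor(x):
        # True for any PyTorch tensor.
        return isinstance(x, torch.Tensor)

    def isIterable(x):
        # True for anything loopable: lists, tuples, arrays, ...
        try:
            iter(x)
            return True
        except TypeError:
            return False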

6. Accessing Old and Exciting New Algorithms

  • Matrix Completion algorithms - Non Negative Least Squares, NNMF
  • Batch Similarity Latent Dirichlet Allocation (BS-LDA)
  • Correlation Regression
  • Feasible Generalized Least Squares FGLS
  • Outlier Tolerant Regression
  • Multidimensional Spline Regression
  • Generalized MICE (any model drop in replacement)
  • Using Uber's Pyro for Bayesian Deep Learning

Goals & Development Schedule

Will Focus on & why:

1. Singular Value Decomposition & QR Decomposition

* SVD/QR is the backbone for many algorithms including:
    * Linear & Ridge Regression (Regression)
    * Statistical Inference for Regression methods (Inference)
    * Principal Component Analysis (Dimensionality Reduction)
    * Linear & Quadratic Discriminant Analysis (Classification & Dimensionality Reduction)
    * Pseudoinverse, Truncated SVD (Linear Algebra)
    * Latent Semantic Indexing LSI (NLP)
    * (new methods) Correlation Regression, FGLS, Outlier Tolerant Regression, Generalized MICE, Splines (Regression)
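To show why SVD is the backbone, here is the textbook numerically stable least-squares solve via the thin SVD (a generic NumPy sketch; HyperLearn layers its speed tricks on top of this idea):

    import numpy as np

    def svd_solve(X, y, rcond=1e-15):
        # Solve min ||X @ theta - y|| via the thin SVD.
        # Truncating tiny singular values is what keeps the
        # pseudoinverse route stable on near-singular X.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        cutoff = rcond * s.max()
        s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ y))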

On Licensing: HyperLearn is under a GNU GPL v3 License. This means:

  1. Commercial use is restricted. Only software with zero cost can be released, i.e. no closed-source versions are allowed.
  2. Using HyperLearn means all of the code must be available to everyone who uses your public software.
  3. HyperLearn is intended for academic, research and personal purposes. Any explicit commercialisation of the algorithms and anything inside HyperLearn is strictly prohibited.

HyperLearn promotes a free and just world. Hence, it is free to everyone, except for those who wish to commercialise on top of HyperLearn. Ongoing documentation: https://hyperlearn.readthedocs.io/en/latest/index.html [As of 2020, HyperLearn's license has been changed to BSD 3]

Comments
  • `setup.py` install fails on Ubuntu 16.04

    This is what I see when running python setup.py install (gcc8, python 3.6):

    running install
    running build
    running build_py
    file hyperlearn.py (for module hyperlearn) not found
    file hyperlearn.py (for module hyperlearn) not found
    running install_lib
    warning: install_lib: 'build/lib' does not exist -- no Python modules to install
    
    running install_egg_info
    running egg_info
    writing hyperlearn.egg-info/PKG-INFO
    writing dependency_links to hyperlearn.egg-info/dependency_links.txt
    writing requirements to hyperlearn.egg-info/requires.txt
    writing top-level names to hyperlearn.egg-info/top_level.txt
    file hyperlearn.py (for module hyperlearn) not found
    reading manifest file 'hyperlearn.egg-info/SOURCES.txt'
    writing manifest file 'hyperlearn.egg-info/SOURCES.txt'
    Copying hyperlearn.egg-info to /home/share/miniconda3/envs/torch/lib/python3.6/site-packages/hyperlearn-0.0.1-py3.6.egg-info
    running install_scripts
    ###########
    >>> Building HyperLearn C extensions. <<<
    running build_ext
    building 'DEFINE' extension
    creating build/temp.linux-x86_64-3.6
    gcc -pthread -B /home/share/miniconda3/envs/torch/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -O3 -march=native -fPIC -I/home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include -I/home/share/miniconda3/envs/torch/include/python3.6m -c DEFINE.c -o build/temp.linux-x86_64-3.6/DEFINE.o
    In file included from /home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1823:0,
                     from /home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                     from /home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                     from DEFINE.c:580:
    /home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
     #warning "Using deprecated NumPy API, disable it by " \
      ^
    gcc -pthread -shared -B /home/share/miniconda3/envs/torch/compiler_compat -L/home/share/miniconda3/envs/torch/lib -Wl,-rpath=/home/share/miniconda3/envs/torch/lib -Wl,--no-as-needed -Wl,--sysroot=/ -O3 -march=native build/temp.linux-x86_64-3.6/DEFINE.o -o /home/share/software/sources/hyperlearn/hyperlearn/cython/DEFINE.cpython-36m-x86_64-linux-gnu.so
    building '__temp__' extension
    gcc -pthread -B /home/share/miniconda3/envs/torch/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -O3 -march=native -fPIC -I/home/share/miniconda3/envs/torch/lib/python3.6/site-packages/numpy/core/include -I/home/share/miniconda3/envs/torch/include/python3.6m -c __temp__.c -o build/temp.linux-x86_64-3.6/__temp__.o
    In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/immintrin.h:79:0,
                     from __temp__.c:578:
    __temp__.c: In function ‘__pyx_f_8__temp____mult_add’:
    /usr/lib/gcc/x86_64-linux-gnu/5/include/fmaintrin.h:55:1: error: inlining failed in call to always_inline ‘_mm_fmadd_ps’: target specific option mismatch
     _mm_fmadd_ps (__m128 __A, __m128 __B, __m128 __C)
     ^
    __temp__.c:1084:3: error: called from here
       _mm_store_ps1(__pyx_v_x, _mm_fmadd_ps(__pyx_v_mult, _mm_load_ps1(__pyx_v_x), __pyx_v_shift));
       ^
    In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/immintrin.h:79:0,
                     from __temp__.c:578:
    /usr/lib/gcc/x86_64-linux-gnu/5/include/fmaintrin.h:55:1: error: inlining failed in call to always_inline ‘_mm_fmadd_ps’: target specific option mismatch
     _mm_fmadd_ps (__m128 __A, __m128 __B, __m128 __C)
     ^
    __temp__.c:1084:3: error: called from here
       _mm_store_ps1(__pyx_v_x, _mm_fmadd_ps(__pyx_v_mult, _mm_load_ps1(__pyx_v_x), __pyx_v_shift));
       ^
    error: command 'gcc' failed with exit status 1
    ###########
    >>> Successfully built C extensions. <<<
    

    Not sure what to make of this.

    opened by themightyoarfish 7
  • IMPORTANT: On Contributing to HyperLearn + Note to Contributors

    Hey Contributor!

    Thanks for checking out HyperLearn!! Super appreciate it.

    Since the package is new (only started around August 27th), Issues are the best place to start helping out, and/or check out the Projects tab. There's a whole list of stuff I envisioned to complete.

    Also, if you have a NEW idea: please post an issue and label it new enhancement.

    In terms of priorities, I wanted to start from the bottom up, so as to make all functions faster; that means:

    1. Since Singular Value Decomp is the backbone for nearly all Linear algos (PCA, Linear, Ridge reg, LDA, QDA, LSI, Partial LS, etc etc...), we need to focus on making SVD faster! (Also Linear solvers).

    2. When SVD optimization is OK, then slowly creep into Least Squares / L1 solvers. These need to be done before other algorithms in order for speedups to be apparent.

    3. If NUMBA code is used, it needs to be PRE-COMPILED in order to save time, or else we need to wait a whopping 2-3 seconds before each call... (see the sketch after this list).

    4. Then, focus on Linear Algorithms, including but not limited to:

    • Linear Regression
    • Ridge Regression
    • SVD Solving, QR Solving, Cholesky Solving for backend
    • Linear Discriminant Analysis
    • Quadratic Discriminant Analysis
    • Partial SVD (Truncated SVD --> maybe use Facebook's PCA / Gensim's LSI? --> try not to use ARPACK's svds...)
    • Full Principal Component Analysis (complete SVD decomp)
    • Partial PCA (Truncated SVD)
    • Canonical Correlation Analysis
    • Partial Least Squares
    • Spline Regression based on Least Squares (will talk more later on this)
    • Correlation Regression (will talk more later on this)
    • Outlier Tolerant Regression (will talk more later on this)
    • SGDRegressor / SGDClassifier easily through PyTorch
    • Batch Least Squares? --> Closed Form soln + averaging
    • Others...
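    On point 3, the Numba warm-up can be avoided either by caching compiled code to disk or by compiling eagerly with an explicit signature. Both are standard Numba features (a sketch, not the actual numba_compile.py):

    from numba import njit, float64

    # Option 1: cache=True persists the compiled machine code, so
    # only the very first run ever pays the 2-3 second cost.
    @njit(cache=True)
    def squaresum(v):
        s = 0.0
        for i in range(v.shape[0]):
            s += v[i] ** 2
        return s

    # Option 2: an explicit signature compiles at import time
    # (ahead of the first call) instead of lazily.
    @njit(float64(float64[:]))
    def squaresum_eager(v):
        s = 0.0
        for i in range(v.shape[0]):
            s += v[i] ** 2
        return s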
    help wanted good first issue 
    opened by danielhanchen 5
  • Cannot import hyperlearn subpackages

    After python setup.py install I get

    ModuleNotFoundError: No module named 'hyperlearn.cython'
    

    when trying to import hyperlearn.(random|linalg|…). What could the problem be?

    opened by themightyoarfish 4
  • GPU RAM consumption?

    It seems like the RAM consumption refers to CPU RAM, right? However, GPU computation is highly limited by its memory capacity. Computation over a super large dataset, such as K-Means / t-SNE, tends to cause out-of-memory errors without a careful breakdown. Does HyperLearn specifically deal with super large inputs (hosted in CPU memory but unable to reside in GPU memory all at once) regardless of the GPU memory size limit?

    opened by farleylai 4
  • Install instructions?

    Hi there, thanks for the package! It looks very cool and it seems like you put a lot of work into it. Kudos!!! Sadly, I can't find installation instructions anywhere. Can you point me to them? Thanks,

    opened by hoangthienan95 3
  • Compiling numba code fails on Ubuntu 16.04

    numba.errors.LoweringError: Failed at nopython (nopython mode backend)
    Buffer dtype cannot be buffer

    File "hyperlearn/numba.py", line 126:
    def squaresum(v):
        s = v[0]**2
        for i in prange(1,len(v)):
        ^
    [1] During: lowering "id=6[LoopNest(index_variable = parfor_index.395, range = ($const28.2, $28.5, 1))]{44: <ir.Block at /home/lee/hyperlearn/hyperlearn/numba.py (126)>}Var(parfor_index.395, /home/lee/hyperlearn/hyperlearn/numba.py (126))" at /home/lee/hyperlearn/hyperlearn/numba.py (126)

    opened by MCDM2018 1
  • Undefined function in `randomized.linalg.svd`

    https://github.com/danielhanchen/hyperlearn/blob/a2af0ba0420be6a2f036c761e7ecae9f0e66c21a/hyperlearn/randomized/linalg.py#L299

    The name `_min` is not defined. It seems it existed in hyperlearn_old/numba but not in the current one.

    opened by themightyoarfish 1
  • Compiling Numba code fails on 16.04

    At the end of python setup.py install, it says to run

    #### If you want to compile Numba code, please run:
        >>>>   python numba_compile.py
    

    to compile Numba. Doing this gives me

    Progress: |||||||||||||||Traceback (most recent call last):
      File "numba_compile.py", line 6, in <module>
        from hyperlearn.numba.funcs import *
      File "/home/share/software/sources/hyperlearn/hyperlearn/numba/funcs.py", line 4, in <module>
        from ..cfuncs import uinteger
      File "/home/share/software/sources/hyperlearn/hyperlearn/cfuncs.py", line 3, in <module>
        from .cython.base import isComplex, available_memory, isList
    ModuleNotFoundError: No module named 'hyperlearn.cython.base'
    
    opened by themightyoarfish 1
  • Shocking Confusing Speed / Timing results of Algorithms (Sklearn, Numpy, Scipy, Pytorch, Numba) | Prelim results

    Anyways, I didn't update the code a lot, but that's because I was busy testing and finding out which algos were the most stable and best.

    Key findings for N = 5,000, P = 6,000 [more features than rows; a near-square matrix]

    1. For the pseudoinverse (used in Linear Reg, Ridge Reg, lots of other algos), JIT, Scipy MKL, PinvH, Pinv2 and HyperLearn's Pinv are very similar. PyTorch's is clearly problematic, being close to 4x slower than Scipy MKL.

    2. For Eigh (used in PCA, LDA, QDA, other algos), Sklearn's PCA utilises SVD. Clearly not a good idea, since it is much better to compute the eigenvectors / eigenvalues on XTX. JIT Eigh is the clear winner at 14.5 seconds on XTX, whilst Numpy is 2x slower. Torch is likewise slower once again...

    3. So, for PCA, a speedup of 3 times is seen when using JIT-compiled Eigh compared to Sklearn's PCA.

    4. To solve X @ theta = y, Torch GELS is super unstable. Like, really. If you use Torch GELS, don't forget to call theta_hat[np.isnan(theta_hat) | np.isinf(theta_hat)] = 0, or else the results are problematic. All other algos have very similar MSEs. HyperLearn's Regularized Cholesky Solve takes a mere 0.635 seconds, beating Sklearn's next fastest Ridge Solve (via Cholesky) by over 100% once matrix multiplication time is included --> HyperLearn 2.89s vs Sklearn 4.53s. (A sketch of this route follows below.)
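    For reference, the regularized Cholesky route from point 4 amounts to solving the ridge normal equations (a SciPy sketch under the stated alpha > 0 idea, not HyperLearn's exact code):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def ridge_cholesky_solve(X, y, alpha=1e-4):
        # XTX + alpha * I is symmetric positive definite for any
        # alpha > 0, so the Cholesky factorization always succeeds,
        # even when X itself is near-singular.
        p = X.shape[1]
        A = X.T @ X + alpha * np.eye(p)
        return cho_solve(cho_factor(A), X.T @ y)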

    So to conclude:

    1. HyperLearn's Pseudoinverse has no speed improvement.

    2. HyperLearn's PCA will have over a 2x speed boost (200% improvement).

    3. HyperLearn's Linear Solvers will be over 1x faster (100% improvement).

    Help make HyperLearn better! All contributors are welcome, as this is truly an overwhelming project...

    research
    opened by danielhanchen 1
  • Cholesky Decomposition for Linear Regression

    I tried N = 1,000,000 and P = 100, and Cholesky blew me away!!! (It took a whopping 400ms... YES, milliseconds) to fit.

    Goodness. PyTorch / Numba SVD takes at least 2 seconds! The issue with Cholesky is STABILITY: if a matrix is near singular, the results will be horrible. So, need to do the following:

    1. Add cholesky_solve
    2. Add a regularization default that is NOT 0, but 0.0001 or something, to enforce stability (Ridge Regression theory says XTX + alpha is always invertible for alpha > 0).
    3. Use cholesky_solve to make cholesky_stats for Conf, Pred Intervals etc.
    enhancement nearly-done 
    opened by danielhanchen 1
  • Discord Server!

    Hi everyone!!! This package will be revamped ASAP to make it a functional beast!!!

    Join our Discord server: https://discord.gg/QPaNysgA to chat on everything about fast algorithms, AI and more!!!

    opened by danielhanchen 0
  • Installation Problem

    I installed hyperlearn successfully using python setup.py install. It shows:

    #### Welcome to Umbra's HyperLearn! ####
    #### During installation, code will be compiled down to C / LLVM via Numba. ####
    #### This could mean you have to wait...... ####
    
    #### You MUST have a C compiler AND MKL/LAPACK enabled Scipy. ####
    #### If you have Anaconda, then you are set to go! ####
    running install
    running bdist_egg
    running egg_info
    writing manifest file 'hyperlearn.egg-info/SOURCES.txt'
    running install_lib
    running build_py
    creating build/bdist.linux-x86_64/egg
    creating build/bdist.linux-x86_64/egg/hyperlearn
    byte-compiling build/bdist.linux-x86_64/egg/hyperlearn/__init__.py to __init__.cpython-37.pyc
    creating build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying hyperlearn.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    removing 'build/bdist.linux-x86_64/egg' (and everything under it)
    creating /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages/hyperlearn-0.0.1-py3.7.egg
    Extracting hyperlearn-0.0.1-py3.7.egg to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    byte-compiling /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages/hyperlearn-0.0.1-py3.7.egg/hyperlearn/__init__.py to __init__.cpython-37.pyc
    Adding hyperlearn 0.0.1 to easy-install.pth file
    
    Installed /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages/hyperlearn-0.0.1-py3.7.egg
    Processing dependencies for hyperlearn==0.0.1
    Searching for Cython==0.29
    Best match: Cython 0.29
    Adding Cython 0.29 to easy-install.pth file
    Installing cygdb script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing cython script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing cythonize script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for psutil==5.7.0
    Best match: psutil 5.7.0
    Adding psutil 5.7.0 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for numba==0.55.2
    Best match: numba 0.55.2
    Adding numba 0.55.2 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for torch==1.6.0
    Best match: torch 1.6.0
    Adding torch 1.6.0 to easy-install.pth file
    Installing convert-caffe2-to-onnx script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing convert-onnx-to-caffe2 script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for pandas==1.3.5
    Best match: pandas 1.3.5
    Adding pandas 1.3.5 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for scipy==1.7.3
    Best match: scipy 1.7.3
    Adding scipy 1.7.3 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for scikit-learn==1.0.2
    Best match: scikit-learn 1.0.2
    Adding scikit-learn 1.0.2 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for torchvision==0.7.0+cu101
    Best match: torchvision 0.7.0+cu101
    Adding torchvision 0.7.0+cu101 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for numpy==1.21.6
    Best match: numpy 1.21.6
    Adding numpy 1.21.6 to easy-install.pth file
    Installing f2py script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing f2py3 script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing f2py3.7 script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for llvmlite==0.38.1
    Best match: llvmlite 0.38.1
    Adding llvmlite 0.38.1 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for setuptools==62.3.2
    Best match: setuptools 62.3.2
    Adding setuptools 62.3.2 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for future==0.18.2
    Best match: future 0.18.2
    Adding future 0.18.2 to easy-install.pth file
    Installing futurize script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    Installing pasteurize script to /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/bin
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for python-dateutil==2.8.2
    Best match: python-dateutil 2.8.2
    Adding python-dateutil 2.8.2 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for pytz==2022.1
    Best match: pytz 2022.1
    Adding pytz 2022.1 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for joblib==1.1.0
    Best match: joblib 1.1.0
    Adding joblib 1.1.0 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for threadpoolctl==3.1.0
    Best match: threadpoolctl 3.1.0
    Adding threadpoolctl 3.1.0 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for Pillow==7.2.0
    Best match: Pillow 7.2.0
    Adding Pillow 7.2.0 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Searching for six==1.16.0
    Best match: six 1.16.0
    Adding six 1.16.0 to easy-install.pth file
    
    Using /home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/sgcc36/lib/python3.7/site-packages
    Finished processing dependencies for hyperlearn==0.0.1
    #### HyperLearn has been installed! ####
    
    #### If you want to compile Numba code, please run:
        >>>>   python numba_compile.py
    
    

    However, when I import hyperlearn, I find that no submodules can be found, and there is only a hyperlearn-0.0.1-py3.7.egg file in the site-packages directory. Did anything go wrong during my installation? Thanks for your reply.

    opened by lk1983823 1
Owner
Daniel Han-Chen
Fast energy efficient machine learning algorithms
High performance implementation of Extreme Learning Machines (fast randomized neural networks).

High Performance toolbox for Extreme Learning Machines. Extreme learning machines (ELM) are a particular kind of Artificial Neural Networks, which sol

Anton Akusok 174 Dec 7, 2022
Contains an implementation (sklearn API) of the algorithm proposed in "GENDIS: GEnetic DIscovery of Shapelets" and code to reproduce all experiments.

GENDIS GENetic DIscovery of Shapelets In the time series classification domain, shapelets are small subseries that are discriminative for a certain cl

IDLab Services 90 Oct 28, 2022
Optimal Randomized Canonical Correlation Analysis

ORCCA Optimal Randomized Canonical Correlation Analysis This project is for the python version of ORCCA algorithm. It depends on Numpy for matrix calc

Yinsong Wang 1 Nov 21, 2021
Python/Sage Tool for deriving Scattering Matrices for WDF R-Adaptors

R-Solver A Python tools for deriving R-Type adaptors for Wave Digital Filters. This code is not quite production-ready. If you are interested in contr

null 8 Sep 19, 2022
Test symmetries with sklearn decision tree models

Test symmetries with sklearn decision tree models Setup Begin from an environment with a recent version of python 3. source setup.sh Leave the enviro

Rupert Tombs 2 Jul 19, 2022
In this Repo a simple Sklearn Model will be trained and pushed to MLFlow

SKlearn_to_MLFLow In this Repo a simple Sklearn Model will be trained and pushed to MLFlow Install This Repo is based on poetry python3 -m venv .venv

null 1 Dec 13, 2021
Turning images into '9-pan' palettes using KMeans clustering from sklearn.

img2palette Turning images into '9-pan' palettes using KMeans clustering from sklearn. Requirements We require: Pillow, for opening and processing ima

Samuel Vidovich 2 Jan 1, 2022
Napari sklearn decomposition

napari-sklearn-decomposition A simple plugin to use with napari This napari plug

null 1 Sep 1, 2022
Breast-Cancer-Classification - Using SKLearn breast cancer dataset which contains 569 examples and 32 features classifying has been made with 6 different algorithms

Mert Sezer Ardal 1 Jan 31, 2022
Multiple Linear Regression using the LinearRegression class from sklearn.linear_model library

Multiple-Linear-Regression-master - A python program to implement Multiple Linear Regression using the LinearRegression class from sklearn.linear model library

Kushal Shingote 1 Feb 6, 2022
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

Master status: Development status: Package information: TPOT stands for Tree-based Pipeline Optimization Tool. Consider TPOT your Data Science Assista

Epistasis Lab at UPenn 8.9k Jan 9, 2023
Python Extreme Learning Machine (ELM) is a machine learning technique used for classification/regression tasks.

Python Extreme Learning Machine (ELM) Python Extreme Learning Machine (ELM) is a machine learning technique used for classification/regression tasks.

Augusto Almeida 84 Nov 25, 2022
Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques

Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.

Vowpal Wabbit 8.1k Dec 30, 2022
Implementing continuous integration & delivery (CI/CD) in machine learning projects

CML with cloud compute This repository contains a sample project using CML with Terraform (via the cml-runner function) to launch an AWS EC2 instance

Iterative 19 Oct 3, 2022
Azure Cloud Advocates at Microsoft are pleased to offer a 12-week, 24-lesson curriculum all about Machine Learning

Microsoft 43.4k Jan 4, 2023
ml4h is a toolkit for machine learning on clinical data of all kinds including genetics, labs, imaging, clinical notes, and more

Broad Institute 65 Dec 20, 2022
Microsoft contributing libraries, tools, recipes, sample codes and workshop contents for machine learning & deep learning.

Microsoft 366 Jan 3, 2023
A data preprocessing package for time series data. Design for machine learning and deep learning.

Allen Chiang 152 Jan 7, 2023
A mindmap summarising Machine Learning concepts, from Data Analysis to Deep Learning.

Daniel Formoso 5.7k Dec 30, 2022