The SHOGUN machine learning toolbox

Unified and efficient Machine Learning since 1999.


Shogun is implemented in C++ and offers automatically generated, unified interfaces to Python, Octave, Java/Scala, Ruby, C#, R, and Lua. We are currently working on adding more languages, including JavaScript, D, and Matlab.

Interface Status

  • Python: mature (no known problems)
  • Octave: mature (no known problems)
  • Java/Scala: stable (no known problems)
  • Ruby: stable (no known problems)
  • C#: stable (no known problems)
  • R: beta (most examples work, static calls unavailable)
  • Perl: pre-alpha (work-in-progress quality)
  • JS: pre-alpha (work-in-progress quality)

See our website for examples in all languages.


Shogun is supported under GNU/Linux, MacOSX, FreeBSD, and Windows.

Directory Contents

The following directories are found in the source distribution. Note that some folders are submodules that can be checked out with git submodule update --init.

  • src - source code, separated into C++ source and interfaces
  • doc - readmes (doc/readme, submodule), Jupyter notebooks, cookbook (API examples), licenses
  • examples - example files for all interfaces
  • data - data sets (submodule, required for examples)
  • tests - unit tests and continuous integration of interface examples
  • applications - applications of SHOGUN (outdated)
  • benchmarks - speed benchmarks
  • cmake - cmake build scripts


License

Shogun is distributed under the BSD 3-clause license, with optional GPL3 components. See doc/licenses for details.

Issues

  • Implement heterogeneous (GPU+CPU) dot product computation routines (Deep learning project)


    The dot product operation is one of the major building blocks of deep neural network architectures. The routine implemented in this task should be able to handle batch computation of dot products. For references, see Theano, CUDA, OpenCL, and ViennaCL. It is also worth implementing tests to measure performance and memory usage.

    Please join the discussion before starting work on any code. We expect to refine the task through further discussion.

    good first issue 
    opened by lisitsyn 101
  • #2068 Simple Gaussian Process Regression on Movielens.


    How do I commit data to shogun-data? Should I open a separate pull request on shogun-data?

    This is the simple example of using Gaussian Process Regression on Movielens.

    opened by pl8787 61
  • Add kmeans page to cookbook


    • There's no CLabels parameter in class CKMeans, so I can't find a way to use apply_* or eval.evaluate to compare the test and training datasets.
    • That said, I don't see why CKMeans couldn't have CLabels; we could simply label the clusters 1..N.
    • I thought about evaluating clustering performance by computing the Euclidean distances between the centers of the training and test datasets, but there's no handy method for that at the moment.
    • I don't see the difference between the datasets fm_train_real.dat and classifier_binary__2d_linear_features_train.dat, but I think it doesn't really matter which one to use?
    opened by OXPHOS 59
  • Add meta example features-char-string


    A simple meta example for CStringFeatures.

    I would like to make changes and add an output file for an integration test, but I am not sure whether the current outputs are sufficient. Currently, it stores "max_string_length", "number_of_strings" and "length_of_first_string". I don't think it is practically possible to check all the values of "strings".

    However, if you don't have a better idea, I could add eight variables that store the value of the first vector before and after the change to "test".

    opened by avramidis 56
  • Refactor laplacian


    @karlnapf take a look at this. I will send the link for the notebook tomorrow.

    Note that the original implementation of LaplacianInferenceMethod in Shogun used log(lu.determinant()) to compute the log_determinant, which is not numerically stable. (In fact, this implementation does not follow the GPML code.)

    Maybe MatrixOperations.h will be merged into Math.h. However, I think in that case Math.h would need to include the Eigen3 header.

    Another issue is that I currently use MatrixXd and VectorXd to pass variables in MatrixOperations.h; maybe SGVector and SGMatrix would be better. (Should I use "SGVector &" or "SGVector"?) I do not know whether passing an SGVector to a function copies the elements in the SGVector.

    opened by yorkerlin 54
  • Implement an example of variational approximation for binary GP classification


    This task is part of the Variational Learning for Recommendations with Big Data project.

    Our goal is to reproduce a simple example of variational approximation. We will use a GP prior with zero mean and a linear kernel, and generate synthetic data using a logit likelihood. We will then compute an approximate Gaussian posterior N(m,V) with the restriction that the diagonal of V is 1. Our goal is to find m and V. We will use the KL method of Kuss and Rasmussen (2005).

    I have demo code in MATLAB here, and the hope is to reproduce this using Shogun.

    You need to do the following two main tasks: (1) write a function similar to ElogLik.m for the logit likelihood; (2) interface the optimization in example.m using Shogun's LBFGS implementation.

    Please let us know that you are working on it, and feel free to ask any questions to @karlnapf or me.

    Tag: Development Task good first issue 
    opened by emtiyaz 51
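For reference, the KL method mentioned above fits the Gaussian approximation N(m, V) by maximizing a variational lower bound of roughly this form (standard variational-inference notation, assuming a zero-mean GP prior with kernel matrix K and per-observation likelihoods p(y_i | f_i)):

```latex
\mathcal{L}(m, V) \;=\; \sum_{i=1}^{n} \mathbb{E}_{q(f_i)}\!\left[\log p(y_i \mid f_i)\right]
\;-\; \mathrm{KL}\!\left(\mathcal{N}(m, V) \,\middle\|\, \mathcal{N}(0, K)\right)
```

with the task's stated constraint diag(V) = 1 imposed during the optimization.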
  • cv::Mat to CDenseFeature conversion Factory and vice versa.


    I have made a factory which directly converts any cv::Mat object into any (required) type of CDenseFeatures, and CDenseFeatures<float64_t> into the required type of cv::Mat.

    opened by kislayabhi 48
  • Added Documentation regarding issue #1878


    Added a Python notebook named 'pca_notebook.ipynb' in doc/ipython-notebooks/pca. Implemented PCA on toy data for 2D-to-1D and 3D-to-2D projection, and implemented Eigenfaces for data compression and face recognition using the att_face dataset.

    opened by kislayabhi 48
  • WIP Write Generalized Linear Machine class


    #5005 #5000 This is the basic framework for the Generalized Linear Machine class. This class is supposed to implement the following distributions: BINOMIAL, GAMMA, SOFTPLUS, PROBIT, POISSON.

    The code has been written with this reference in mind: the PyGLMNet library. However, I have only written code for the Poisson distribution so far.

    This PR is meant to open a discussion about the implementation of the GLM and to gather feedback on my code. @lgoetz @geektoni


    • [x] Write code.
    • [x] Add basic test.
    • [x] Add gradient test.
    • [X] Link github gists for generating data.
    • [X] Check why the SGObject test is failing.
    • [ ] Use FeatureDispatchCRTP.
    opened by Hephaestus12 47
  • Added DEPRECATED versions of statistic and variance in streaming MMD


    DEPRECATED versions are available with

    • statistic type S_UNBIASED_DEPRECATED
    • null variance estimation method NO_PERMUTATION_DEPRECATED
    • null approximation method MMD1_GAUSSIAN_DEPRECATED
    opened by lambday 47
  • One issue about using Shogun's optimizers in target languages


    @karlnapf In CInference class

    virtual void register_minimizer(Minimizer* minimizer);

    In Minimizer class

    #ifndef MINIMIZER_H
    #define MINIMIZER_H

    #include <shogun/lib/config.h>

    namespace shogun
    {
    /** @brief The minimizer base class. */
    class Minimizer
    {
    public:
            /** Do minimization and get the optimal value
             * @return optimal value
             */
            virtual float64_t minimize()=0;

            virtual ~Minimizer() {}
    };
    }
    #endif

    Note that

    • CInference is a sub-class of SGObject
    • LBFGSMinimizer class is a sub-class of Minimizer
    • CSingleLaplaceInferenceMethod is a sub-class of CInference
    • Minimizer is NOT a sub-class of CSGObject

    The following lines of C++ code work.

    CSingleLaplaceInferenceMethod* inf = new CSingleLaplaceInferenceMethod();
    LBFGSMinimizer* opt=new LBFGSMinimizer();

    However, the following lines of Python code do not work

    opt = LBFGSMinimizer()
    inf.register_minimizer(opt)

    Error output:

    TypeError: in method 'Inference_register_minimizer', argument 2 of type 'Minimizer *'
    Type: Bug 
    opened by yorkerlin 44
  • Examples are outdated


    I just tried to make the Newton example work against 6.1.4 under Windows. It does not even compile because of API changes all over the place. It would be great to have at least one C/C++ AI/DL package with working examples. Shogun was more or less my last hope, as tensorflow's C API is incomplete.

    opened by jjYBdx4IL 0
  • Machine objects should return a reference to themselves


    Machine objects should return a reference to themselves (like in sklearn):

    auto machine = pipeline->over(std::make_shared<NormOne>());
    machine->train(train_feats, train_labels);
    auto pred = machine->apply_multiclass(test_feats);

    should be simply

    auto pred = pipeline->over(std::make_shared<NormOne>())
                        ->train(train_feats, train_labels)
                        ->apply_multiclass(test_feats);

    This should be a simple fix in the Machine::train signature, but it might break some code.

    good first issue 
    opened by gf712 7
  • Error freeing memory LibSVM when exiting sample application


    I built shogun master on Windows 10 x64 with Visual Studio 2019. I built the sample classifier_minimal_svm; it works, but I get this error when exiting the application:

    Critical error detected c0000374
    classifier_minimal_svm.exe has triggered a breakpoint.
    Exception thrown at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted 
    (parameters: 0x00007FFC396427F0).
    Unhandled exception at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFC396427F0).

    This is the stack trace:

    ntdll.dll!00007ffc395db0b9()	Unknown
    ntdll.dll!00007ffc395db083()	Unknown
    ntdll.dll!00007ffc395e390e()	Unknown
    ntdll.dll!00007ffc395e3c1a()	Unknown
    ntdll.dll!00007ffc3957ecb1()	Unknown
    ntdll.dll!00007ffc3958ce62()	Unknown
    ucrtbase.dll!00007ffc357ec7eb()	Unknown
    classifier_minimal_svm.exe!shogun::sg_free(void * ptr) Line 186	C++
    classifier_minimal_svm.exe!shogun::sg_generic_free<int,0>(int * ptr) Line 124	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::free_data() Line 405	C++
    classifier_minimal_svm.exe!shogun::SGReferencedData::unref() Line 102	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::~SGVector<int>() Line 173	C++
    classifier_minimal_svm.exe!shogun::KernelMachine::~KernelMachine() Line 79	C++
    classifier_minimal_svm.exe!shogun::SVM::~SVM() Line 40	C++
    classifier_minimal_svm.exe!shogun::LibSVM::~LibSVM() Line 37	C++
    classifier_minimal_svm.exe!shogun::LibSVM::`scalar deleting destructor'(unsigned int)	C++
    classifier_minimal_svm.exe!std::_Destroy_in_place<shogun::LibSVM>(shogun::LibSVM & _Obj) Line 269	C++
    classifier_minimal_svm.exe!std::_Ref_count_obj2<shogun::LibSVM>::_Destroy() Line 1446	C++
    classifier_minimal_svm.exe!std::_Ref_count_base::_Decref() Line 542	C++
    classifier_minimal_svm.exe!std::_Ptr_base<shogun::LibSVM>::_Decref() Line 776	C++
    classifier_minimal_svm.exe!std::shared_ptr<shogun::LibSVM>::~shared_ptr<shogun::LibSVM>() Line 1034	C++
    classifier_minimal_svm.exe!main(int argc, char * * argv) Line 41	C++
    [Inline Frame] classifier_minimal_svm.exe!invoke_main() Line 78	C++
    classifier_minimal_svm.exe!__scrt_common_main_seh() Line 288	C++

    I see that in a previous release there was this line of code, now removed:

    // free up memory
    Type: Bugfixing Tag: Cleanup 
    opened by spiovesan 15
  • Make Machine class stateless


    @LiuYuHui's main GSoC project. The Machine class becomes stateless with respect to Features and Labels, which means that the user has to provide features and labels when fitting a Machine. This is essentially done by adding the notion of (Non)Parametric Machines.

    Tag: GSoC 
    opened by gf712 2
  • Remove SVM from interfaces


    Currently we expose SVM to the interfaces, but this is somewhat redundant, as we already expose Machine (and SVM is a Machine-derived class). This was just a hack to make sure that only SVM types can be passed with put to objects expecting an SVM parameter. But now we can test at runtime whether the object being put can be cast to the expected type.

    So all Machines which expect an SVM parameter (e.g. MKLClassification) should have this castability check and register the svm parameter as a Machine. This will avoid having to cast a Machine to SVM when passing it to things like MKLClassification.

    @karlnapf the only issue will be how the user gets access to compute_svm_primal_objective and compute_svm_dual_objective. Maybe SVM shouldn't be removed from swig, but classes like MKL should register SVM as Machine? That way we avoid having to cast the Machine object when it is just passed as an argument in put.

    Tag: Cleanup Tag: SWIG 
    opened by gf712 11
  • Replace Math::fequals with new utility function


    We are in the process of dropping Math.h, and one of the more commonly used operations there is fequals, which checks whether the difference between two values is less than the type's epsilon. The task is to write this as a free function, probably in a new file in the util directory, and drop Math::fequals.

    good first issue Tag: Cleanup 
    opened by gf712 16
Releases

  • shogun_6.1.4 (Jul 5, 2019)

  • shogun_6.1.3 (Dec 7, 2017)


    • Drop all <math.h> function calls [Viktor Gal]
    • Use C++11 std::isnan, std::isfinite, std::isinf [Viktor Gal]


    • Port ipython notebooks to be python3 compatible [Viktor Gal]
    • Use the shogun-static library on Windows when linking the interface library [Viktor Gal]
    • Fix python typemap when compiling with MSVC [Viktor Gal]
    • Fix ShogunConfig.cmake paths [Viktor Gal]
    • Fix meta example parser bug in parallel builds [Esben Sørig]
  • shogun_6.1.2 (Nov 29, 2017)

  • shogun_6.1.1 (Nov 29, 2017)


    • Install headers of GPL models when LICENSE_GPL_SHOGUN is enabled [Viktor Gal]
    • Always turn on LIBSHOGUN_BUILD_STATIC when compiling with MSVC [Viktor Gal]
    • Fix ipython notebook errors [Viktor Gal]
  • shogun_6.1.0 (Nov 28, 2017)

    • This release is dedicated to Heiko's successful PhD defense!

    • Add conda-forge packages, to get prebuilt binaries via the cross-platform conda package manager [Dougal Sutherland]

    • Change interface cmake variables to INTERFACE_*

    • Move GPL code to gpl submodule [Heiko Strathmann]


    • Enable using BLAS/LAPACK from Eigen by default [Viktor Gal]
    • Add iterators to SGVector and SGMatrix [Viktor Gal]
    • Significantly lower the runtime of KernelPCA (GSoC '17) [Michele Mazzoni]
    • Refactor FisherLDA and LDA solvers (GSoC '17) [Michele Mazzoni]
    • Add automated test for trained model serialization (GSoC '17) [Michele Mazzoni]
    • Enable SWIG director classes by default [Viktor Gal]
    • Vectorize DotFeatures covariance/mean calculation [Michele Mazzoni]
    • Support for premature stopping of model training (GSoC '17) [Giovanni De Toni]
    • Add support for observable variables (GSoC '17) [Giovanni De Toni]
    • Use TFLogger to serialize observed variables for TensorBoard (GSoC '17) [Giovanni De Toni]
    • Drop CMath::dot and SGVector::dot and use linalg::dot [Viktor Gal]
    • Added class probabilities for BaggingMachine (GSoC '17) [Olivier Nguyen]


    • Fix transpose bug in Ruby typemap for matrices [Elias Saalmann]
    • Fix MKL detection and linking; use mkl_rt when available [Viktor Gal]
    • Fix Windows static linking [Viktor Gal]
    • Fix SWIG interface compilation on Windows [qcrist]
    • Fix CircularBuffer bug that broke parsing of big CSV and LibSVM files #1991 [Viktor Gal]
    • Fix R interface when using clang to compile the interface [Viktor Gal]
  • shogun_6.0.0 (Apr 23, 2017)

    • Add native MS Windows support [Viktor Gal]
    • Shogun requires the compiler to support C++11 features
    • Shogun cloud online: Jupyter notebook with Shogun from the browser,


    • LDA now supports 32, 64 and 128 bit floating point numbers [Chris Goldsworthy]
    • Add SHOGUN_NUM_THREADS environment variable to control the number of threads used by the models at runtime [Viktor Gal]
    • Added Scala Interface to the build [Abhinav Rai]
    • Major re-writing and API changes in kernel statistical hypothesis testing framework, significant speed up in permutation test for quadratic time MMD, new kernel selection algorithms for quadratic time MMD [Soumyajit De]


    • Fix build error of R interface for R>=3.3.0, #3460 [Heiko Strathmann]
    • Make the code compatible with Eigen 3.3.0 [Viktor Gal]
    • Fix number of CPUs detected on Linux [Viktor Gal]
    • Fix multi-threading in KMeansBase [Viktor Gal]
    • Make ExponentialARDKernel thread-safe [Viktor Gal]
    • Make PRNG thread-safe [Viktor Gal]
    • Fix python interface when using libshogun compiled with OpenMP [Viktor Gal]
    • Fix CART to work with cross-validation [Fernando Iglesias]

    Cleanup, efficiency updates, and API Changes:

    • Port multi-threading to use OpenMP backend in Kernel [Viktor Gal]
    • Fix false sharing in EuclideanDistance [Viktor Gal]
    • Fix out of source build of the whole project [Viktor Gal]
    • Add LIBSHOGUN cmake flag to turn off libshogun compilation [Viktor Gal]
    • Export Shogun target with cmake to enable to build modular interfaces to a pre-compiled libshogun on the system without requiring to compile libshogun itself [Viktor Gal]


    • Contains major rewrite and clean-up of developer documentation in doc/readme [Heiko Strathmann, Lea Götz]
    • Known issue: Octave multithreaded crashes, currently bindings are initialized single-threaded, [Heiko Strathmann]
  • shogun_5.0.0 (Nov 4, 2016)


    • GSoC 2016 project of Saurabh Mahindre: Major efficiency improvements for KMeans, LARS, Random Forests, Bagging, KNN.
    • Add new Shogun cookbook for documentation and testing across all target languages [Heiko Strathmann, Sergey Lisitsyn, Esben Sorig, Viktor Gal].
    • Added option to learn CombinedKernel weights with GP approximate inference [Wu Lin].
    • LARS now supports 32, 64, and 128 bit floating point numbers [Chris Goldsworthy].


    • Fix gTest segfaults with GCC >= 6.0.0 [Björn Esser].
    • Make Java and CSharp install-dir configurable [Björn Esser].
    • Autogenerate modshogun.rb with correct module-suffix [Björn Esser].
    • Fix KMeans++ initialization [Saurabh Mahindre].

    Cleanup, efficiency updates, and API Changes:

    • Make Eigen3 a hard requirement. Bundle if not found on system. [Heiko Strathmann]
    • Drop ALGLIB (GPL) dependency in CStatistics and ship CDFLIB (public domain) instead [Heiko Strathmann]
    • Drop p-value estimation in model-selection [Heiko Strathmann]
    • Static interfaces have been removed [Viktor Gal]
    • New base class ShiftInvariantKernel of which GaussianKernel inherits [Rahul De].


    This version contains a new CMake option USE_GPL_SHOGUN, which when set to OFF will exclude all GPL codes from Shogun [Heiko Strathmann].

  • shogun_4.1.0 (May 17, 2016)

    This is a new feature and cleanup release.


    • Added GEMPLP for approximate inference to the structured output framework [Jiaolong Xu].
    • Efficiency improvements of the FITC framework for GP inference (FITC_Laplace, FITC, VarDTC) [Wu Lin].
    • Added optimisation of inducing variables in sparse GP inference [Wu Lin].
    • Added optimisation methods for GP inference (Newton, Cholesky, LBFGS, ...) [Wu Lin].
    • Added Automatic Relevance Determination (ARD) kernel functionality for variational GP inference [Wu Lin].
    • Updated Notebook for variational GP inference [Wu Lin].
    • New framework for stochastic optimisation (L1/2 loss, mirror descent, proximal gradients, adagrad, SVRG, RMSProp, adadelta, ...) [Wu Lin].
    • New Shogun meta-language for automatically generating code listings in all target languages [Esben Sörig].
    • Added periodic kernel [Esben Sörig].
    • Add gradient output functionality in Neural Nets [Sanuj Sharma].


    • Fixes for java_modular build using OpenJDK [Björn Esser].
    • Catch uncaught exceptions in Neural Net code [Khaled Nasr].
    • Fix build of modular interfaces with SWIG 3.0.5 on MacOSX [Björn Esser].
    • Fix segfaults when calling delete[] twice on SGMatrix-instances [Björn Esser].
    • Fix for building with full-hardening-(CXX|LD)FLAGS [Björn Esser].
    • Patch SWIG to fix a problem with SWIG and Python >= 3.5 [Björn Esser].
    • Add modshogun.rb: make sure narray is loaded before [Björn Esser].
    • set working-dir properly when running R (#2654) [Björn Esser].

    Cleanup, efficiency updates, and API Changes:

    • Added GPU based dot-products to linalg [Rahul De].
    • Added scale methods to linalg [Rahul De].
    • Added element wise products to linalg [Rahul De].
    • Added element-wise unary operators in linalg [Rahul De].
    • Dropped parameter migration framework [Heiko Strathmann].
    • Disabled Python integration tests by default [Sergey Lisitsyn, Heiko Strathmann].
  • shogun_4.0.0 (Jan 18, 2015)

    • This release features the work of our 8 GSoC 2014 students [student; mentors]:
      • OpenCV Integration and Computer Vision Applications [Abhijeet Kislay; Kevin Hughes]
      • Large-Scale Multi-Label Classification [Abinash Panda; Thoralf Klein]
      • Large-scale structured prediction with approximate inference [Jiaolong Xu; Shell Hu]
      • Essential Deep Learning Modules [Khaled Nasr; Sergey Lisitsyn, Theofanis Karaletsos]
      • Fundamental Machine Learning: decision trees, kernel density estimation [Parijat Mazumdar ; Fernando Iglesias]
      • Shogun Missionary & Shogun in Education [Saurabh Mahindre; Heiko Strathmann]
      • Testing and Measuring Variable Interactions With Kernels [Soumyajit De; Dino Sejdinovic, Heiko Strathmann]
      • Variational Learning for Gaussian Processes [Wu Lin; Heiko Strathmann, Emtiyaz Khan]
    • This release also contains several cleanups and bugfixes:
      • Features:
        • New Shogun project description [Heiko Strathmann]
        • ID3 algorithm for decision tree learning [Parijat Mazumdar]
        • New modes for PCA matrix factorizations: SVD & EVD, in-place or reallocating [Parijat Mazumdar]
        • Add Neural Networks with linear, logistic and softmax neurons [Khaled Nasr]
        • Add kernel multiclass strategy examples in multiclass notebook [Saurabh Mahindre]
        • Add decision trees notebook containing examples for ID3 algorithm [Parijat Mazumdar]
        • Add sudoku recognizer ipython notebook [Alejandro Hernandez]
        • Add in-place subsets on features, labels, and custom kernels [Heiko Strathmann]
        • Add Principal Component Analysis notebook [Abhijeet Kislay]
        • Add Multiple Kernel Learning notebook [Saurabh Mahindre]
        • Add Multi-Label classes to enable Multi-Label classification [Thoralf Klein]
        • Add rectified linear neurons, dropout and max-norm regularization to neural networks [Khaled Nasr]
        • Add C4.5 algorithm for multiclass classification using decision trees [Parijat Mazumdar]
        • Add support for arbitrary acyclic graph-structured neural networks [Khaled Nasr]
        • Add CART algorithm for classification and regression using decision trees [Parijat Mazumdar]
        • Add CHAID algorithm for multiclass classification and regression using decision trees [Parijat Mazumdar]
        • Add Convolutional Neural Networks [Khaled Nasr]
        • Add Random Forests algorithm for ensemble learning using CART [Parijat Mazumdar]
        • Add Restricted Boltzmann Machines [Khaled Nasr]
        • Add Stochastic Gradient Boosting algorithm for ensemble learning [Parijat Mazumdar]
        • Add Deep contractive and denoising autoencoders [Khaled Nasr]
        • Add Deep belief networks [Khaled Nasr]
      • Bugfixes:
        • Fix reference counting bugs in CList when reference counting is on [Heiko Strathmann, Thoralf Klein, lambday]
        • Fix memory problem in PCA::apply_to_feature_matrix [Parijat Mazumdar]
        • Fix crash in LeastAngleRegression for the case D greater than N [Parijat Mazumdar]
        • Fix memory violations in bundle method solvers [Thoralf Klein]
        • Fix failure in library_mldatahdf5.cpp example when mldata.org is not working properly [Parijat Mazumdar]
        • Fix memory leaks in Vowpal Wabbit, LibSVMFile and KernelPCA [Thoralf Klein]
        • Fix memory and control flow issues discovered by Coverity [Thoralf Klein]
        • Fix R modular interface SWIG typemap (Requires SWIG >= 2.0.5) [Matt Huska]
      • Cleanup and API Changes:
        • PCA now depends on Eigen3 instead of LAPACK [Parijat Mazumdar]
        • Removing redundant and fixing implicit imports [Thoralf Klein]
        • Hide many methods from SWIG, reducing compile memory by 500MiB [Heiko Strathmann, Fernando Iglesias, Thoralf Klein]
  • shogun_3.2.0 (Feb 17, 2014)

    We are pleased to announce Shogun 3.2.0!

    This release also contains several cleanups and bugfixes:

    • Features:
      • Fully support python3 now
      • Add mini-batch k-means [Parijat Mazumdar]
      • Add k-means++ for more details see the notebook [Parijat Mazumdar]
      • Add sub-sequence string kernel [lambday]
    • Bugfixes:
      • Compile fixes for upcoming swig3.0
      • Speedup for gaussian process' apply()
      • Improve unit / integration test checks
      • libbmrm uninitialized memory reads
      • libocas uninitialized memory reads
      • Octave 3.8 compile fixes [Orion Poplawski]
      • Fix java modular compile error [Bjoern Esser]
