A library of scikit-learn-compatible categorical variable encoders

Overview

Categorical Encoding Methods


A set of scikit-learn-style transformers for encoding categorical variables into numeric representations using a variety of techniques.

Important Links

Documentation: http://contrib.scikit-learn.org/category_encoders/

Encoding Methods

Unsupervised:

  • Backward Difference Contrast [2][3]
  • BaseN [6]
  • Binary [5]
  • Count [10]
  • Hashing [1]
  • Helmert Contrast [2][3]
  • Ordinal [2][3]
  • One-Hot [2][3]
  • Polynomial Contrast [2][3]
  • Sum Contrast [2][3]

Supervised:

  • CatBoost [11]
  • Generalized Linear Mixed Model [12]
  • James-Stein Estimator [9]
  • LeaveOneOut [4]
  • M-estimator [7]
  • Target Encoding [7]
  • Weight of Evidence [8]

Installation

The package requires: numpy, statsmodels, and scipy.

To install the package, execute:

$ python setup.py install

or

pip install category_encoders

or

conda install -c conda-forge category_encoders

To install the development version, you may use:

pip install --upgrade git+https://github.com/scikit-learn-contrib/category_encoders
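
To verify the installation, importing the package should work (a quick check, assuming the package exposes a __version__ attribute, as recent releases do):

import category_encoders as ce
print(ce.__version__)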

Usage

All of the encoders are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. Supported input formats include numpy arrays and pandas dataframes. If the cols parameter isn't passed, all columns with object or pandas categorical data type will be encoded. Please see the docs for transformer-specific configuration options.
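
For example, an encoder can be dropped into a standard sklearn Pipeline like any other transformer. A minimal sketch (the toy data and column names are invented for illustration):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from category_encoders import OrdinalEncoder

# toy frame with one categorical and one numeric column
X = pd.DataFrame({'color': ['red', 'green', 'blue', 'green'],
                  'size': [1.0, 2.0, 3.0, 4.0]})
y = [0, 1, 0, 1]

# with cols unset, only the 'color' column (object dtype) gets encoded
pipe = Pipeline([('encode', OrdinalEncoder()),
                 ('model', LogisticRegression())])
pipe.fit(X, y)
print(pipe.predict(X))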

Examples

There are two types of encoders: unsupervised and supervised. An unsupervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

# use binary encoding to encode two categorical features
enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X)

# transform the dataset
numeric_dataset = enc.transform(X)

And a supervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y_train = bunch.target[0:250]
y_test = bunch.target[250:506]
X_train = pd.DataFrame(bunch.data[0:250], columns=bunch.feature_names)
X_test = pd.DataFrame(bunch.data[250:506], columns=bunch.feature_names)

# use target encoding to encode two categorical features
enc = TargetEncoder(cols=['CHAS', 'RAD'])

# transform the datasets
training_numeric_dataset = enc.fit_transform(X_train, y_train)
testing_numeric_dataset = enc.transform(X_test)

For the transformation of the training data with the supervised methods, you should use the fit_transform() method instead of fit().transform(), because these two do not have to generate the same result. The difference can be observed with the LeaveOneOut encoder, which performs a nested cross-validation on the training data in fit_transform() (to decrease over-fitting of the downstream model) but uses all of the training data for scoring with transform() (to get as accurate estimates as possible).
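
A minimal sketch of the difference (illustrative data; the exact values depend on the encoder's settings):

import pandas as pd
from category_encoders import LeaveOneOutEncoder

X = pd.DataFrame({'cat': ['a', 'a', 'b', 'b']})
y = pd.Series([1, 0, 1, 1])

enc = LeaveOneOutEncoder(cols=['cat'])

# leave-one-out means for the training rows (each row is excluded from its own mean)
print(enc.fit_transform(X, y))

# plain per-category means computed from all training rows
print(enc.fit(X, y).transform(X))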

Furthermore, you may benefit from the following wrappers (a usage sketch follows the list):

  • PolynomialWrapper, which extends supervised encoders to support polynomial targets
  • NestedCVWrapper, which helps to prevent overfitting
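
A rough usage sketch of the two wrappers (argument names follow the current wrapper module, but treat this as illustrative rather than a reference):

import pandas as pd
from category_encoders import TargetEncoder
from category_encoders.wrapper import NestedCVWrapper, PolynomialWrapper

X = pd.DataFrame({'cat': ['a', 'b', 'a', 'c', 'b', 'a']})
y_multiclass = pd.Series(['x', 'y', 'z', 'x', 'y', 'z'])
y_binary = pd.Series([0, 1, 0, 1, 1, 0])

# one-vs-rest target encoding for a multiclass (polynomial) target
poly = PolynomialWrapper(TargetEncoder(cols=['cat']))
poly.fit(X, y_multiclass)
X_poly = poly.transform(X)

# out-of-fold target encoding to reduce target leakage / overfitting
nested = NestedCVWrapper(TargetEncoder(cols=['cat']), cv=3)
X_nested = nested.fit_transform(X, y_binary)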

Additional examples and benchmarks can be found in the examples directory.

Contributing

Category encoders is under active development; if you'd like to be involved, we'd love to have you. Check out the CONTRIBUTING.md file or open an issue on the GitHub project to get started.

References

  1. Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.
  2. Contrast Coding Systems for categorical variables. UCLA: Statistical Consulting Group. From https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/.
  3. Gregory Carey (2003). Coding Categorical Variables. From http://psych.colorado.edu/~carey/Courses/PSYC5741/handouts/Coding%20Categorical%20Variables%202006-03-03.pdf
  4. Strategies to encode categorical variables with many categories. From https://www.kaggle.com/c/caterpillar-tube-pricing/discussion/15748#143154.
  5. Beyond One-Hot: an exploration of categorical variables. From http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/
  6. BaseN Encoding and Grid Search in categorical variables. From http://www.willmcginnis.com/2016/12/18/basen-encoding-grid-search-category_encoders/
  7. Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. SIGKDD Explor. Newsl. 3, 1. From http://dx.doi.org/10.1145/507533.507538
  8. Weight of Evidence (WOE) and Information Value Explained. From https://www.listendata.com/2015/03/weight-of-evidence-woe-and-information.html
  9. Empirical Bayes for multiple sample sizes. From http://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/
  10. Simple Count or Frequency Encoding. From https://www.datacamp.com/community/tutorials/encoding-methodologies
  11. Transforming categorical features to numerical features. From https://tech.yandex.com/catboost/doc/dg/concepts/algorithm-main-stages_cat-to-numberic-docpage/
  12. Andrew Gelman and Jennifer Hill (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. From https://faculty.psau.edu.sa/filedownload/doc-12-pdf-a1997d0d31f84d13c1cdc44ac39a8f2c-original.pdf
Comments
  • Add Multi-Process Supported HashingEncoder

    Add Multi-Process Supported HashingEncoder

    Add a multi-process supported HashingEncoder called NHashingEncoder. By using multiple processes, it is several times faster than HashingEncoder. On an i5-8259U, encoding 1,000,000 samples with HashingEncoder takes 720+ seconds, while NHashingEncoder with the parameter "max_process=4" only takes 230+ seconds. On a 16x3.2GHz CPU + 64G memory Linux machine, encoding 20 million samples with HashingEncoder takes over 4 hours, while NHashingEncoder with the parameter "max_process=8" only takes 20 minutes.
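
    A hypothetical usage sketch of the encoder as proposed here (the NHashingEncoder name and the max_process parameter are taken from this PR description; the merged API may differ):

    import pandas as pd
    from category_encoders import NHashingEncoder  # hypothetical import path for the proposed class

    # illustrative frame; the speed-up mainly matters at millions of rows
    X = pd.DataFrame({'cat': ['a', 'b', 'c', 'd'] * 250000})

    # max_process controls the number of worker processes used for hashing
    enc = NHashingEncoder(max_process=4)
    X_hashed = enc.fit_transform(X)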

    opened by liushulun 30
  • Ordinal encoder support new handle unknown handle missing

    Ordinal encoder support new handle unknown handle missing

    Here is the first pass at making Ordinal Encoder support the fields handle_unknown and handle_missing as described at https://github.com/scikit-learn-contrib/categorical-encoding/issues/92.

    Let's go through the fields and their logic.

    handle_unknown

    1. value
      • unknown values map to -1 at transform time
    2. error
      • throw a ValueError if new categories are encountered at transform time
    3. return_nan
      • at transform time, return nan

    OK, now handle_missing has a configuration for each setting depending on whether nan is present at fit time.

    handle_missing

    1. value
      • nan present at fit time -> nan is treated as a category
      • nan not present at fit time -> transform returns -2
    2. return_nan
      • fit adds a -2 mapping and at transform time -2 is returned as nan
    3. error
      • at fit or transform time, throw an error
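
    A small sketch of the behaviour described above (assuming the handle_unknown/handle_missing values land as proposed; recent releases follow roughly this scheme):

    import numpy as np
    import pandas as pd
    from category_encoders import OrdinalEncoder

    train = pd.DataFrame({'cat': ['a', 'b', 'b']})
    test = pd.DataFrame({'cat': ['a', 'c', np.nan]})

    enc = OrdinalEncoder(cols=['cat'], handle_unknown='value', handle_missing='value')
    enc.fit(train)
    # 'c' was never seen at fit time -> -1; nan was not seen at fit time -> -2 (per the proposal)
    print(enc.transform(test))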

    OK, for a complete implementation every encoder will have to be changed. What do we want to do to avoid gigantic pull requests? Have a long-lived feature branch?

    OK, some thoughts:

    1. I am going to implement cucumber tests for handle_unknown and handle_missing because trying to keep it all straight in my head is difficult.
    2. I need to go through inverse transform and check it against every new setting.
    3. My implementation of return_nan makes processing in the downstream encoders more difficult because we are mapping nan to -2.
    4. The relationship between value and indicator for the multi-column encoders and the output of the ordinal encoder currently confuses me. I am going to sit down and write it all out so I know what should lead to what.
    5. Check the changes to the test_ordinal_dist test in test_ordinal. Why was None not being treated as a category?

    Tell me what you think and I can get started on the other encoders.

    opened by JohnnyC08 23
  • Fix binary encoder for columntransformer

    Fix binary encoder for columntransformer

    I discovered that when using the BinaryEncoder in a sklearn.ColumnTransformer, the passed params are lost.

    This is because the encoder gets instantiated twice in a ColumnTransformer. Currently, params are not registered to self in BinaryEncoder.__init__(), so they are lost when the ColumnTransformer is put to work.

    Disclaimer: I was able to correctly binary encode in a local debug session. However, as there are so many tests failing on the upstream master currently, it was hard to find out whether my solution has an undesired impact.

    Also, I am confused by ordinal.py L323-L326. Is this a bug? It seems to correctly encode both with the -2 and np.nan...

    opened by datarian 21
  • Quantile encoder

    Quantile encoder

    opened by cmougan 19
  • 1.4.0 Release Organization

    1.4.0 Release Organization

    Hey all, been away from the project for a bit, but I'm going back through all of the issues and PRs worked on recently (looks like a bunch of good progress!). Special thanks to @janmotl for all of the work as primary maintainer over the past months.

    Our last release was 1.3.0 on October 14th. Since then y'all have:

    • Sped up the TargetEncoder and LeaveOneOutEncoder w/ vectorization (significantly)
    • Added support for Categorical types in many encoders
    • Implemented get_feature_names in remaining transformers
    • Improved testing coverage and quality
    • Solved edge cases in repeated column names for some transformers
    • Added support for transforming pandas Series as well as DataFrames and numpy Arrays
    • Fixed inverse transform for many encoders
    • Lots of smaller performance enhancements and code cleanups

    Which I think is quite a full set of features to constitute a release. I will be opening a separate issue to discuss how we as a community can improve our release cycle, but for now will be going through open issues and tagging anything that should be included before the v1.4.0 release. Any input on what should or shouldn't be completed prior to release is welcome.

    Thank you all for the work and support this year, and Happy Holidays.

    Release 
    opened by wdm0006 17
  • Behavior of OneHotEncoder handle_unknown option

    Behavior of OneHotEncoder handle_unknown option

    I'm trying to understand the behavior (and intent) of the handle_unknown option for OneHotEncoder (and by extension OrdinalEncoder). The docs imply that this should control NaN handling, but the examples below seem to indicate otherwise (category_encoders==1.2.8).

    In [2]: import pandas as pd
       ...: import numpy as np
       ...: from category_encoders import OneHotEncoder
       ...: 
    
    In [3]: X = pd.DataFrame({'a': ['foo', 'bar', 'bar'],
       ...:                   'b': ['qux', np.nan, 'foo']})
       ...: X
       ...: 
    Out[3]: 
         a    b
    0  foo  qux
    1  bar  NaN
    2  bar  foo
    
    In [4]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[4]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [5]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='impute', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[5]: 
       a_foo  a_bar  a_-1  b_qux  b_nan  b_foo  b_-1
    0      1      0     0      1      0      0     0
    1      0      1     0      0      1      0     0
    2      0      1     0      0      0      1     0
    
    In [6]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='error', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[6]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [7]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=False, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[7]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    

    In particular, 'error' and 'ignore' give the same behavior, treating missing observations as another category. 'impute' adds constant zero-valued columns but also treats missing observations as another category. Naively, I would've expected behavior similar to pd.get_dummies(X, dummy_na={True|False}), with handle_unknown=ignore corresponding to dummy_na=False.
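
    For reference, this is roughly what the pandas call mentioned above produces for the same frame (output reconstructed by hand, not taken from the session above):

    pd.get_dummies(X, dummy_na=False)
    #    a_bar  a_foo  b_foo  b_qux
    # 0      0      1      0      1
    # 1      1      0      0      0
    # 2      1      0      1      0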

    bug 
    opened by multiloc 17
  • Get feature names

    Get feature names

    Implemented get_feature_names for HashingEncoder, OneHotEncoder and OrdinalEncoder.

    For my purposes, these work now. Not fully tested. It's more of a proposal for a concept. If liked, I will gladly implement for the rest of the encoders, incorporating any feedback.

    opened by datarian 16
  • Question: Difference between TargetEncoder and LeaveOneOutEncoder

    Question: Difference between TargetEncoder and LeaveOneOutEncoder

    It's not really clear to me what the difference between TargetEncoder and LeaveOneOutEncoder is, as both encode using the target with leave-one-out. Can you maybe clarify this here and in the docs? Does either work for multi-class classification?

    question 
    opened by amueller 13
  • Implement Target Encoding with Hierarchical Structure Smoothing

    Implement Target Encoding with Hierarchical Structure Smoothing

    From section 4 of the paper cited in TargetEncoding.

    Instead of choosing the prior probability of the target as the null hypothesis, it is reasonable to replace it with the estimated probability at the next higher level of aggregation in the attribute hierarchy

    In other words, if we have a single row with zipcode 54321 but 100 rows each for zipcodes 54322 and 54323, we could use the mean of the zipcode-level-4 group 5432X as the smoothing term for zipcode 54321, instead of the mean over all zipcodes XXXXX.
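
    A rough pandas sketch of the idea (this is not library code; the column names, data and blending weight are purely illustrative):

    import pandas as pd

    # toy data: one row with zipcode 54321, several rows for neighbouring zipcodes
    df = pd.DataFrame({'zipcode': ['54321', '54322', '54322', '54323', '54323'],
                       'target':  [1, 0, 1, 1, 0]})
    df['zip4'] = df['zipcode'].str[:4]                   # next level of the hierarchy, 5432X

    zip_mean = df.groupby('zipcode')['target'].mean()   # very noisy for the lone 54321 row
    zip4_mean = df.groupby('zip4')['target'].mean()     # prior taken from the 5432X group

    # shrink each zipcode mean towards its prefix mean instead of the global mean
    w = 0.3                                              # illustrative weight only
    prior = pd.Series(zip_mean.index.str[:4], index=zip_mean.index).map(zip4_mean)
    smoothed = w * zip_mean + (1 - w) * prior
    print(smoothed)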

    This would be a really nice additional piece of functionality to add as an encoder.

    enhancement 
    opened by JoshuaC3 13
  • In the ordinal encoder go ahead and update the existing column instea…

    In the ordinal encoder go ahead and update the existing column instea…

    To fix https://github.com/scikit-learn-contrib/categorical-encoding/issues/100

    The issue was arising because _tmp columns were being appended to the end of the data frame as part of the transform process.

    First, we noticed that the transform process was to append a temporary column, drop the existing column, and rename the temporary column to the existing column name.

    So, we reduced those steps to a single step that updates the existing column in place using our mapping, which preserves the column order. I wasn't sure why the above-mentioned transform method had that many steps, and a single update seems to keep the tests passing.
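
    Roughly, the change amounts to the following (a simplified pandas sketch, not the actual encoder code):

    import pandas as pd

    df = pd.DataFrame({'city': ['rome', 'paris', 'rome'], 'n': [1, 2, 3]})
    mapping = {'rome': 1, 'paris': 2}

    # before: append a temporary column, drop the original, rename the temporary one
    # (the renamed column ends up at the end of the frame, changing the column order)
    tmp = df.copy()
    tmp['city_tmp'] = tmp['city'].map(mapping)
    tmp = tmp.drop(columns=['city']).rename(columns={'city_tmp': 'city'})

    # after: update the existing column in place, preserving the column order
    df['city'] = df['city'].map(mapping)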

    @janmotl I also noticed that in Travis the python3 step seems to be running Python 2.7 instead of Python 3. In install.sh I see mentions of a conda create, and in the CI logs I see mention of a virtualenv being set up, which I don't see mentioned in the project. Perhaps the Travis cache needs to be cleared?

    opened by JohnnyC08 13
  • Differing dimensions for training and test

    Differing dimensions for training and test

    Hi,

    I would like to fit encodings on my training set and then using this fitted encoding to transform both the training and the test set:

    import category_encoders as ce
    
    train = ['Brunswick East', 'Fitzroy', 'Williamstown', 'Newport', 'Balwyn North', 'Doncaster', 'Melbourne', 'Albert Park', 'Bentleigh', 'Northcote']
    test = ['Fitzroy North', 'Fitzroy', 'Richmond', 'Surrey Hills', 'Blackburn', 'Port Melbourne', 'Footscray', 'Yarraville', 'Carnegie', 'Surrey Hills']
    
    encoder = ce.HelmertEncoder()
    encoder.fit(train)
    
    train_t = encoder.transform(train)
    test_t = encoder.transform(test)
    
    print(train_t.shape)
    >> (10, 10)
    print(test_t.shape)
    >> (10, 2)
    

    The problem is that the dimensions do not match. What am I doing wrong, or how can I fix this issue?

    Best regards, Felix

    opened by FelixNeutatz 12
  • ValueError: `X` and `y` both have indexes, but they do not match.

    ValueError: `X` and `y` both have indexes, but they do not match.

    Expected Behavior

    When running any of the category encoders, e.g. TargetEncoder(), within a pipeline through permutation_test_score(), it errors out with the above message. The error occurs in the convert_inputs() function, which checks if any(X.index != y.index): before raising the error.

    Actual Behavior

    The error is not correct and shouldn't occur. When I run the same check on my input X (DataFrame) and y (Series), the error doesn't occur.

    In fact, when I load the input data, after splitting it into X and y and label encoding y, I explicitly convert y into a pd.Series and assign it the X.index, so the two indexes are in fact identical.

    If, in contrast, I do not convert the label-encoded y into a pd.Series and leave it as an ndarray, then this error doesn't occur!

    Also, note that the same pipeline, when fitted directly with the same X (DataFrame) and y (Series), works absolutely fine.
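
    For what it's worth, the check itself can be triggered simply by X and y carrying different index values (an illustrative snippet, not the reporter's data):

    import pandas as pd

    X = pd.DataFrame({'f': [1, 2, 3]}, index=[10, 11, 12])
    y = pd.Series([0, 1, 0])                  # default RangeIndex 0..2

    print(any(X.index != y.index))            # True -> convert_inputs() raises the ValueError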

    Steps to Reproduce the Problem

    See an example of my pipeline below.

    1. Create an arbitrary pipeline as follows:

    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import permutation_test_score
    from sklearn.linear_model import SGDClassifier
    from category_encoders import TargetEncoder

    test_pipe = Pipeline([('enc', TargetEncoder()), ('clf', SGDClassifier(loss='log_loss'))])

    2. Run:

    score, perm_scores, pvalue = permutation_test_score(test_pipe, X, y)
    

    Specifications

    • Version: 2.5.1.post0
    • Platform: Python 3.10.8
    • Subsystem: Pandas 1.5.1
    opened by RNarayan73 1
  • OneHotEncoder: handle_missing = 'ignore' would be very useful

    OneHotEncoder: handle_missing = 'ignore' would be very useful

    Expected Behavior

    It would be nice to be able to ignore missing values instead of creating new columns with an "_nan" suffix, just like is possible with pandas. What do you think?

    Actual Behavior

    Doesn't exist in the current latest version (according to my knowledge).

    Steps to Reproduce

    import pandas as pd
    import numpy as np
    from category_encoders import OneHotEncoder
    
    encoder = OneHotEncoder(
        cols=None,  # all non-numeric
        return_df=True,
        handle_missing="value",  # would be nice to have the option 'ignore'
        use_cat_names=True,
    )
    df = pd.DataFrame(
        {"this": ["GREEN", "GREEN", "YELLOW", "YELLOW"], "that": ["A", "B", "A", np.nan]}
    )
    
    encoder.fit_transform(df) # unwanted result
    pd.get_dummies(df, dummy_na=False) # wanted result
    

    Specifications

    • Version: 2.5.1.post0
    opened by woodly0 0
  • Intercept in Contrast Coding Schemes

    Intercept in Contrast Coding Schemes

    Expected Behavior

    The constant (all values 1) intercept column should not be added when applying contrast coding schemes (i.e. backward difference, sum, polynomial and helmert coding)

    I don't think this intercept column is needed. If you fit a supervised learning model, it will probably help to remove the intercept column. I think it is there because when fitting linear models with statsmodels you have to add the intercept explicitly.
    However, I don't like that the output of an encoder would then depend on whether the intercept column is already there or not; e.g. if I first apply encoder A on column A and then encoder B on column B, the intercept column of B overwrites A's intercept column instead of adding a new column. Also, if I have (for some reason) a column called intercept that is not constant, it would get overwritten.

    Any opinion? Am I missing something? Is the intercept necessary?

    Actual Behavior

    A constant column with all values 1 is added

    Steps to Reproduce the Problem

    Run transform on any fitted contrast coding encoder, e.g.

            train = ['A', 'B', 'C']
            encoder = encoders.BackwardDifferenceEncoder(handle_unknown='value', handle_missing='value')
            encoder.fit_transform(train)
    
    opened by PaulWestenthanner 3
  • No need to check if # of dimensions of testing set align with training set in target_encoder

    No need to check if # of dimensions of testing set align with training set in target_encoder

    https://github.com/scikit-learn-contrib/category_encoders/blob/6a13c14919d56fed8177a173d4b3b82c5ea2fef5/category_encoders/utils.py#L322-L323

    For the function _check_transform_inputs(), I do not want it to report an error when the number of dimensions of the testing set doesn't align with the training set. However, by default it has to align. Considering that the purpose of the target encoder is to transform the designated columns and nothing else, logically we don't have to validate the dimension alignment.

    opened by hongG1997EQ 1
  • Memory increase of WOEEncoder for newer category_encoders version

    Memory increase of WOEEncoder for newer category_encoders version

    Memory increase of WOEEncoder for category_encoders version >=2.0.0

    Hi, I noticed another memory issue with WOEEncoder. I have submitted the same bug before in #335; the difference between the two bugs is the encoder methods and datasets used. In order to distinguish between the two encoder APIs, I resubmitted a new bug report.

    Expected Behavior

    Similar memory usage

    Actual Behavior

    According to the experiment results, when the category_encoders version is higher than 2.0.0, the memory usage of weight_enc.fit(train[weight_encode], train['target']) increases from 58MB to 206MB.

    Memory (MB) | Version
    -- | --
    209 | 2.3.0
    209 | 2.2.2
    209 | 2.1.0
    209 | 2.0.0
    58 | 1.3.0

    Steps to Reproduce the Problem

    Step 1: Download the dataset

    train.zip

    Step 2: install category_encoders

    pip install category_encoders==#version#
    

    Step 3: change category_encoders version and save the memory usage

    import numpy as np 
    import pandas as pd 
    train = pd.read_csv('train.csv')
    test = pd.read_csv('test.csv')
    columns = [x for x in train.columns if x != 'target']
    object_col_label = ['bin_0','bin_1','bin_2','bin_3','bin_4']
    one_hot_encode = ['nom_0', 'nom_1', 'nom_2', 'nom_3', 'nom_4']
    target_encode = ['nom_5', 'nom_6', 'nom_7', 'nom_8', 'nom_9']
    weight_encode = target_encode + ['ord_4', 'ord_5' ,'ord_3'] + one_hot_encode + object_col_label
    import category_encoders as ce
    weight_enc = ce.woe.WOEEncoder(cols=weight_encode)
    import tracemalloc
    tracemalloc.start()
    weight_enc.fit(train[weight_encode], train['target'])
    current3, peak3 = tracemalloc.get_traced_memory()
    print("Get_dummies memory usage is {",current3 /1024/1024,"}MB; Peak memory was :{",peak3 / 1024/1024,"}MB")
    

    Specifications

    • Version: 2.3.0, 2.2.2, 2.1.0, 2.0.0, 1.3.0
    • Platform: ubuntu 16.4
    • OS: Ubuntu
    • CPU: Intel(R) Core(TM) i9-9900K CPU
    • GPU: TITAN V

    opened by Piecer-plc 1
  • Poor performance of OneHotEncoder for category_encoders version >=2.0.0

    Poor performance of OneHotEncoder for category_encoders version >=2.0.0

    Expected Behavior

    Similar memory usage for the different category_encoders versions or better performance for higher category_encoders versions.

    Actual Behavior

    According to the experiment results, when the category_encoders version is higher than 2.0.0, the performance of the model is worse.

    Memory (MB) | Version
    -- | --
    896 | 2.3.0
    896 | 2.2.2
    896 | 2.1.0
    896 | 2.0.0
    288 | 1.3.0

    Steps to Reproduce the Problem

    Step 1: download the above dataset train & test (63MB)

    Step 2: install category_encoders

    pip install category_encoders==#version#
    

    Step 3: change category_encoders version and save the memory usage

    import numpy as np 
    import pandas as pd
    import category_encoders as ce
    import tracemalloc
    df_train = pd.read_csv("train.csv")
    df_test = pd.read_csv("test.csv")
    df_train.drop("id", axis=1, inplace=True) 
    df_test.drop("id", axis=1, inplace=True) 
    cat_labels = [f"cat{i}" for i in range(10)]
    
    tracemalloc.start()
    onehot_encoder = ce.one_hot.OneHotEncoder() 
    onehot_encoder.fit(pd.concat([df_train[cat_labels], df_test[cat_labels]], axis=0))
    train_ohe = onehot_encoder.transform(df_train[cat_labels]) 
    test_ohe = onehot_encoder.transform(df_test[cat_labels]) 
    
    current3, peak3 = tracemalloc.get_traced_memory()
    print("Get_dummies memory usage is {",current3 /1024/1024,"}MB; Peak memory was :{",peak3 / 1024/1024,"}MB")
    

    Specifications

    • Version: 2.3.0, 2.2.2, 2.1.0, 2.0.0, 1.3.0
    • Platform: ubuntu 16.4
    • OS : Ubuntu
    • CPU : Intel(R) Core(TM) i9-9900K CPU
    • GPU : TITAN V
    opened by DSOTM-pf 3