Fast SHAP value computation for interpreting tree-based models

Overview

FastTreeSHAP

The FastTreeSHAP package is based on the paper Fast TreeSHAP: Accelerating SHAP Value Computation for Trees, published in the NeurIPS 2021 XAI4Debugging Workshop. It is a fast implementation of the TreeSHAP algorithm in the SHAP package.

For a more detailed introduction to the FastTreeSHAP package, please check out this blog post.

Introduction

SHAP (SHapley Additive exPlanation) values are one of the leading tools for interpreting machine learning models. Even though computing SHAP values takes exponential time in general, TreeSHAP takes polynomial time on tree-based models (e.g., decision trees, random forests, gradient boosted trees). While the speedup is significant, TreeSHAP can still dominate the computation time of industry-level machine learning solutions on datasets with millions or more entries.

In the FastTreeSHAP package we implement two new algorithms, FastTreeSHAP v1 and FastTreeSHAP v2, designed to improve the computational efficiency of TreeSHAP for large datasets. We empirically find that FastTreeSHAP v1 is 1.5x faster than TreeSHAP while keeping the memory cost unchanged, and FastTreeSHAP v2 is 2.5x faster than TreeSHAP at the cost of slightly higher memory usage (performance is measured on a single core).

The table below summarizes the time and space complexities of each variant of the TreeSHAP algorithm, where M is the number of samples to be explained, N is the number of features, T is the number of trees, L is the maximum number of leaves in any tree, and D is the maximum depth of any tree. Note that the (theoretical) average running time of FastTreeSHAP v1 is reduced to 25% of that of TreeSHAP.

TreeSHAP Version                  Time Complexity       Space Complexity
TreeSHAP                          O(MTLD^2)             O(D^2 + M)
FastTreeSHAP v1                   O(MTLD^2)             O(D^2 + M)
FastTreeSHAP v2 (general case)    O(TL2^D D + MTLD)     O(L2^D)
FastTreeSHAP v2 (balanced trees)  O(TL^2 D + MTLD)      O(L^2)
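
For intuition, here is a back-of-the-envelope comparison of the leading terms above, using arbitrary illustrative numbers (not a benchmark); it shows how FastTreeSHAP v2 trades a one-off per-tree pre-computation for a cheaper per-sample cost:

# Illustrative operation counts for the complexity table above (not a benchmark).
M, T, L, D = 10_000, 500, 256, 8    # samples, trees, max leaves, max depth; L = 2^D for balanced trees

treeshap_ops = M * T * L * D ** 2                  # O(MTLD^2)
fast_v2_ops = T * L * 2 ** D * D + M * T * L * D   # O(TL2^D D + MTLD)

print(f"TreeSHAP leading term:        {treeshap_ops:.2e}")
print(f"FastTreeSHAP v2 leading term: {fast_v2_ops:.2e}")
print(f"theoretical ratio:            {treeshap_ops / fast_v2_ops:.1f}x")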

Performance with Parallel Computing

Parallel computing is fully enabled in the FastTreeSHAP package. In comparison, parallel computing is not enabled in the SHAP package except for the "shortcut", which calls the TreeSHAP algorithms embedded in the XGBoost, LightGBM, and CatBoost packages specifically for these three models.

The table below compares the execution times of FastTreeSHAP v1 and FastTreeSHAP v2 in the FastTreeSHAP package against the TreeSHAP algorithm (or "shortcut") in the SHAP package on two datasets: Adult (binary classification) and Superconductor (regression). All evaluations were run in parallel on all available cores of an Azure Virtual Machine of size Standard_D8_v3 (8 cores and 32GB memory), except for scikit-learn models in the SHAP package. Each evaluation explained 10,000 samples, and the results were averaged over 3 runs.

Model                  # Trees  Tree Depth  Dataset  SHAP (s)  FastTreeSHAP v1 (s)  Speedup  FastTreeSHAP v2 (s)  Speedup
sklearn random forest  500      8           Adult    318.44*    43.89               7.26      27.06               11.77
sklearn random forest  500      8           Super    466.04     58.28               8.00      36.56               12.75
sklearn random forest  500      12          Adult    2446.12   293.75               8.33     158.93               15.39
sklearn random forest  500      12          Super    5282.52   585.85               9.02     370.09               14.27
XGBoost                500      8           Adult    17.35**    12.31               1.41       6.53                2.66
XGBoost                500      8           Super    35.31      21.09               1.67      13.00                2.72
XGBoost                500      12          Adult    62.19      40.31               1.54      21.34                2.91
XGBoost                500      12          Super    152.23     82.46               1.85      51.47                2.96
LightGBM               500      8           Adult    7.64***     7.20               1.06       3.24                2.36
LightGBM               500      8           Super    8.73        7.11               1.23       3.58                2.44
LightGBM               500      12          Adult    9.95        7.96               1.25       4.02                2.48
LightGBM               500      12          Super    14.02      11.14               1.26       4.81                2.91

* Parallel computing is not enabled in the SHAP package for scikit-learn models, thus the TreeSHAP algorithm runs on a single core.
** The SHAP package calls the TreeSHAP algorithm in the XGBoost package, which by default enables parallel computing on all cores.
*** The SHAP package calls the TreeSHAP algorithm in the LightGBM package, which by default enables parallel computing on all cores.
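
As a rough sketch (not the evaluation script behind the table above), a similar comparison could be reproduced locally along these lines; the synthetic data stands in for Adult/Superconductor, so absolute timings will differ:

import time
import fasttreeshap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the Adult / Superconductor setups above.
X, y = make_regression(n_samples=20_000, n_features=50, random_state=0)
model = RandomForestRegressor(n_estimators=100, max_depth=8, n_jobs=-1, random_state=0).fit(X, y)

X_explain = X[:10_000]   # explain 10,000 samples, as in the evaluations above
for algo in ["v0", "v1", "v2"]:
    explainer = fasttreeshap.TreeExplainer(model, algorithm=algo, n_jobs=-1)
    start = time.perf_counter()
    explainer.shap_values(X_explain)
    print(f"{algo}: {time.perf_counter() - start:.1f}s")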

Installation

The FastTreeSHAP package is available on PyPI and can be installed with pip:

pip install fasttreeshap

Installation troubleshooting:

  • On a MacBook, if the error message ld: library not found for -lomp pops up, run the following command before installation (Reference):
brew install libomp
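
After installation, an optional quick import check confirms the package is available:

import fasttreeshap
print(fasttreeshap.__version__)  # prints the installed version, e.g. 0.1.x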

Usage

The following screenshot shows a typical use case of FastTreeSHAP on Census Income Data. Note that the usage of FastTreeSHAP is exactly the same as the usage of SHAP, except for three additional arguments in the class TreeExplainer: algorithm, n_jobs, and shortcut.

algorithm: This argument specifies the TreeSHAP algorithm used to run FastTreeSHAP. It can take values "v0", "v1", "v2" or "auto", and its default value is "auto":

  • "v0": Original TreeSHAP algorithm in SHAP package.
  • "v1": FastTreeSHAP v1 algorithm proposed in FastTreeSHAP paper.
  • "v2": FastTreeSHAP v2 algorithm proposed in FastTreeSHAP paper.
  • "auto" (default): Automatic selection between "v0", "v1" and "v2" according to the number of samples to be explained and the constraint on the allocated memory. Specifically, "v1" is always preferred to "v0" in any use cases, and "v2" is preferred to "v1" when the number of samples to be explained is sufficiently large (), and the memory constraint is also satisfied (, is the number of threads). More detailed discussion of the above criteria can be found in FastTreeSHAP paper and in Section Notes.

n_jobs: This argument specifies the number of parallel threads used to run FastTreeSHAP. It can take values -1 or a positive integer. Its default value is -1, which means utilizing all available cores in parallel computing.

shortcut: This argument determines whether to directly use the TreeSHAP algorithm embedded in the XGBoost, LightGBM, and CatBoost packages when computing SHAP values for XGBoost, LightGBM, and CatBoost models, and when computing SHAP interaction values for XGBoost models. Its default value is False, which means bypassing the "shortcut" and using the code in the FastTreeSHAP package directly to compute SHAP values for XGBoost, LightGBM, and CatBoost models. Note that shortcut is currently automatically set to True for CatBoost models, as we are still working on the CatBoost component of the FastTreeSHAP package. More details on the usage of the "shortcut" can be found in the notebooks Census Income, Superconductor, and Crop Mapping.
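
As a minimal sketch of how these three arguments fit together (the XGBoost model and synthetic data here are assumed stand-ins for the Census Income example in the screenshot below):

import fasttreeshap
import xgboost as xgb
from sklearn.datasets import make_classification

# Assumed stand-in model; any supported tree ensemble works the same way.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=6).fit(X, y)

explainer = fasttreeshap.TreeExplainer(
    model,
    algorithm="auto",   # pick among "v0", "v1", "v2" based on sample count and memory
    n_jobs=-1,          # use all available cores
    shortcut=False,     # use FastTreeSHAP's own code path instead of XGBoost's TreeSHAP
)
shap_values = explainer.shap_values(X)   # one row of SHAP values per explained sample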

FastTreeSHAP Adult Screenshot1

The code in the following screenshot was run on all available cores of a MacBook Pro (2.4 GHz 8-core Intel Core i9, 32GB memory). We see that both "v1" and "v2" produce exactly the same SHAP value results as "v0". Meanwhile, "v2" has the shortest execution time, followed by "v1", and then "v0". "auto" selects "v2" as the most appropriate algorithm in this use case, as desired. For more detailed comparisons between FastTreeSHAP v1, FastTreeSHAP v2 and the original TreeSHAP, check the notebooks Census Income, Superconductor, and Crop Mapping.

FastTreeSHAP Adult Screenshot2

Notes

  • In the FastTreeSHAP paper, two model interpretation scenarios are discussed: one-time usage (explaining all samples once) and multi-time usage (keeping a stable model in the backend and receiving new scoring data to be explained on a regular basis). The current version of the FastTreeSHAP package only supports the one-time usage scenario, and we are working on extending it to the multi-time usage scenario with parallel computing. Evaluation results in the FastTreeSHAP paper show that FastTreeSHAP v2 can achieve up to 3x faster explanation in the multi-time usage scenario.
  • The implementation of parallel computing is straightforward for FastTreeSHAP v1 and the original TreeSHAP, where a parallel for-loop is built over all samples. The implementation of parallel computing for FastTreeSHAP v2 is slightly more complicated, and two versions have been implemented. Version 1 builds a parallel for-loop over all trees, which requires a per-thread memory allocation (each thread has its own matrices to store both SHAP values and pre-computed values). Version 2 builds two consecutive parallel for-loops, over all trees and over all samples respectively, which requires a memory allocation that grows with the number of trees (the first parallel for-loop stores pre-computed values across all trees). In the FastTreeSHAP package, version 1 is selected for FastTreeSHAP v2 as long as its memory constraint is satisfied. If not, version 2 is selected as an alternative, again subject to its own memory constraint. If neither memory constraint is satisfied, FastTreeSHAP v1, which has a less strict memory constraint, replaces FastTreeSHAP v2. A schematic sketch of this selection logic follows the list.
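
The sketch below is illustrative only, not the package's actual implementation (the real memory checks happen inside TreeExplainer), but it mirrors the fallback order described above:

# Schematic sketch of the fallback logic described above (illustrative only).
def select_v2_variant(parallel_over_trees_fits_memory, parallel_over_samples_fits_memory):
    """Decide which computation path runs when FastTreeSHAP v2 is requested."""
    if parallel_over_trees_fits_memory:        # version 1: one parallel for-loop over trees
        return "v2 (parallel over trees)"
    if parallel_over_samples_fits_memory:      # version 2: loop over trees, then over samples
        return "v2 (parallel over trees, then samples)"
    return "v1"                                # fall back to FastTreeSHAP v1 (looser memory needs)

print(select_v2_variant(False, True))          # -> "v2 (parallel over trees, then samples)"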

Notebooks

The notebooks Census Income, Superconductor, and Crop Mapping contain more detailed comparisons between FastTreeSHAP v1, FastTreeSHAP v2 and the original TreeSHAP in classification and regression problems using scikit-learn, XGBoost and LightGBM.

Citation

Please cite FastTreeSHAP in your publications if it helps your research:

@article{yang2021fast,
  title={Fast TreeSHAP: Accelerating SHAP Value Computation for Trees},
  author={Yang, Jilei},
  journal={arXiv preprint arXiv:2109.09847},
  year={2021}
}

License

Copyright (c) LinkedIn Corporation. All rights reserved. Licensed under the BSD 2-Clause License.

Comments
  • Segmentation fault (core dumped) for shap_values

    Hi,

    I'm trying to apply the TreeExplainer to get shap_values for an XGBoost model on a regression problem with a large dataset. During hyperparameter tuning, it failed due to a segmentation fault at the explainer.shap_values() step for certain hyperparameter sets. I used fasttreeshap=0.1.1 and xgboost=1.4.1 (also tested 1.6.0), and the machine came with CPU "Intel Xeon E5-2640 v4 (20) @ 3.400GHz" and 128GB of memory. The sample code below is a toy script to reproduce the issue using the Superconductor dataset from the example notebook:

    # for debugging
    import faulthandler
    faulthandler.enable()
    
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    import xgboost as xgb
    import fasttreeshap
    
    print(f"XGBoost version: {xgb.__version__}")
    print(f"fasttreeshap version: {fasttreeshap.__version__}")
    
    # source of data: https://archive.ics.uci.edu/ml/datasets/superconductivty+data
    data = pd.read_csv("FastTreeSHAP/data/superconductor_train.csv", engine = "python")
    train, test = train_test_split(data, test_size = 0.5, random_state = 0)
    label_train = train["critical_temp"]
    label_test = test["critical_temp"]
    train = train.iloc[:, :-1]
    test = test.iloc[:, :-1]
    
    print("train XGBoost model")
    xgb_model = xgb.XGBRegressor(
        max_depth = 100, n_estimators = 200, learning_rate = 0.1, n_jobs = -1, alpha = 0.12, random_state = 0)
    xgb_model.fit(train, label_train)
    
    print("run TreeExplainer()")
    shap_explainer = fasttreeshap.TreeExplainer(xgb_model)
    
    print("run shap_values()")
    shap_values = shap_explainer.shap_values(train)
    

    The time report of program execution also showed that the "Maximum resident set size" was only about 32GB.

    ~$ /usr/bin/time -v python segfault.py 
    XGBoost version: 1.4.1
    fasttreeshap version: 0.1.1
    train XGBoost model
    run TreeExplainer()
    run shap_values()
    Fatal Python error: Segmentation fault
    
    Thread 0x00007ff2c2793740 (most recent call first):
      File "~/.local/lib/python3.8/site-packages/fasttreeshap/explainers/_tree.py", line 459 in shap_values
      File "segfault.py", line 27 in <module>
    Segmentation fault (core dumped)
    
    Command terminated by signal 11
            Command being timed: "python segfault.py"
            User time (seconds): 333.65
            System time (seconds): 27.79
            Percent of CPU this job got: 797%
            Elapsed (wall clock) time (h:mm:ss or m:ss): 0:45.30
            Average shared text size (kbytes): 0
            Average unshared data size (kbytes): 0
            Average stack size (kbytes): 0
            Average total size (kbytes): 0
            Maximum resident set size (kbytes): 33753096
            Average resident set size (kbytes): 0
            Major (requiring I/O) page faults: 0
            Minor (reclaiming a frame) page faults: 8188488
            Voluntary context switches: 3048
            Involuntary context switches: 3089
            Swaps: 0
            File system inputs: 0
            File system outputs: 0
            Socket messages sent: 0
            Socket messages received: 0
            Signals delivered: 0
            Page size (bytes): 4096
            Exit status: 0
    

    In some cases (including the example above), forcing TreeExplainer(algorithm="v1") did help, which suggests the issue may only occur with "v2" (or with "auto" when it passes the _memory_check()). However, "v1" would occasionally raise a separate check_additivity issue which remains unsolved in the original algorithm.

    Alternatively, passing approximate=True to explainer.shap_values() works, but it raises consistency concerns for the reproducibility of our studies...
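
    For concreteness, the two workarounds above would look roughly like this (sketch only, reusing the variables from the repro script):

    # Workaround 1: force FastTreeSHAP v1 instead of "auto"/"v2"
    shap_explainer_v1 = fasttreeshap.TreeExplainer(xgb_model, algorithm="v1")
    shap_values_v1 = shap_explainer_v1.shap_values(train)

    # Workaround 2: approximate values (faster, but only approximate)
    shap_values_approx = shap_explainer.shap_values(train, approximate=True)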

    In this case, could you help me to debug with this issue?

    Thank you so much!

    opened by ShaunFChen 8
  • TreeSHAP does not compute exact SHAP values on correlated features

    As highlighted in https://github.com/slundberg/shap/issues/2345, it seems that TreeSHAP does not compute the expected SHAP values.

    The following notebook reproduces the problem with the FastTreeSHAP implementation: https://nbviewer.org/gist/glemaitre/9a30dd3a704675164b84d9bf7128882e

    Note that it is not only a problem in the implementation but rather a problem in Algorithm 1 of the original TreeSHAP paper (i.e., the tree traversal).

    opened by glemaitre 6
  • Numpy<1.22 requirement, could we upgrade it?

    First of all, thank you for this amazing project, it has really sped up my team's shap execution.

    My problem arises because the numpy<1.22 restriction causes conflicts with other packages we use.

    In your setup.py code it says that this restriction is due to using numba, but in their own setup.py the minimum numpy version is 1.18 and there is no maximum version requirement. In fact, I've been able to upgrade numpy's version after installing fasttreeshap in a clean environment and have had no problem at all during my test executions (using numpy==1.23.5). This is how I set up the environment on Windows 10:

    $ python -m venv venv_fasttreeshap
    $ venv_fasttreeshap\Scripts\activate
    $ pip install fasttreeshap
    $ pip install --upgrade numpy==1.23.5
    

    It throws the following error, but pip is finally able to install it anyway:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fasttreeshap 0.1.2 requires numpy<1.22, but you have numpy 1.23.5 which is incompatible.
    

    Do you think you could upgrade the numpy version in the install_requires section of setup.py, so that it's easier to combine fasttreeshap with other packages?

    Thank you!

    opened by CarlaFernandez 3
  • Catboost bug

    CatBoost produces a 'TreeEnsemble' object has no attribute 'num_nodes' error with this code. By the way, do you support a background dataset parameter, like in shap for "interventional" vs "tree_path_dependent"? Because if your underlying code uses the "interventional" method, this might be related to this bug: https://github.com/slundberg/shap/issues/2557

    from catboost import CatBoostRegressor
    import fasttreeshap
    import shap  # needed for shap.datasets.boston() and shap.plots.waterfall below
    
    X, y = shap.datasets.boston()
    
    model = CatBoostRegressor(task_type="CPU",logging_level="Silent").fit(X, y)
    explainer = fasttreeshap.TreeExplainer(model, algorithm="v2", n_jobs=-1)
    shap_values = explainer(X)
    
    # visualize the first prediction's explanation
    shap.plots.waterfall(shap_values[25])
    
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Input In [131], in <cell line: 13>()
         10 # explain the model's predictions using SHAP
         11 # (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
         12 explainer = fasttreeshap.TreeExplainer(model, algorithm="v2", n_jobs=-1)
    ---> 13 shap_values = explainer(X)
         15 # visualize the first prediction's explanation
         16 shap.plots.waterfall(shap_values[25])
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:256, in Tree.__call__(self, X, y, interactions, check_additivity)
        253     feature_names = getattr(self, "data_feature_names", None)
        255 if not interactions:
    --> 256     v = self.shap_values(X, y=y, from_call=True, check_additivity=check_additivity, approximate=self.approximate)
        257 else:
        258     assert not self.approximate, "Approximate computation not yet supported for interaction effects!"
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:379, in Tree.shap_values(self, X, y, tree_limit, approximate, check_additivity, from_call)
        376 algorithm = self.algorithm
        377 if algorithm == "v2":
        378     # check if memory constraint is satisfied (check Section Notes in README.md for justifications of memory check conditions in function _memory_check)
    --> 379     memory_check_1, memory_check_2 = self._memory_check(X)
        380     if memory_check_1:
        381         algorithm = "v2_1"
    
    File ~\miniconda3\envs\Master\lib\site-packages\fasttreeshap\explainers\_tree.py:483, in Tree._memory_check(self, X)
        482 def _memory_check(self, X):
    --> 483     max_leaves = (max(self.model.num_nodes) + 1) / 2
        484     max_combinations = 2**self.model.max_depth
        485     phi_dim = X.shape[0] * (X.shape[1] + 1) * self.model.num_outputs
    
    AttributeError: 'TreeEnsemble' object has no attribute 'num_nodes'
    
    opened by nilslacroix 3
  • Conda package

    It would be nice to add a FastTreeSHAP package to conda-forge https://conda-forge.org/ (in addition to pypi)

    You can use grayskull https://github.com/conda-incubator/grayskull to generate a boilerplate for the conda recipe.

    opened by candalfigomoro 1
  • Catboost improvement negligible?

    Using a CatBoostRegressor() with your provided notebooks and the Superconductor dataset, I did not see any improvement in computing the SHAP values. Do you have any data on the speedup for CatBoost, or did I make a mistake? I did not find any information on this in the paper or blog post.

    Also how does fasttreeshap handle GPU support?

    Code used:

    cat = CatBoostRegressor(iterations=2000, task_type="CPU")  # , devices="0-1"
    cat.fit(train, label_train)

    run_fasttreeshap(
        model = cat, sample = test, interactions = False, algorithm_version = "v0", n_jobs = n_jobs,
        num_round = num_round, num_sample = num_sample, shortcut = False)

    run_fasttreeshap(
        model = cat, sample = test, interactions = False, algorithm_version = "v1", n_jobs = n_jobs,
        num_round = num_round, num_sample = num_sample, shortcut = False)
    
    opened by nilslacroix 1
  • [BUG] LightGBM Booster 'objective' Parameter Membership with `shortcut=True`

    Note: This corresponds to an issue on slundberg/shap.

    Description: A LightGBM Booster does not necessarily have 'objective' in its params attribute dictionary. The documented default behavior in LightGBM is to assume regression, without auto-setting the 'objective' key in the Booster.params dictionary if 'objective' is missing from params (the dictionary passed into the Training API function train).
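
    For context, a minimal hypothetical sketch of how such a Booster can arise (the linked notebook contains the actual reproduction):

    import lightgbm as lgb
    import numpy as np

    # Train a Booster via the low-level Training API without passing 'objective'.
    X = np.random.rand(200, 5)
    y = np.random.rand(200)
    booster = lgb.train({}, lgb.Dataset(X, y), num_boost_round=10)

    # Depending on the LightGBM version, 'objective' may be absent from booster.params,
    # which is the situation described above when shortcut=True consults it.
    print("objective" in booster.params)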

    Reproduction: The following is a link to a notebook containing details for error reproduction and a view of the stack trace: Notebook for Reproduction

    Thanks for your responsiveness!

    Nandish Gupta, Data Science Engineer, SolasAI

    opened by ngupta20 0
  • SHAP Values Change and Additivity Breaks on NumPy Upgrade

    Description

    Following the 0.1.3 release, the additivity of my logit-linked XGBoost local tree SHAP values broke. I found that pinning my numpy version back to either 1.21.6 or 1.22.4 temporarily solved the issue. I also tested every numpy version from 1.23.1 to 1.23.5 and found that all numpy versions 1.23.* were associated with the value changes.

    Brainstorming

    It makes me wonder if there is any numpy or numpy-adjacent usage in FastTreeSHAP that could be reimplemented to be compatible with new versions. I would be glad to contribute if that's the case and someone could help restrict the scope of the problem (if I can't restrict it myself quickly enough). As we all know from #14, that would be highly preferable to pinning numpy versions. My instinct tells me this issue may link to an existing NumPy update note or issue... haven't looked too deeply yet.

    Good Release

    In any case, I still think the decision to relax the numpy constraint is the right one and anyone who runs into the same behavior can just pin the version themselves in the short term.

    Environment Info

    Partial result of pip list

    Package                       Version
    ----------------------------- -----------
    fasttreeshap                  0.1.3
    pandas                        1.5.2
    numba                         0.56.4
    numpy                         1.23.5
    setuptools                    65.6.3
    shap                          0.41.0
    wheel                         0.38.4
    xgboost                       1.7.1
    
    

    uname --all

    Linux ip-**-**-*-*** 5.4.0-1088-aws #96~18.04.1-Ubuntu SMP Mon Oct 17 02:57:48 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    lscpu

    Architecture:        x86_64
    CPU op-mode(s):      32-bit, 64-bit
    Byte Order:          Little Endian
    CPU(s):              8
    On-line CPU(s) list: 0-7
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    NUMA node(s):        1
    Vendor ID:           GenuineIntel
    CPU family:          6
    Model:               85
    Model name:          Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
    Stepping:            7
    CPU MHz:             3102.640
    BogoMIPS:            5000.00
    Hypervisor vendor:   KVM
    Virtualization type: full
    L1d cache:           32K
    L1i cache:           32K
    L2 cache:            1024K
    L3 cache:            36608K
    NUMA node0 CPU(s):   0-7
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
    

    Thanks for your responsiveness to the community!

    Nandish Gupta, Senior AI Engineer, SolasAI

    opened by ngupta20 1
  • Cannot build fasttreeshap in linux environment

    We would like to use fasttreeshap to calculate explainability values for our machine learning model. To run our model we create a Linux container with a venv for the model. All the requirements are specified in a file and installed in the venv with pip install -r requirements.txt. Our model depends on numpy==1.21.4, and when we try to install fasttreeshap we run into this issue:

    ERROR: Cannot install oldest-supported-numpy==0.12, oldest-supported-numpy==0.14, oldest-supported-numpy==0.15, oldest-supported-numpy==2022.1.30, oldest-supported-numpy==2022.3.27, oldest-supported-numpy==2022.4.10, oldest-supported-numpy==2022.4.18, oldest-supported-numpy==2022.4.8, oldest-supported-numpy==2022.5.27, oldest-supported-numpy==2022.5.28 and oldest-supported-numpy==2022.8.16 because these package versions have conflicting dependencies.
    The conflict is caused by:
    oldest-supported-numpy 2022.8.16 depends on numpy==1.17.3; python_version == "3.8" and platform_machine not in "arm64|aarch64|s390x|loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.5.28 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.5.27 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.18 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.10 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.4.8 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.3.27 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 2022.1.30 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.15 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.14 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "s390x" and platform_python_implementation != "PyPy"
    oldest-supported-numpy 0.12 depends on numpy==1.17.3; python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_python_implementation != "PyPy"
    

    I can volunteer time and resources to add a wheel file for Linux. Would you be interested in distributing a wheel file created by us?

    opened by n-karahan-afacan 1
  • Plotting example

    Thanks for making FastTreeSHAP! I'm excited to use it. I want to generate some classic SHAP plots. Can you provide an example, maybe in a Jupyter notebook, of how to use your plotting library to visualize the SHAP values?

    opened by jpfeil 3
  • Cannot install FastTreeSHAP on Linux

    Hi team, I have both Windows and Linux laptops. I was able to install the FastTreeSHAP package on Windows but not on Linux. This is the error message that I got on Linux: ERROR: Could not build wheels for fasttreeshap, which is required to install pyproject.toml-based projects

    opened by LongUEH 2
Releases (v0.1.3)
  • v0.1.3 (Nov 29, 2022)

  • v0.1.2 (May 20, 2022)

    PyPI: https://pypi.org/project/fasttreeshap/0.1.2/
    Authors: @jlyang1990

    Added argument memory_tolerance to enable more flexible memory control, and fixed some bugs

  • v0.1.1 (Mar 22, 2022)

    PyPI: https://pypi.org/project/fasttreeshap/0.1.1/
    Authors: @jlyang1990

    Blog post for this release: https://engineering.linkedin.com/blog/2022/fasttreeshap--accelerating-shap-value-computation-for-trees
    Paper for this release: https://arxiv.org/abs/2109.09847
