A library of metrics for evaluating recommender systems

Overview

recmetrics

A Python library of evaluation metrics and diagnostic tools for recommender systems.

**This library is actively maintained. My goal is to continue to develop this as the main source of recommender metrics in Python. Please submit issues, bug reports, and feature requests, or contribute directly through a pull request. If I do not respond, you can ping me directly at [email protected]**

| Description | Command |
| --- | --- |
| Installation | `pip install recmetrics` |
| Notebook Demo | `make run_demo` |
| Test | `make test` |

Full documentation coming soon. In the interim, the Python notebook in this repo, example.ipynb, contains examples of these plots and metrics in action using the MovieLens 20M Dataset. You can also view my Medium article.

This library is an open source project. The goal is to create a go-to source for metrics related to recommender systems. I have begun by adding metrics and plots I found useful during my career as a Data Scientist at a retail company, and encourage the community to contribute. If you would like to see a new metric in this package, find a bug, or have suggestions for improvement, please contribute!

Long Tail Plot

recmetrics.long_tail_plot()

The Long Tail plot is used to explore popularity patterns in user-item interaction data. Typically, a small number of items will make up most of the volume of interactions, and this is referred to as the "head". The "long tail" typically consists of most items, but makes up a small percentage of interaction volume.

Long Tail Plot

The items in the "long tail" usually do not have enough interactions to be recommended accurately by user-based recommender systems such as collaborative filtering, due to the inherent popularity bias in these models and data sparsity. Many recommender systems require a certain level of sparsity to train. A good recommender must balance sparsity requirements with popularity bias.
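As a rough illustration (not part of recmetrics), the "head" could be identified from an interaction log like this; the `item_id` column name and the 50% volume cutoff are arbitrary assumptions:

```python
import pandas as pd

# Hypothetical interaction log: one row per user-item interaction
interactions = pd.DataFrame({
    "item_id": ["A", "A", "A", "A", "B", "B", "C", "D"]
})

# Count interactions per item, most popular first
volume = interactions["item_id"].value_counts()

# "Head" = smallest set of items covering 50% of interaction volume
cumulative_share = volume.cumsum() / volume.sum()
head = volume[cumulative_share <= 0.5].index.tolist()
print(head)  # → ['A']
```

Everything outside `head` is the long tail; `recmetrics.long_tail_plot()` visualizes the same cumulative-volume idea directly from a DataFrame.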

Mar@k and Map@k

recmetrics.mark()

recmetrics.mark_plot()

recmetrics.mapk_plot()

Mean Average Recall at K (Mar@k) measures recall over the top k recommendations, averaged across cutoffs 1 through k. Mar@k considers the order of recommendations, and penalizes correct recommendations that appear lower in the list. Map@k and Mar@k are ideal for evaluating an ordered list of recommendations. There is a fantastic implementation of Mean Average Precision at K (Map@k) available here, so I have not included it in this repo.

Mar@k

Map@k and Mar@k metrics suffer from popularity bias. If a model works well on popular items, the majority of recommendations will be correct, and Mar@k and Map@k can appear to be high while the model may not be making useful or personalized recommendations.
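For intuition, here is a minimal sketch of Mar@k in plain Python. It is a simplified stand-in and not necessarily identical to `recmetrics.mark()`:

```python
def recall_at_k(recommended, relevant, k):
    """Recall restricted to the top-k recommendations for one user."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)

def mean_avg_recall_at_k(recommended_lists, relevant_lists, k):
    """Average recall@1..k per user, then average over all users (Mar@k)."""
    scores = []
    for recs, rel in zip(recommended_lists, relevant_lists):
        avg = sum(recall_at_k(recs, rel, i) for i in range(1, k + 1)) / k
        scores.append(avg)
    return sum(scores) / len(scores)
```

Because recall is averaged over every cutoff from 1 to k, a correct item in position 1 contributes to all k terms, while the same item in position k contributes to only one; this is how the metric rewards ordering.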

Coverage

recmetrics.prediction_coverage()

recmetrics.catalog_coverage()

recmetrics.coverage_plot()

Coverage is the percent of items that the recommender is able to recommend. It is referred to as prediction coverage and is given by the following formula.

Coverage Equation

where 'I' is the number of unique items the model recommends in the test data, and 'N' is the total number of unique items in the training data. Catalog coverage is the rate of distinct items recommended over a period of time to the user; for this purpose the catalog coverage function also takes a parameter 'k', the number of observed recommendation lists. In essence, both metrics quantify the proportion of items that the system is able to work with.
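A minimal sketch of the prediction coverage calculation (assuming list-of-lists recommendations; not the library's exact implementation):

```python
def prediction_coverage(predicted, catalog):
    """Percent of catalog items that appear in at least one recommendation list."""
    recommended = {item for user_list in predicted for item in user_list}
    # Intersect with the catalog so out-of-catalog recommendations
    # cannot push the value above 100%
    return round(len(recommended & set(catalog)) / len(set(catalog)) * 100, 2)
```

Intersecting with the catalog keeps the value at or below 100% even when items outside the training catalog slip into the recommendations.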

Coverage Plot

Novelty

recmetrics.novelty()

Novelty measures the capacity of a recommender system to propose novel and unexpected items that a user is unlikely to know about already. It uses the self-information of the recommended items: it calculates the mean self-information per top-N recommended list and averages over all users.

Novelty Equation

where |U| is the number of users, count(i) is the number of users who have consumed item i, and N is the length of the recommended list.
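A sketch of that calculation (the argument names are hypothetical, and this simplification may differ from `recmetrics.novelty()`):

```python
import math

def novelty(predicted, interactions_per_item, num_users, top_n):
    """Mean self-information of recommended items, averaged over users.

    interactions_per_item: dict mapping item id -> number of users who consumed it.
    """
    per_user = []
    for user_list in predicted:
        # Self-information: rare items carry more "surprise" (-log2 of popularity)
        self_info = sum(
            -math.log2(interactions_per_item[i] / num_users) for i in user_list
        )
        per_user.append(self_info / top_n)
    return sum(per_user) / len(per_user)
```

An item consumed by every user contributes zero novelty; an item consumed by half the users contributes one bit.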

Personalization

recmetrics.personalization()

Personalization is the dissimilarity between users' lists of recommendations. A high score indicates that users' recommendations are different from one another; a low personalization score indicates that users' recommendations are very similar.

For example, if two users have recommendations lists [A,B,C,D] and [A,B,C,Y], the personalization can be calculated as:

Personalization Example
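That worked example can be checked by hand. In this sketch (the item universe and binary encoding are assumptions), each list becomes a binary vector over the items, and personalization is one minus the cosine similarity between the vectors:

```python
import numpy as np

# Two hypothetical users over the item universe {A, B, C, D, Y}
u1 = np.array([1, 1, 1, 1, 0])  # recommended [A, B, C, D]
u2 = np.array([1, 1, 1, 0, 1])  # recommended [A, B, C, Y]

# Cosine similarity between the two binary recommendation vectors
cosine = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
personalization = 1 - cosine
print(personalization)  # 0.25: three of the four items overlap
```

With more than two users, the library averages the cosine similarity over all user pairs before subtracting from one.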

Intra-list Similarity

recmetrics.intra_list_similarity()

Intra-list similarity uses a feature matrix to calculate the cosine similarity between the items in a list of recommendations. The feature matrix is indexed by the item id and includes one-hot-encoded features. If a recommender system is recommending lists of very similar items, the intra-list similarity will be high.

Intra-list Similarity Equation

Intra-list Similarity Example
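A small sketch of the idea (the dict-based feature matrix is an assumption; the library works from a pandas DataFrame indexed by item id):

```python
import numpy as np

def intra_list_similarity(rec_list, feature_matrix):
    """Mean pairwise cosine similarity among the items in one recommendation list.

    feature_matrix: dict mapping item id -> one-hot feature vector (np.ndarray).
    """
    vecs = [feature_matrix[i] for i in rec_list]
    sims = []
    for a in range(len(vecs)):
        for b in range(a + 1, len(vecs)):
            v, w = vecs[a], vecs[b]
            sims.append(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
    return float(np.mean(sims))
```

A list of items sharing identical one-hot features scores 1.0; a list of items with disjoint features scores 0.0.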

MSE and RMSE

recmetrics.mse()
recmetrics.rmse()

Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) are used to evaluate the accuracy of predicted values yhat, such as ratings, compared to the true value, y. These can also be used to evaluate the reconstruction of a ratings matrix.

MSE Equation

RMSE Equation
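Both formulas are straightforward to sketch in plain Python (a simplified illustration, not the library's implementation):

```python
import math

def mse(y_true, y_pred):
    """Mean of squared differences between true and predicted ratings."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Square root of the MSE, on the same scale as the ratings themselves."""
    return math.sqrt(mse(y_true, y_pred))
```

RMSE is often preferred for reporting because it is expressed in the original rating units (e.g. stars), while MSE penalizes large errors more legibly during optimization.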

Predicted Class Probability Distribution Plots

recmetrics.class_separation_plot()

This is a plot of the distribution of the predicted class probabilities from a classification model. The plot is typically used to visualize how well a model is able to distinguish between two classes, and can assist a Data Scientist in picking the optimal decision threshold for classifying observations as class 1 (0.5 is usually the default threshold for this method). The colors of the distribution plots represent true class 0 and class 1, and everything to the right of the decision threshold is classified as class 1.

binary class probs

This plot can also be used to visualize the recommendation scores in two ways.

In this example, an item is considered class 1 if it is rated more than 3 stars, and class 0 otherwise. This example shows the performance of a model that recommends an item when the predicted 5-star rating is greater than 3 (plotted as a vertical decision threshold line). The plot shows that the recommender model will perform better if only items with a predicted rating of 3.5 stars or greater are recommended.

ratings scores

The raw predicted 5-star ratings for all recommended movies can be visualized with this plot to find the optimal predicted rating score for recommending a movie. This plot also visualizes how well the model is able to distinguish between each rating value.

ratings distributions

ROC and AUC

recmetrics.roc_plot()

The Receiver Operating Characteristic (ROC) plot is used to visualize the trade-off between true positives and false positives for binary classification. The Area Under the Curve (AUC) is sometimes used as an evaluation metric.

ROC

Recommender Precision and Recall

recmetrics.recommender_precision()
recmetrics.recommender_recall()

Recommender precision and recall use all recommended items over all users to calculate traditional precision and recall. A recommended item that was actually interacted with in the test data is considered an accurate prediction, and a recommended item that is not interacted with, or received a poor interaction value, can be considered an inaccurate recommendation. The user can assign these values based on their judgment.
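A minimal sketch of how recommendations pooled across users could be scored (a simplification; how "poor interaction values" are excluded is left to the caller):

```python
def recommender_precision(predicted, actual):
    """Fraction of all recommended items (pooled over users) that were interacted with."""
    hits = sum(len(set(recs) & set(acts)) for recs, acts in zip(predicted, actual))
    total_recommended = sum(len(recs) for recs in predicted)
    return hits / total_recommended

def recommender_recall(predicted, actual):
    """Fraction of all interacted items (pooled over users) that were recommended."""
    hits = sum(len(set(recs) & set(acts)) for recs, acts in zip(predicted, actual))
    total_actual = sum(len(acts) for acts in actual)
    return hits / total_actual
```

Unlike Map@k and Mar@k, these pooled metrics ignore the order of the recommendations entirely.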

Precision and Recall Curve

recmetrics.precision_recall_plot()

The Precision and Recall plot is used to visualize the trade-off between precision and recall for one class in a classification problem.

PandRcurve

Confusion Matrix

recmetrics.make_confusion_matrix()

Traditional confusion matrix used to evaluate false positive and false negative trade-offs.

Confusion Matrix

Comments
  • Unable to import recmetrics

I am working on a recommendation engine using collaborative filtering and wanted to try the metrics provided by recmetrics. Here is the error I get when trying to import the package (version 0.0.12).

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-309-301854677c00> in <module>
    ----> 1 import recmetrics
          2 
          3 recmetrics.long_tail_plot()
    
    ~/.virtualenvs/py3/lib/python3.6/site-packages/recmetrics/__init__.py in <module>
    ----> 1 from .plots import long_tail_plot, mark_plot, mapk_plot, coverage_plot, class_separation_plot, roc_plot, precision_recall_plot
          2 from .metrics import mark, coverage, personalization, intra_list_similarity, rmse, mse, make_confusion_matrix, recommender_precision, recommender_recall
    
    ~/.virtualenvs/py3/lib/python3.6/site-packages/recmetrics/plots.py in <module>
          5 from matplotlib.lines import Line2D
          6 from sklearn.metrics import roc_curve, auc, precision_recall_curve, average_precision_score
    ----> 7 from sklearn.utils.fixes import signature
          8 
          9 
    
    ImportError: cannot import name 'signature'
    
    bug 
    opened by kleekaai 3
  • Unused Requirement

    Surprise is listed as a module dependency but is not used in metrics or plots. Might be worth removing the dependency - especially since it requires additional built tools (Visual C++) and thus may throw unnecessary errors.

    opened by VedantVarshney 2
  • module 'recmetrics' has no attribute 'prediction_coverage'

Hi there, I am trying to run the example notebook, but I am getting "module 'recmetrics' has no attribute 'prediction_coverage'" and "AttributeError: module 'recmetrics' has no attribute 'catalog_coverage'".

    Any pointers or suggestions?

    Thanks in advance

    opened by rhkaz 2
  • TypeError on class_separation_plot of example notebook

    I attached the error below

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-30-05160122655c> in <module>
    ----> 1 recmetrics.class_separation_plot(pred_df, n_bins=45, class0_label="True class 0", class1_label="True class 1")
    
    TypeError: class_separation_plot() got an unexpected keyword argument 'class0_label'
    
    opened by itsoum 2
  • License

    This is missing a license. You can use https://tldrlegal.com/ for an overview. The top-3 are MIT, BSD and GPL (see my analysis).

    The simplest way to add it is in the setup.py as license='MIT' or similar.

    opened by MartinThoma 2
  • Is surprise really required?

    First of all: this package looks great! It's exactly what I need for some small projects, so thanks for putting it out there!

    I'm looking at the setup.py, and it lists surprise as a requirement. I don't see it imported anywhere in the package though, so I'm wondering if it can be removed? I get that it's useful for the example notebook, but that wouldn't be included in the pip install anyway. (I might suggest making surprise an extras_require if you want to keep it in there for demo purposes.)

    If you're open to some packaging changes along these lines, I'd be happy to send a PR your way.

    help wanted good first issue 
    opened by bmcfee 2
  • Fix 35/optimize personalization calculation

    This relates to #35. As the cosine similarity metric is symmetric, we don't need the upper triangle indices to calculate the mean of the matrix. Just subtract the diagonal (all ones) and divide by the number of distances (without the diagonal). This way the performance is increased and is noticeable on matrices over 50k x 50k. All tests passed. Performance before and after the modification (skipping make_rec_matrix): performance

    opened by ibuda 1
  • Personalization metric calculation optimization

    Hi @statisticianinstilettos,

    kudos for a great tool! I would like to propose an optimization for calculating Personalization Metric here:

    #get indicies for upper right triangle w/o diagonal
    upper_right = np.triu_indices(similarity.shape[0], k=1)
    
    #calculate average similarity
    personalization = np.mean(similarity[upper_right])
    return 1-personalization
    

    There is no need to get the upper triangle indices, as the cosine similarity is a symmetric distance. I will follow up with a pull request for this.

    opened by ibuda 1
  • RecMetrics Revisions

    Description

    This pull request is designed to introduce reproducibility and maintainability across the RecMetrics library.

    • Test coverage for metrics and plots scripts.
    • Create Docker images for RecMetrics development and notebook demo
    • Additional Makefile commands
      • build - Create RecMetrics Docker image (Development)
      • build_demo - Build RecMetrics Docker image (Demo)
      • clean - Remove files from repo
      • download_movielens - Download MovieLens data to repo
      • run_demo - Run RecMetrics Docker image (Demo)
      • test - Test RecMetrics Docker image
    • Type hinting for all functions
    • Support for Poetry
      • Used within both Dockerfiles
    • Fix metrics_plot function
      • Now displays output in Jupyter notebook

    Fixes # (issue)

    • #2 - Is Surprise required?
    • #14 - TypeError on class_separation_plot of example notebook
    • #23 - ImportError: cannot import name 'signature'
    • #25 - module 'recmetrics' has no attribute 'prediction_coverage'

    Type of change

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [x] This change requires a documentation update

    How Has This Been Tested?

    All tests now run within a Docker container. Current test coverage is in the following scripts:

    • test_metrics.py
    • test_plots.py

    Notes

    • Building a new package for PyPI has not been tested.
    • Surprise is included in the Poetry files, and is installed for both Docker images.
    • Tests within test_plots.py assume visualizations are correct; per the references, visualizations can be difficult to test.
    • In the future, there's potential to automate tests with GitHub Actions.

    References

    opened by gregwchase 1
  • fix setup requires error

Add plotly to install_requires in setup.py.

    • error
    pip3 install git+https://github.com/statisticianinstilettos/recmetrics.git
    
    python3
    >>> import recmetrics
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/uni/Library/Python/3.7/lib/python/site-packages/recmetrics/__init__.py", line 1, in <module>
        from .plots import long_tail_plot, mark_plot, mapk_plot, coverage_plot, class_separation_plot, roc_plot, precision_recall_plot
      File "/Users/uni/Library/Python/3.7/lib/python/site-packages/recmetrics/plots.py", line 7, in <module>
        import plotly.graph_objects as go
    ModuleNotFoundError: No module named 'plotly'
    
    opened by uni-3 1
  • Minor fixes on example jupyter notebook

1. I added brackets to the Python print calls.
    2. I changed the first parameter in the coverage function calls because it was wrong.
    3. If the prediction and catalog coverage changes are accepted, we should also change the function names in the calls.
    opened by itsoum 1
  • Implement MAP@k

    MAP@k implementation linked in the documentation (https://github.com/benhamner/Metrics) has not been updated for 7 years and has bugs in MAP@k implementation (e.g. https://github.com/benhamner/Metrics/issues/51, https://github.com/benhamner/Metrics/issues/57). It would be really useful to have MAP@k implementation in recmetrics. Would it be possible to implement it? It would be almost identical to the existing mark() function.

    opened by j-adamczyk 1
  • Integration with Deep Learning Based Frameworks

Is there any way to integrate this with recommender system frameworks that use more deep learning-based algorithms, such as PyTorch? Scikit-learn with Surprise doesn't really support such algorithms.

    opened by agb2k 0
  • Coverage over 100%

In the example below, the measured coverage exceeds 100%, which does not make sense.

    This happens when items that are not listed on the catalog are recommended.

> from recmetrics import prediction_coverage
    > prediction_coverage([['x', 'y'], ['w', 'z']], catalog=['w', 'x', 'y'])
    133.33
    
    opened by vascosmota 2
  • personalization() has explosive memory requirements due to pairwise comparison

    On my system (16gb ram), a list of 10k recommendations will run. A list of 50k will crash out. I'd like to try to understand the personalization score across my entire hypothetical customer base 250k+.

    Is there a way to chunk the scipy.sparse.csr_matrix and iteratively calculate the cosine similarity to avoid holding the whole thing in memory?

    opened by ahgraber 0
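A possible low-memory workaround for the issue above, sketched here as an illustration rather than as recmetrics code: because cosine similarity is symmetric and every user vector can be row-normalized up front, the sum of all pairwise similarities equals the squared norm of the sum of the unit vectors, so the mean similarity comes out in O(users × items) memory without ever materializing the n × n matrix:

```python
import numpy as np
from scipy import sparse

def personalization_lowmem(rec_matrix):
    """1 - mean pairwise cosine similarity, without the full n x n matrix.

    rec_matrix: user-by-item binary recommendation matrix (dense or sparse);
    assumes every user has at least one recommendation.
    """
    X = sparse.csr_matrix(rec_matrix, dtype=float)
    # Row-normalize so each user's vector has unit length
    norms = np.asarray(np.sqrt(X.multiply(X).sum(axis=1))).ravel()
    X = sparse.diags(1.0 / norms) @ X
    n = X.shape[0]
    # Sum of ALL pairwise cosines (incl. diagonal) = ||sum of unit rows||^2
    v = np.asarray(X.sum(axis=0)).ravel()
    total = v @ v
    mean_similarity = (total - n) / (n * (n - 1))  # drop the diagonal of ones
    return 1 - mean_similarity
```

For the 250k-user case this avoids the 250k × 250k similarity matrix entirely; only one dense vector of item-vocabulary length is materialized.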
  • Installation issues

Hi! I have been trying to install recmetrics with "pip install recmetrics", but keep getting the error "ERROR: Could not build wheels for scikit-learn, which is required to install pyproject.toml-based projects". I'm using Windows, Python version 3.9.7, pip fully upgraded. pip freeze shows that scikit-learn is actually already installed: "scikit-learn==0.24.2". I've also tried installing with pip from git, with the same result. Any ideas what I could still try?

    opened by Erin59 5
Releases(v0.1.5)
Owner
Claire Longo
Full Stack Data Scientist/Machine Learning Engineer
Graph Neural Networks for Recommender Systems

This repository contains code to train and test GNN models for recommendation, mainly using the Deep Graph Library (DGL).

null 217 Jan 4, 2023
Collaborative variational bandwidth auto-encoder (VBAE) for recommender systems.

Collaborative Variational Bandwidth Auto-encoder The codes are associated with the following paper: Collaborative Variational Bandwidth Auto-encoder f

Yaochen Zhu 14 Dec 11, 2022
QRec: A Python Framework for quick implementation of recommender systems (TensorFlow Based)

QRec is a Python framework for recommender systems (Supported by Python 3.7.4 and Tensorflow 1.14+) in which a number of influential and newly state-of-the-art recommendation models are implemented. QRec has a lightweight architecture and provides user-friendly interfaces. It can facilitate model implementation and evaluation.

Yu 1.4k Dec 27, 2022
Plex-recommender - Get movie recommendations based on your current PleX library

plex-recommender Description: Get movie/tv recommendations based on your current

null 5 Jul 19, 2022
Deep recommender models using PyTorch.

Spotlight uses PyTorch to build both deep and shallow recommender models. By providing both a slew of building blocks for loss functions (various poin

Maciej Kula 2.8k Dec 29, 2022
Recommender System Papers

Included Conferences: SIGIR 2020, SIGKDD 2020, RecSys 2020, CIKM 2020, AAAI 2021, WSDM 2021, WWW 2021

RUCAIBox 704 Jan 6, 2023
RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems

RecSim NG, a probabilistic platform for multi-agent recommender systems simulation. RecSimNG is a scalable, modular, differentiable simulator implemented in Edward2 and TensorFlow. It offers: a powerful, general probabilistic programming language for agent-behavior specification;

Google Research 110 Dec 16, 2022
E-Commerce recommender demo with real-time data and a graph database

E-Commerce recommender demo. This is a simple stream setup that uses Memgraph to ingest real-time data from a simulated online store. Data is str

g-despot 3 Feb 23, 2022
Movie Recommender System

Movie-Recommender-System Movie-Recommender-System is a web application using which a user can select his/her watched movie from list and system will r

null 1 Jul 14, 2022
Mutual Fund Recommender System. Tailor for fund transactions.

Explainable Mutual Fund Recommendation Data Please see 'DATA_DESCRIPTION.md' for mode detail. Recommender System Methods Baseline Collabarative Fiilte

JHJu 2 May 19, 2022
Movies/TV Recommender

recommender Movies/TV Recommender. Recommends Movies, TV Shows, Actors, Directors, Writers. Setup Create file API_KEY and paste your TMDB API key in i

Aviem Zur 3 Apr 22, 2022
6002project-rl - An implemention of offline RL on recommender system

An implemention of offline RL on recommender system @author: misajie @update: 20

Tzay Lee 3 May 24, 2022
Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems.

Persine, the Persona Engine Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems. It has a simple interface a

Jonathan Soma 87 Nov 29, 2022
fastFM: A Library for Factorization Machines

Citing fastFM The library fastFM is an academic project. The time and resources spent developing fastFM are therefore justified by the number of citat

null 1k Dec 24, 2022
A Library for Field-aware Factorization Machines

Table of Contents ================= - What is LIBFFM - Overfitting and Early Stopping - Installation - Data Format - Command Line Usage - Examples -

null 1.6k Dec 5, 2022
Code for Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data? (SDM 2022)

Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data? (SDM 2022) We consider how a user of a web servi

joisino 20 Aug 21, 2022
A library for preparing, training, and evaluating scalable deep learning hybrid recommender systems using PyTorch.

collie_recs Collie is a library for preparing, training, and evaluating implicit deep learning hybrid recommender systems, named after the Border Coll

ShopRunner 97 Jan 3, 2023
A library for preparing, training, and evaluating scalable deep learning hybrid recommender systems using PyTorch.

collie Collie is a library for preparing, training, and evaluating implicit deep learning hybrid recommender systems, named after the Border Collie do

ShopRunner 96 Dec 29, 2022
An efficient PyTorch implementation of the evaluation metrics in recommender systems.

recsys_metrics An efficient PyTorch implementation of the evaluation metrics in recommender systems. Overview • Installation • How to use • Benchmark

Xingdong Zuo 12 Dec 2, 2022