# recsys_metrics
An efficient PyTorch implementation of the evaluation metrics in recommender systems.
Overview • Installation • How to use • Benchmark • Citation
## Overview

### Highlights
- Efficient (vectorized) implementations over mini-batches
- Standard RecSys metrics: precision, recall, MAP, MRR, HR, NDCG
- Beyond-accuracy metrics, e.g. coverage, diversity, novelty
- All metrics support a top-k argument
## Installation
You can install `recsys_metrics` from PyPI:

```bash
pip install recsys_metrics
```
Or install the latest version from source:

```bash
pip install git+https://github.com/zuoxingdong/recsys_metrics.git@master
```
Note that we support Python 3.7+ only.
## How to use
Let us take Hit Rate (HR) to illustrate how to use this library:
```python
import torch
from recsys_metrics import hit_rate

preds = torch.tensor([
    [.5, .3, .1],
    [.3, .4, .5],
])
target = torch.tensor([
    [0, 0, 1],
    [0, 1, 1],
])

hit_rate(preds, target, k=1, reduction='mean')
# tensor(0.5000)
```
The first example in the batch does not get a hit (its top-1 item is not a relevant item), while the second example does (its top-1 item is a relevant item). Thus, we have a hit rate of 0.5.
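The top-k check described above can be sketched in plain Python. This is a minimal illustration of the metric's semantics, not the library's vectorized implementation, and `hit_rate_at_k` is a hypothetical helper name:

```python
def hit_rate_at_k(preds, target, k):
    # Fraction of examples whose k highest-scoring items
    # contain at least one relevant item.
    hits = 0
    for scores, labels in zip(preds, target):
        # Indices of the k highest-scoring items.
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        if any(labels[i] == 1 for i in topk):
            hits += 1
    return hits / len(preds)

preds = [[.5, .3, .1], [.3, .4, .5]]
target = [[0, 0, 1], [0, 1, 1]]
hit_rate_at_k(preds, target, k=1)  # → 0.5, matching the example above
```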
The other metrics share the same API.
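As another illustration of the shared call pattern, precision@k averages, over the batch, the fraction of relevant items among each example's top-k. A plain-Python sketch (the helper name `precision_at_k` is hypothetical, not the library's function):

```python
def precision_at_k(preds, target, k):
    # Mean over the batch of (# relevant items in top-k) / k.
    total = 0.0
    for scores, labels in zip(preds, target):
        # Indices of the k highest-scoring items.
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        total += sum(labels[i] for i in topk) / k
    return total / len(preds)

preds = [[.5, .3, .1], [.3, .4, .5]]
target = [[0, 0, 1], [0, 1, 1]]
precision_at_k(preds, target, k=2)  # → 0.5 (0/2 for the first example, 2/2 for the second)
```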
## Benchmark
| Metric | Single Example | Mini-Batch |
|---|---|---|
| Precision | | |
| Recall | | |
| MAP | | |
| MRR | | |
| HR | | |
| NDCG | | |
## Citation
This work is inspired by TorchMetrics from the PyTorch Lightning team.
Please use the following BibTeX entry if you want to cite this repository in your publications:

```bibtex
@misc{recsys_metrics,
  author = {Zuo, Xingdong},
  title = {recsys_metrics: An efficient PyTorch implementation of the evaluation metrics in recommender systems.},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/zuoxingdong/recsys_metrics}},
}
```