Cornac
Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models leveraging auxiliary data (e.g., item descriptive text and images, social networks, etc.). Cornac enables fast experiments and straightforward implementations of new models. It is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).
Quick Links
Website | Documentation | Tutorials | Examples | Models | Datasets | Paper | Preferred.AI
Installation
Currently, Cornac supports Python 3. There are several ways to install it:
- From PyPI (you may need a C++ compiler):

  pip3 install cornac
- From Anaconda:

  conda install cornac -c conda-forge
- From the GitHub source (for latest updates):

  pip3 install Cython
  git clone https://github.com/PreferredAI/cornac.git
  cd cornac
  python3 setup.py install
Note:
Additional dependencies required by models are listed here.
Some algorithm implementations use OpenMP to support multi-threading. For macOS users, in order to run those algorithms efficiently, you might need to install gcc from Homebrew to have an OpenMP compiler:

  brew install gcc
  brew link gcc
Getting started: your first Cornac experiment
Flow of an Experiment in Cornac
import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP
# load the built-in MovieLens 100K and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)
# initialize models, here we are comparing: Biased MF, PMF, and BPR
models = [
MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123),
PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123),
BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123),
]
# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]
# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()
Output:
|     | MAE    | RMSE   | AUC    | MAP    | NDCG@10 | Precision@10 | Recall@10 | Train (s) | Test (s) |
|-----|--------|--------|--------|--------|---------|--------------|-----------|-----------|----------|
| MF  | 0.7430 | 0.8998 | 0.7445 | 0.0407 | 0.0479  | 0.0437       | 0.0352    | 0.13      | 1.57     |
| PMF | 0.7534 | 0.9138 | 0.7744 | 0.0491 | 0.0617  | 0.0533       | 0.0479    | 2.18      | 1.64     |
| BPR | N/A    | N/A    | 0.8695 | 0.0753 | 0.0975  | 0.0727       | 0.0891    | 3.74      | 1.49     |
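The N/A entries for BPR are expected: BPR is a pairwise ranking model and does not produce rating predictions, so rating metrics such as MAE and RMSE do not apply to it. To give an intuition for one of the ranking metrics in the table, here is a simplified, standalone sketch of NDCG@k with binary relevance; it is an illustration only, not Cornac's implementation:

```python
import math

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """Simplified NDCG@k with binary relevance (illustration, not Cornac's code)."""
    # discounted cumulative gain over the top-k recommended items
    dcg = sum(
        1.0 / math.log2(rank + 2)  # ranks are 0-based, hence rank + 2
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    # ideal DCG: all relevant items placed at the top of the list
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# a recommended list where the 1st and 3rd items are relevant
print(ndcg_at_k(["a", "b", "c", "d"], {"a", "c"}, k=10))  # ≈ 0.92
```

A perfectly ordered list scores 1.0; pushing relevant items further down the ranking lowers the score logarithmically.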
For more details, please take a look at our examples as well as tutorials.
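The RatioSplit evaluation method in the example above holds out 20% of the feedback for testing, with a seed for reproducibility. The core idea can be sketched in plain Python (a simplified illustration with toy data; Cornac's RatioSplit additionally handles rating thresholds and user/item indexing):

```python
import random

def ratio_split(data, test_size=0.2, seed=123):
    """Shuffle (user, item, rating) tuples and hold out a test fraction.

    Simplified sketch only, not Cornac's implementation.
    """
    rng = random.Random(seed)          # seeded RNG for reproducible splits
    shuffled = data[:]                 # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# toy feedback: (user, item, rating)
feedback = [(u, i, r) for u in range(5) for i, r in [("m1", 4.0), ("m2", 3.0)]]
train, test = ratio_split(feedback, test_size=0.2, seed=123)
print(len(train), len(test))  # 8 2
```

Fixing the seed, as in the example (seed=123), ensures all compared models are trained and evaluated on exactly the same split.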
Models
The recommender models supported by Cornac are listed below. Why don't you join us and help grow the list?
Support
Contributions at any level of the library are welcome. If you intend to contribute, please:
- Fork the Cornac repository to your own account.
- Make changes and create pull requests.
You can also post bug reports and feature requests in GitHub issues.
Citation
If you use Cornac in a scientific publication, we would appreciate citations to the following paper:
Cornac: A Comparative Framework for Multimodal Recommender Systems, Salah et al., JMLR 21, pp. 1-5, 2020.
Bibtex entry:
@article{cornac,
author = {Aghiles Salah and Quoc-Tuan Truong and Hady W. Lauw},
title = {Cornac: A Comparative Framework for Multimodal Recommender Systems},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {95},
pages = {1-5},
url = {http://jmlr.org/papers/v21/19-805.html}
}