Factorization Machines in Python

Overview

This is a Python implementation of Factorization Machines [1]. It uses stochastic gradient descent with adaptive regularization as the learning method, which adapts the regularization automatically while the model parameters are trained. See [2] for details. From libfm.org: "Factorization machines (FM) are a generic approach that allows to mimic most factorization models by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain."

[1] Steffen Rendle (2012): Factorization Machines with libFM, in ACM Trans. Intell. Syst. Technol., 3(3), May.
[2] Steffen Rendle (2012): Learning Recommender Systems with Adaptive Regularization, in WSDM 2012: 133-142.
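
For reference, the second-order FM model from [1] is

$$\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j$$

where $w_0$ is a global bias, the $w_i$ are per-feature weights, and each feature $i$ has a $k$-dimensional latent vector $\mathbf{v}_i$ ($k$ corresponds to the num_factors parameter below), so pairwise interactions are estimated through inner products of latent vectors rather than through independent parameters.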

Installation

pip install git+https://github.com/coreylynch/pyFM

Dependencies

  • numpy
  • sklearn

Training Representation

The easiest way to use this class is to represent your training data as a list of standard Python dict objects, where each dict maps the instance's categorical and real-valued variables to their values. Then use sklearn's DictVectorizer to convert them to a design matrix with a one-of-K or "one-hot" encoding.

Here's a toy example:

from pyfm import pylibfm
from sklearn.feature_extraction import DictVectorizer
import numpy as np

# Each instance is a dict; string values become one-hot columns,
# numeric values (like age) are kept as single real-valued columns
train = [
    {"user": "1", "item": "5", "age": 19},
    {"user": "2", "item": "43", "age": 33},
    {"user": "3", "item": "20", "age": 55},
    {"user": "4", "item": "10", "age": 20},
]
v = DictVectorizer()
X = v.fit_transform(train)
print(X.toarray())
[[ 19.   0.   0.   0.   1.   1.   0.   0.   0.]
 [ 33.   0.   0.   1.   0.   0.   1.   0.   0.]
 [ 55.   0.   1.   0.   0.   0.   0.   1.   0.]
 [ 20.   1.   0.   0.   0.   0.   0.   0.   1.]]
y = np.repeat(1.0, X.shape[0])
fm = pylibfm.FM()
fm.fit(X, y)
fm.predict(v.transform({"user": "1", "item": "10", "age": 24}))
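
To check how DictVectorizer laid out the columns above, you can print the learned feature names (assuming scikit-learn >= 1.0; older versions use get_feature_names() instead):

# Numeric 'age' comes first, then the one-hot 'item=*' and 'user=*' columns
# in lexicographic order
print(v.get_feature_names_out())
# ['age', 'item=10', 'item=20', 'item=43', 'item=5', 'user=1', 'user=2', 'user=3', 'user=4']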

Getting Started

Here's an example on some real movie ratings data.

First, get the smallest MovieLens ratings dataset from http://www.grouplens.org/system/files/ml-100k.zip. ml-100k contains the files u.item (a list of movie ids and titles) and u.data (a list of user_id, movie_id, rating, timestamp rows).

import numpy as np
from sklearn.feature_extraction import DictVectorizer
from pyfm import pylibfm

# Read in data
def loadData(filename, path="ml-100k/"):
    data = []
    y = []
    users = set()
    items = set()
    with open(path + filename) as f:
        for line in f:
            # u.data rows are tab-separated: user, movie, rating, timestamp
            (user, movieid, rating, ts) = line.split('\t')
            data.append({"user_id": str(user), "movie_id": str(movieid)})
            y.append(float(rating))
            users.add(user)
            items.add(movieid)
    return (data, np.array(y), users, items)

(train_data, y_train, train_users, train_items) = loadData("ua.base")
(test_data, y_test, test_users, test_items) = loadData("ua.test")
v = DictVectorizer()
X_train = v.fit_transform(train_data)
X_test = v.transform(test_data)

# Build and train a Factorization Machine
fm = pylibfm.FM(num_factors=10, num_iter=100, verbose=True, task="regression", initial_learning_rate=0.001, learning_rate_schedule="optimal")

fm.fit(X_train,y_train)
Creating validation dataset of 0.01 of training for adaptive regularization
-- Epoch 1
Training MSE: 0.59477
-- Epoch 2
Training MSE: 0.51841
-- Epoch 3
Training MSE: 0.49125
-- Epoch 4
Training MSE: 0.47589
-- Epoch 5
Training MSE: 0.46571
-- Epoch 6
Training MSE: 0.45852
-- Epoch 7
Training MSE: 0.45322
-- Epoch 8
Training MSE: 0.44908
-- Epoch 9
Training MSE: 0.44557
-- Epoch 10
Training MSE: 0.44278
...
-- Epoch 98
Training MSE: 0.41863
-- Epoch 99
Training MSE: 0.41865
-- Epoch 100
Training MSE: 0.41874

# Evaluate
preds = fm.predict(X_test)
from sklearn.metrics import mean_squared_error
print("FM MSE: %.4f" % mean_squared_error(y_test,preds))
FM MSE: 0.9227
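
To score a single user/movie pair, transform one dict with the same keys used in training; the ids below are arbitrary examples, not taken from the walkthrough:

# Predict a rating for one (user, movie) pair; ids are strings, as in training
print(fm.predict(v.transform({"user_id": "1", "movie_id": "100"})))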

Classification example

import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split
from pyfm import pylibfm

from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=100, n_clusters_per_class=1)
# Represent each row as a dict mapping feature index -> feature value
data = [dict(enumerate(row)) for row in X]

X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.1, random_state=42)

v = DictVectorizer()
X_train = v.fit_transform(X_train)
X_test = v.transform(X_test)

fm = pylibfm.FM(num_factors=50, num_iter=10, verbose=True, task="classification", initial_learning_rate=0.0001, learning_rate_schedule="optimal")

fm.fit(X_train,y_train)

Creating validation dataset of 0.01 of training for adaptive regularization
-- Epoch 1
Training log loss: 1.91885
-- Epoch 2
Training log loss: 1.62022
-- Epoch 3
Training log loss: 1.36736
-- Epoch 4
Training log loss: 1.15562
-- Epoch 5
Training log loss: 0.97961
-- Epoch 6
Training log loss: 0.83356
-- Epoch 7
Training log loss: 0.71208
-- Epoch 8
Training log loss: 0.61108
-- Epoch 9
Training log loss: 0.52705
-- Epoch 10
Training log loss: 0.45685

# Evaluate
from sklearn.metrics import log_loss
print("Validation log loss: %.4f" % log_loss(y_test, fm.predict(X_test)))
Validation log loss: 1.5025
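
Since task="classification" makes predict return sigmoid-scaled scores, hard labels can be obtained by thresholding at 0.5 (a quick sketch, not part of the original example):

from sklearn.metrics import accuracy_score
# Threshold the predicted probabilities at 0.5 to get hard 0/1 labels
y_pred = fm.predict(X_test) >= 0.5
print("Accuracy: %.4f" % accuracy_score(y_test, y_pred))
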
Comments
  • FM's on simple Sklearn's boston data giving NaN's

    This is giving errors, am I missing something?

    from scipy import sparse
    from sklearn.datasets import load_boston
    import pylibfm

    # instantiate FM instance with 7 latent factors
    fm = pylibfm.FM(num_factors=7, num_iter=6, verbose=True)

    # load dataset
    boston = load_boston()

    # fit FM, making sure to wrap the ndarray as a sparse csr
    fm.fit(sparse.csr_matrix(boston.data), boston.target)

    Creating validation dataset of 0.01 of training for adaptive regularization
    -- Epoch 1
    Training log loss: nan
    -- Epoch 2
    Training log loss: nan
    -- Epoch 3
    Training log loss: nan
    -- Epoch 4
    Training log loss: nan
    -- Epoch 5
    Training log loss: nan
    -- Epoch 6
    Training log loss: nan

    fm.v is also all nan.

    opened by silkspace 16
  • Not converge when training?

    You can see how the RMSE changes below, using the example from the README. Why does it not converge, and how can I fix it?

    --- git/pyFM ‹master* ?› » python example.py 
    Creating validation dataset of 0.01 of training for adaptive regularization
    -- Epoch 1
    Training RMSE: 0.49676
    -- Epoch 2
    Training RMSE: 0.44940
    -- Epoch 3
    Training RMSE: 0.44133
    -- Epoch 4
    Training RMSE: 0.43757
    -- Epoch 5
    Training RMSE: 0.43599
    -- Epoch 6
    Training RMSE: 0.43494
    -- Epoch 7
    Training RMSE: 0.43381
    -- Epoch 8
    Training RMSE: 0.43375
    -- Epoch 9
    Training RMSE: 0.43324
    -- Epoch 10
    Training RMSE: 0.43272
    -- Epoch 11
    Training RMSE: 0.43310
    -- Epoch 12
    Training RMSE: 0.43255
    -- Epoch 13
    Training RMSE: 0.43229
    -- Epoch 14
    Training RMSE: 0.43235
    -- Epoch 15
    Training RMSE: 0.43214
    -- Epoch 16
    Training RMSE: 0.43237
    -- Epoch 17
    Training RMSE: 0.43242
    -- Epoch 18
    Training RMSE: 0.43247
    -- Epoch 19
    Training RMSE: 0.43308
    -- Epoch 20
    Training RMSE: 0.44136
    -- Epoch 21
    Training RMSE: 0.44681
    -- Epoch 22
    Training RMSE: 0.44714
    -- Epoch 23
    Training RMSE: nan
    
    opened by geekan 3
  • bug in classification prediction

    It seems there is a bug in pyfm_fast.pyx in the prediction code for classification tasks:

    In the _predict method, the outcome is calculated in line 252 using the predict_instance method. predict_instance evaluates the FM model and then scales the result with the sigmoid function for classification in _scale_prediction (line 179), which is fine and also needed for training. The problem I see is that this sigmoid transformation is applied again in line 252, which means _predict always returns values > 0.5 because we apply the sigmoid twice. There is no effect for regression tasks, due to the different handling of classification/regression within _scale_prediction.

    What do you think?
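
    A minimal sketch (not from the issue itself) of why a second sigmoid pins every prediction above 0.5:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    raw = np.array([-5.0, 0.0, 5.0])   # raw FM scores
    print(sigmoid(raw))                # intended output: [0.0067 0.5 0.9933]
    print(sigmoid(sigmoid(raw)))       # double-scaled: [0.5017 0.6225 0.7297]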

    opened by tkuTM 2
  • Easy installation

    This PR is meant to be merged after https://github.com/coreylynch/pyFM/pull/9 .

    This change enables pip install from GitHub, as follows:

    pip install git+https://github.com/coreylynch/pyFM
    

    Of course, I would be very happy if you registered pyFM on PyPI :smile:

    opened by chezou 1
  • scale sigmoid.

    I noticed that all classification results are negative and all of them are far from 0, so I read the code and the issues, and found that the sigmoid scaling is needed for classification (exactly once: no less, no more).

    https://github.com/coreylynch/pyFM/commit/d249b1cca6a011021dae66816751018cc633b6bb may remove both sigmoids. @coreylynch what do you think?

    opened by geekan 1
  • Implement the Pickle interface for FM_fast

    Without this change, attempting to pickle the model results in an error, because the FM_fast class lacks a proper implementation of the pickle interface. This PR implements it and removes a few unnecessary spaces. Attributes of the class are also made public, because otherwise pickle has no access to them.

    The models can potentially be quite large, over 4GB depending on how many features you use, so I recommend using joblib instead of pickle.
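
    For example (a sketch; the filename is arbitrary):

    import joblib
    # joblib stores large numpy-backed models more efficiently than pickle
    joblib.dump(fm, "fm_model.joblib")
    fm = joblib.load("fm_model.joblib")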

    opened by tiagozortea 0
  • sklearn.cross_validation is deprecated

    Every time you load the package you see the deprecation warning. In future versions of scikit-learn, cross_validation will be removed. With this pull request I suggest the change needed to stay compatible with future versions of scikit-learn.
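
    The fix is a one-line import swap (model_selection is available in scikit-learn >= 0.18):

    # old: from sklearn.cross_validation import train_test_split
    from sklearn.model_selection import train_test_split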

    opened by tiagootto 0
  • indptr not found

    Train_x and Test_x are both scipy sparse matrices. fm.fit() runs normally, but predict raises the error "indptr not found" when it calls CSRDataset(). Why does this error not occur in fit()?

    opened by linxiexiong 0
  • s/RMSE/MSE/

    The errors reported as RMSE actually seem to be the mean squared error (MSE). This PR changes RMSE to MSE.

    scikit-learn's mean_squared_error computes the error without taking the square root: https://github.com/scikit-learn/scikit-learn/blob/c957249/sklearn/metrics/regression.py#L232-L233
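
    For an actual RMSE, take the square root of scikit-learn's result, e.g.:

    import numpy as np
    from sklearn.metrics import mean_squared_error
    rmse = np.sqrt(mean_squared_error(y_test, preds))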

    opened by chezou 0
  • will it work for third order categorical features interaction ?

    Great code, thanks!

    Please help me understand:

    1. Will it work for third-order categorical feature interactions?
    2. Will it run on a Windows computer?
    3. Will it work for sparse data?

    opened by Sandy4321 0
Owner
Corey Lynch
Research Engineer, Robotics @ Google Brain
Related Projects

fastFM: A Library for Factorization Machines

Citing fastFM The library fastFM is an academic project. The time and resources spent developing fastFM are therefore justified by the number of citat

null 1k Dec 24, 2022
TensorFlow implementation of an arbitrary order Factorization Machine

This is a TensorFlow implementation of an arbitrary order (>=2) Factorization Machine based on paper Factorization Machines with libFM. It supports: d

Mikhail Trofimov 785 Dec 21, 2022
High performance implementation of Extreme Learning Machines (fast randomized neural networks).

High Performance toolbox for Extreme Learning Machines. Extreme learning machines (ELM) are a particular kind of Artificial Neural Networks, which sol

Anton Akusok 174 Dec 7, 2022
Python library which makes it possible to dynamically mask/anonymize data using JSON string or python dict rules in a PySpark environment.

pyspark-anonymizer Python library which makes it possible to dynamically mask/anonymize data using JSON string or python dict rules in a PySpark envir

null 6 Jun 30, 2022
Educational python for Neural Networks, written in pure Python/NumPy.

Educational python for Neural Networks, written in pure Python/NumPy.

null 127 Oct 27, 2022
learn python in 100 days, a simple step could be follow from beginner to master of every aspect of python programming and project also include side project which you can use as demo project for your personal portfolio

learn python in 100 days, a simple step could be follow from beginner to master of every aspect of python programming and project also include side project which you can use as demo project for your personal portfolio

BDFD 6 Nov 5, 2022
A modular active learning framework for Python

Modular Active Learning framework for Python3 Page contents Introduction Active learning from bird's-eye view modAL in action From zero to one in a fe

modAL 1.9k Dec 31, 2022
A library of extension and helper modules for Python's data analysis and machine learning libraries.

Mlxtend (machine learning extensions) is a Python library of useful tools for the day-to-day data science tasks. Sebastian Raschka 2014-2021 Links Doc

Sebastian Raschka 4.2k Dec 29, 2022
Sequence learning toolkit for Python

seqlearn seqlearn is a sequence classification toolkit for Python. It is designed to extend scikit-learn and offer as similar as possible an API. Comp

Lars 653 Dec 27, 2022
Simple structured learning framework for python

PyStruct PyStruct aims at being an easy-to-use structured learning and prediction library. Currently it implements only max-margin methods and a perce

pystruct 666 Jan 3, 2023
Python implementation of the rulefit algorithm

RuleFit Implementation of a rule based prediction algorithm based on the rulefit algorithm from Friedman and Popescu (PDF) The algorithm can be used f

Christoph Molnar 326 Jan 2, 2023
Metric learning algorithms in Python

metric-learn: Metric Learning in Python metric-learn contains efficient Python implementations of several popular supervised and weakly-supervised met

null 1.3k Dec 28, 2022
[HELP REQUESTED] Generalized Additive Models in Python

pyGAM Generalized Additive Models in Python. Documentation Official pyGAM Documentation: Read the Docs Building interpretable models with Generalized

daniel servén 747 Jan 5, 2023
Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

Karate Club is an unsupervised machine learning extension library for NetworkX. Please look at the Documentation, relevant Paper, Promo Video, and Ext

Benedek Rozemberczki 1.8k Jan 3, 2023
Open source time series library for Python

PyFlux PyFlux is an open source time series library for Python. The library has a good array of modern time series models, as well as a flexible array

Ross Taylor 2k Jan 2, 2023
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

Master status: Development status: Package information: TPOT stands for Tree-based Pipeline Optimization Tool. Consider TPOT your Data Science Assista

Epistasis Lab at UPenn 8.9k Jan 9, 2023
MLBox is a powerful Automated Machine Learning python library.

MLBox is a powerful Automated Machine Learning python library. It provides the following features: Fast reading and distributed data preprocessing/cle

Axel 1.4k Jan 6, 2023
Python package for stacking (machine learning technique)

vecstack Python package for stacking (stacked generalization) featuring lightweight functional API and fully compatible scikit-learn API Convenient wa

Igor Ivanov 671 Dec 25, 2022
A Python Package to Tackle the Curse of Imbalanced Datasets in Machine Learning

imbalanced-learn imbalanced-learn is a python package offering a number of re-sampling techniques commonly used in datasets showing strong between-cla

null 6.2k Jan 1, 2023