Overview

DuoRec

Code for WSDM 2022 paper, Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation.

Usage

Download the datasets from RecSysDatasets or their Google Drive, and put the files in ./dataset/ as shown below.

$ tree
.
├── Amazon_Beauty
│   ├── Amazon_Beauty.inter
│   └── Amazon_Beauty.item
├── Amazon_Clothing_Shoes_and_Jewelry
│   ├── Amazon_Clothing_Shoes_and_Jewelry.inter
│   └── Amazon_Clothing_Shoes_and_Jewelry.item
├── Amazon_Sports_and_Outdoors
│   ├── Amazon_Sports_and_Outdoors.inter
│   └── Amazon_Sports_and_Outdoors.item
├── ml-1m
│   ├── ml-1m.inter
│   ├── ml-1m.item
│   ├── ml-1m.user
│   └── README.md
└── yelp
    ├── README.md
    ├── yelp.inter
    ├── yelp.item
    └── yelp.user

Run duorec.sh.
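
A minimal sketch of what duorec.sh ultimately runs (per run_seq.py in this repo), assuming the repo's seq.yaml is the config file and using ml-1m as an illustrative dataset; the shipped script may pass additional hyperparameter flags:

from recbole.quick_start import run_recbole

# Same call that run_seq.py issues after parsing --model/--dataset;
# 'seq.yaml' is assumed to hold the sequential-recommendation settings.
run_recbole(model='DuoRec', dataset='ml-1m', config_file_list=['seq.yaml'])

Switching model='DuoRec' to model='CL4SRec' corresponds to the change described in the MISC section below.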

Cite

If you find this repo useful, please cite

@article{DuoRec,
  author    = {Ruihong Qiu and
               Zi Huang and
               Hongzhi Yin and
               Zijian Wang},
  title     = {Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation},
  journal   = {CoRR},
  volume    = {abs/2110.05730},
  year      = {2021},
}

MISC

We have also implemented CL4SRec (Contrastive Learning for Sequential Recommendation). Change --model="DuoRec" to --model="CL4SRec" in duorec.sh to run CL4SRec.

Credit

This repo is based on RecBole.

Comments
  • issues about CL4SRec

    Hello, I tried to run the CL4SRec model and wanted to evaluate it with pop100, so, following the RecBole documentation, I modified eval_setting in seq.yaml:

    eval_setting: TO_LS, pop100
    

    but I got the following error:

    Traceback (most recent call last):
      File "/code/DuoRec/run_seq.py", line 15, in <module>
        run_recbole(model=args.model, dataset=args.dataset, config_file_list=config_file_list)
      File "/code/DuoRec/recbole/quick_start/quick_start.py", line 48, in run_recbole
        train_data, valid_data, test_data = data_preparation(config, dataset)
      File "/code/DuoRec/recbole/data/utils.py", line 170, in data_preparation
        valid_data = dataloader(**valid_kwargs)
    TypeError: __init__() got an unexpected keyword argument 'phase'
    

    I see that SequentialNegSampleDataLoader in ~data/dataloader/sequential_dataloader.py does not take phase as a parameter, and the dataloader code here differs quite a bit from the official repo. Perhaps part of your code is based on an earlier version, causing a compatibility issue?

    It should be just a minor issue.

    opened by Furyton 3
  • the bug of the CL4SREC code

    In the augmentation phase of CL4SRec, which includes three operations, i.e., item_crop, item_mask, and item_reorder, the code for item_mask and item_reorder unexpectedly modifies the source sequence, so the augmented sequence ends up identical to the original one. The main reason is that the slicing operation is performed on the tensor instead of on a list.
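
    A hypothetical snippet illustrating the pitfall described above (not the repo's actual augmentation code): slicing a PyTorch tensor returns a view, so in-place edits write back to the source sequence, whereas slicing a Python list returns a copy.

    import torch

    seq = torch.tensor([1, 2, 3, 4, 5])
    window = seq[1:4]        # tensor slicing returns a view sharing storage
    window[0] = 0            # in-place edit propagates to the source tensor
    print(seq)               # tensor([1, 0, 3, 4, 5])

    seq_list = [1, 2, 3, 4, 5]
    window = seq_list[1:4]   # list slicing returns a new list
    window[0] = 0
    print(seq_list)          # [1, 2, 3, 4, 5] -- source unchanged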

    opened by ShelveyJ 0
  • Cannot reproduce the same result posted in the paper. CE loss vs. BPR loss. Nice try.

    I used the source code provided here, downloaded the ml-1m dataset, and just ran sh duorec.sh, but I can't get the results reported in the paper.

    More interestingly, the baselines use BPR loss as the main objective, which takes 1 positive and 1 negative item each time, while this work uses CE loss, which takes 1 positive item and all of the remaining items as negatives. It is unfair.

    When I change the main objective to BPR loss, which is the commonly used loss in previous papers, the performance is even lower than CL4SRec.
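
    A minimal sketch of the two objectives being contrasted here, with hypothetical shapes and embeddings (not the repo's code): BPR scores one positive against one sampled negative, while full-softmax CE scores the positive against every item.

    import torch
    import torch.nn.functional as F

    seq_repr  = torch.randn(32, 64)          # (batch, hidden) sequence representations
    item_emb  = torch.randn(10000, 64)       # (num_items, hidden) item embedding table
    pos_items = torch.randint(0, 10000, (32,))
    neg_items = torch.randint(0, 10000, (32,))

    # BPR: 1 positive vs. 1 sampled negative per example
    pos_score = (seq_repr * item_emb[pos_items]).sum(-1)
    neg_score = (seq_repr * item_emb[neg_items]).sum(-1)
    bpr_loss  = -F.logsigmoid(pos_score - neg_score).mean()

    # CE: the positive vs. all items (full softmax)
    logits  = seq_repr @ item_emb.t()        # (batch, num_items)
    ce_loss = F.cross_entropy(logits, pos_items)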

    opened by peggy95 1
  • Reproducing ML-1M results from the paper.

    In the paper you report 0.2946 Recall@10 for ML-1M; however, when I run the duorec.sh command, I get results around 0.25-0.26. Could you please share the run configuration that I can use to reproduce the paper's results?

    opened by asash 0
  • About Dataset

    Hi, thanks for your great work! But I have a question. In section 5.1 of the paper, you said "Following baselines [19, 41, 51, 58], the widely used Amazon dataset is chosen in our experiments with three sub-categories." But [19] (Self-Attentive Sequential Recommendation, ICDM 2018) and [58] (S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization, CIKM 2020) use different datasets: the former uses the Amazon ratings-only datasets, while the latter uses the 5-core datasets, and model performance varies widely across them. So which kind of dataset are you using? Looking forward to your reply!

    opened by anunverse 0
  • The number of negative items on SASRec and DuoRec

    Hello, I find that the number of negative items is set to 1 for SASRec, whereas DuoRec sets it to all items. Besides, I experimented with SASRec using all items as negatives and found that its performance is comparable with DuoRec. Looking forward to your reply.

    opened by paulpig 3
Owner
Qrh
Code for "Retrieving Black-box Optimal Images from External Databases" (WSDM 2022)

Retrieving Black-box Optimal Images from External Databases (WSDM 2022) We propose how a user retrieves an optimal image from external databases of we

joisino 5 Apr 13, 2022
Leveraging Two Types of Global Graph for Sequential Fashion Recommendation, ICMR 2021

This is the repo for the paper: Leveraging Two Types of Global Graph for Sequential Fashion Recommendation Requirements OS: Ubuntu 16.04 or higher ver

Yujuan Ding 10 Oct 10, 2022
Transformers4Rec is a flexible and efficient library for sequential and session-based recommendation, available for both PyTorch and Tensorflow.

Transformers4Rec is a flexible and efficient library for sequential and session-based recommendation, available for both PyTorch and Tensorflow.

null 730 Jan 9, 2023
Locally Constrained Self-Attentive Sequential Recommendation

LOCKER This is the pytorch implementation of this paper: Locally Constrained Self-Attentive Sequential Recommendation. Zhankui He, Handong Zhao, Zhe L

Zhankui (Aaron) He 8 Jul 30, 2022
A PaddlePaddle implementation of Time Interval Aware Self-Attentive Sequential Recommendation.

TiSASRec.paddle A PaddlePaddle implementation of Time Interval Aware Self-Attentive Sequential Recommendation. Introduction Paper: Time Interval Aware Sel

Paddorch 2 Nov 28, 2021
[CVPR 2022 Oral] Rethinking Minimal Sufficient Representation in Contrastive Learning

Rethinking Minimal Sufficient Representation in Contrastive Learning PyTorch implementation of Rethinking Minimal Sufficient Representation in Contras

null 36 Nov 23, 2022
[CVPR 2022 Oral] Crafting Better Contrastive Views for Siamese Representation Learning

Crafting Better Contrastive Views for Siamese Representation Learning (CVPR 2022 Oral) 2022-03-29: The paper was selected as a CVPR 2022 Oral paper! 2

null 249 Dec 28, 2022
A PyTorch implementation of "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation" (WSDM 2019).

SimGNN ⠀⠀⠀ A PyTorch implementation of SimGNN: A Neural Network Approach to Fast Graph Similarity Computation (WSDM 2019). Abstract Graph similarity s

Benedek Rozemberczki 534 Dec 25, 2022
Hierarchical Metadata-Aware Document Categorization under Weak Supervision (WSDM'21)

Hierarchical Metadata-Aware Document Categorization under Weak Supervision This project provides a weakly supervised framework for hierarchical metada

Yu Zhang 53 Sep 17, 2022
PyTorch implementation code for the paper MixCo: Mix-up Contrastive Learning for Visual Representation

How to Reproduce our Results This repository contains PyTorch implementation code for the paper MixCo: Mix-up Contrastive Learning for Visual Represen

opcrisis 46 Dec 15, 2022
Problem-943.-ACMP - Problem 943. ACMP

Problem-943.-ACMP In "main.py" is a variant of my solution to problem 943 from the serv

Konstantin Dyomshin 2 Aug 19, 2022
Recommendationsystem - Movie-recommendation - matrixfactorization collaborative filtering recommendation system user

recommendationsystem matrixfactorization collaborative filtering recommendation

kunal jagdish madavi 1 Jan 1, 2022
Code for our ACL 2021 paper - ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer

ConSERT Code for our ACL 2021 paper - ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer Requirements torch==1.6.0

Yan Yuanmeng 478 Dec 25, 2022
Imposter-detector-2022 - HackED 2022 Team 3IQ - 2022 Imposter Detector

HackED 2022 Team 3IQ - 2022 Imposter Detector By Aneeljyot Alagh, Curtis Kan, Jo

Joshua Ji 3 Aug 20, 2022
PyTorch reimplementation of the Smooth ReLU activation function proposed in the paper "Real World Large Scale Recommendation Systems Reproducibility and Smooth Activations" [arXiv 2022].

Smooth ReLU in PyTorch Unofficial PyTorch reimplementation of the Smooth ReLU (SmeLU) activation function proposed in the paper Real World Large Scale

Christoph Reich 10 Jan 2, 2023
Code in conjunction with the publication 'Contrastive Representation Learning for Hand Shape Estimation'

HanCo Dataset & Contrastive Representation Learning for Hand Shape Estimation Code in conjunction with the publication: Contrastive Representation Lea

Computer Vision Group, Albert-Ludwigs-Universität Freiburg 38 Dec 13, 2022