Leveraging Two Types of Global Graph for Sequential Fashion Recommendation
This is the repo for the paper: Leveraging Two Types of Global Graph for Sequential Fashion Recommendation.
Requirements
- OS: Ubuntu 16.04 or higher
- Python 3.7
- Supported (tested) CUDA version: V10.2
- Python modules: see requirements.txt
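As a quick sanity check before training, one can verify that the required modules are importable. The module names below are placeholders; the authoritative list lives in requirements.txt:

```python
import importlib.util

# Placeholder module names -- substitute the actual entries
# from the requirements.txt shipped with the repo.
required_modules = ["numpy", "yaml", "torch", "tensorboard"]

# find_spec returns None when a module cannot be located.
missing = [m for m in required_modules
           if importlib.util.find_spec(m) is None]

if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All required modules are importable.")
```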
Code Structure
- The entry script for training and evaluation is: train.py
- The config file is: config.yaml
- The script for data preprocessing and data loading: utility.py
- The model folder: ./model/.
- The experimental logs in tensorboard-format are saved in ./logs.
- The experimental logs in txt-format are saved in ./performance.
- The best model for each experimental setting is saved in ./model_saves.
- The recommendation results in the evaluation are recorded in ./results.
- The ./logs, ./performance, ./model_saves and ./results folders are generated automatically the first time the code runs.
- The script get_all_the_res.py prints the performance of all trained and tested models to the terminal.
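The automatic creation of the output folders listed above usually takes only a few lines at startup. A minimal sketch (the folder names match the list above; everything else is illustrative, not the repo's actual code):

```python
import os

# Output folders described in the repo layout; created on first run.
OUTPUT_DIRS = ["./logs", "./performance", "./model_saves", "./results"]

def ensure_output_dirs(base="."):
    """Create each output folder under `base` if it does not exist yet."""
    for d in OUTPUT_DIRS:
        # exist_ok=True makes repeated runs a no-op.
        os.makedirs(os.path.join(base, d), exist_ok=True)
```

Calling `ensure_output_dirs()` once at the top of the entry script is enough; later runs simply find the folders already in place.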
How to Run
- Download the dataset, decompress it, and put it in the top directory with the following command. Note that the download includes the two datasets used in the paper: iFashion and amazon_fashion.
tar zxvf dgsr_dataset.tar.gz
- Settings in the config file config.yaml are basic experimental settings, which are usually kept fixed in the experiments. To tune other hyper-parameters, pass them on the command line. The supported command-line hyper-parameters include: dataset (-d), sequence length (-l), and embedding size (-e). You can also specify which GPU device (-g) to use.
- Run training and evaluation with the specified hyper-parameters:
python train.py -d=ifashion -l=5 -e=50 -g=0
- During training, you can monitor the training loss and evaluation performance with TensorBoard. Change into ./logs and track the training and evaluation curves with:
tensorboard --host="your host ip" --logdir=./
- The performance of each model is saved in ./performance. You can open the folder to inspect the detailed training process of any finished experiment (it is the human-readable txt counterpart of the TensorBoard log saved in ./logs). To quickly check the results of all finished experiments in a table on the terminal, run:
python get_all_the_res.py
- The best model for each experimental setting is saved in ./model_saves.
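The command-line interface described in the steps above (dataset, sequence length, embedding size, GPU) could be wired up with argparse roughly as in this sketch; the long option names and defaults are assumptions for illustration, not the repo's actual code:

```python
import argparse

def build_parser():
    """CLI sketch mirroring the flags documented above.

    Long names (--dataset etc.) and defaults are assumptions;
    only -d, -l, -e, -g come from the README.
    """
    parser = argparse.ArgumentParser(description="Train and evaluate the model.")
    parser.add_argument("-d", "--dataset", type=str, default="ifashion",
                        help="dataset name: ifashion or amazon_fashion")
    parser.add_argument("-l", "--seq_len", type=int, default=5,
                        help="sequence length")
    parser.add_argument("-e", "--embed_size", type=int, default=50,
                        help="embedding size")
    parser.add_argument("-g", "--gpu", type=int, default=0,
                        help="GPU device id")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

With such a parser, the run command shown above, python train.py -d=ifashion -l=5 -e=50 -g=0, parses into the four hyper-parameters directly.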