Learning from History: Modeling Temporal Knowledge Graphs with Sequential Copy-Generation Networks

Overview

CyGNet

This repository reproduces the AAAI'21 paper “Learning from History: Modeling Temporal Knowledge Graphs with Sequential Copy-Generation Networks” in PyTorch.

Abstract

Large knowledge graphs often grow to store temporal facts that model the dynamic relations or interactions of entities along the timeline. Since such temporal knowledge graphs often suffer from incompleteness, it is important to develop time-aware representation learning models that help to infer the missing temporal facts. While the temporal facts are typically evolving, it is observed that many facts often show a repeated pattern along the timeline, such as economic crises and diplomatic activities. This observation indicates that a model could potentially learn much from the known facts that appeared in history. To this end, we propose a new representation learning model for temporal knowledge graphs, namely CyGNet, based on a novel time-aware copy-generation mechanism. CyGNet is not only able to predict future facts from the whole entity vocabulary, but also capable of identifying facts with repetition and accordingly predicting such future facts with reference to the known facts in the past. We evaluate the proposed method on the knowledge graph completion task using five benchmark datasets. Extensive experiments demonstrate the effectiveness of CyGNet for predicting future facts with repetition as well as de novo fact prediction.
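
To make the copy-generation idea concrete, here is a minimal sketch (in PyTorch, not the authors' exact implementation) of how a copy mode restricted to historically seen objects can be mixed with a generation mode over the whole entity vocabulary. The mixing weight corresponds to the -alpha flag used in the commands below; all tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def combined_scores(copy_logits, generation_logits, history_mask, alpha=0.8):
    """Mix a copy distribution over historically seen objects with a
    generation distribution over the whole entity vocabulary.

    copy_logits, generation_logits: (batch, num_entities) tensors.
    history_mask: (batch, num_entities) bool tensor, True where the object
        appeared with this (subject, relation) pair in the past.
    """
    # Restrict the copy mode to entities that actually occur in history.
    masked = copy_logits.masked_fill(~history_mask, float('-inf'))
    copy_probs = F.softmax(masked, dim=-1)
    # Rows with an empty history are all -inf and softmax to NaN; zero them.
    has_history = history_mask.any(dim=-1, keepdim=True)
    copy_probs = torch.where(has_history, copy_probs, torch.zeros_like(copy_probs))
    gen_probs = F.softmax(generation_logits, dim=-1)
    return alpha * copy_probs + (1 - alpha) * gen_probs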

Environment

python 3.7
pytorch 1.3.0

Dataset

There are five datasets (from RE-NET): ICEWS18, ICEWS14, GDELT, WIKI, and YAGO. Timestamps in the test set must be later than those in the train and validation sets, and timestamps in the validation set must be later than those in the train set. Each dataset folder contains 'stat.txt', 'train.txt', 'valid.txt', and 'test.txt'.
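
For reference, a minimal loader sketch, assuming RE-NET's tab-separated quadruple format (one subject, relation, object, time per line; extra columns, if present, are ignored):

import numpy as np

def load_quadruples(path):
    """Load (subject, relation, object, time) quadruples from a
    tab-separated file such as train.txt, valid.txt, or test.txt."""
    quads = []
    with open(path, 'r') as f:
        for line in f:
            parts = line.strip().split('\t')
            if len(parts) < 4:
                continue  # skip blank or malformed lines
            s, r, o, t = map(int, parts[:4])
            quads.append((s, r, o, t))
    return np.array(quads, dtype=np.int64)

# e.g. train = load_quadruples('data/ICEWS14/train.txt')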

Run the experiment

We first build the historical vocabulary:

python get_historical_vocabulary.py --dataset DATA_NAME
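
Conceptually, this step records, for each (subject, relation) pair, which objects were observed at earlier timestamps. A simplified sketch of that bookkeeping (not the script's exact output format):

from collections import defaultdict

def build_history(quads):
    """For each quadruple, collect the objects that appeared with the same
    (subject, relation) pair at strictly earlier timestamps.

    quads: sequence of (s, r, o, t) tuples, sorted by t.
    Returns one frozenset of past objects per quadruple.
    """
    seen = defaultdict(set)              # (s, r) -> objects observed so far
    history, buffer, prev_t = [], [], None
    for s, r, o, t in quads:
        if t != prev_t:                  # new timestamp: commit the previous one
            for bs, br, bo in buffer:
                seen[(bs, br)].add(bo)
            buffer, prev_t = [], t
        history.append(frozenset(seen[(s, r)]))
        buffer.append((s, r, o))
    return history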

Then, train and test the model:

python train_test.py --dataset ICEWS18 --time-stamp 24 -alpha 0.8 -lr 0.001 --n-epoch 30 --hidden-dim 200 -gpu 0 --batch-size 1024 --counts 3
python train_test.py --dataset ICEWS14 --time-stamp 24 -alpha 0.8 -lr 0.001 --n-epoch 30 --hidden-dim 200 -gpu 0 --batch-size 1024 --counts 3
python train_test.py --dataset GDELT --time-stamp 15 -alpha 0.7 -lr 0.001 --n-epoch 30 --hidden-dim 200 -gpu 0 --batch-size 1024 --counts 2
python train_test.py --dataset WIKI --time-stamp 1 -alpha 0.7 -lr 0.001 --n-epoch 30 --hidden-dim 200 -gpu 0 --batch-size 1024 --counts 5
python train_test.py --dataset YAGO --time-stamp 1 -alpha 0.7 -lr 0.001 --n-epoch 30 --hidden-dim 200 -gpu 0 --batch-size 1024 --counts 5

Reference

Bibtex:

@inproceedings{zhu-etal-2021-cygnet,
  title     = {Learning from History: Modeling Temporal Knowledge Graphs with Sequential Copy-Generation Networks},
  author    = {Zhu, Cunchao and Chen, Muhao and Fan, Changjun and Cheng, Guangquan and Zhang, Yan},
  booktitle = {AAAI},
  year      = {2021}
}
Comments
  • Typos and Hyper-parameters

    Dear CunchaoZ,

    Thanks for providing the open-source code for your fantastic AAAI'21 work. When I tried to run it, I found two typos:

    1. get_historical_vocabulary.py, line 59: svae_dir_sub
    2. In class link_prediction (link_prediction.py), self.num_times is not defined

    After fixing these typos, I was able to run the code. However, I am having trouble reproducing the numbers reported in the paper. Could you release the hyper-parameters used in training?

    Thanks!

    opened by tedzhouhk 2
  • I have a question on the whole entity vocabulary

    I found that Figure 2 in the paper shows all entities in the whole entity vocabulary as NBA teams, but the whole entity vocabulary should consist of all entities in the temporal knowledge graph, not just NBA teams. I am not sure whether I understand this correctly; I'd be grateful for a reply!

    opened by Ckeai 0
  • Single Step Prediction Possible?

    Hello,

    From the code and the paper, I understand that the CyGNet model always predicts multiple steps ahead (all steps in the test set) without being fed the ground truth in between; is this correct?

    Is it possible to set a parameter, or easily modify the code, so that CyGNet also does single-step prediction? By this I mean the following (see the sketch after the quote below):

    for each timestep t in test_set:

    • Predict subject/object for all triples in t -> compute scores for these triples
    • Feed the ground truth for t
    • t = t + 1

    Similar to the setting described in RE-GCN, 5.14: https://arxiv.org/pdf/2104.10353.pdf

    To conduct the multi-step inference [16] in the validation set and the test set, we evaluate the performance of RE-GCN with the evolutional embeddings of entities and relations at the final timestamp of the training set as the input of score functions following [43]. Besides, we also report the results of the models with ground truth history given during multi-step inference on the test set, namely, w. GT.
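
    In code, the loop I have in mind would look roughly like this (hypothetical helper names: model.score, compute_ranks, and update_history stand in for CyGNet's scoring, ranking, and history bookkeeping):

    def single_step_evaluate(model, batches_by_time, history):
        """Single-step protocol: score each timestamp's triples, then feed
        the ground truth back before moving to the next timestamp.

        batches_by_time: list of (t, quads) pairs in temporal order.
        history: mutable record of (subject, relation) -> past objects.
        """
        ranks = []
        for t, quads in batches_by_time:
            scores = model.score(quads, history)   # predictions for time t only
            ranks.extend(compute_ranks(scores, quads))
            update_history(history, quads)         # ground truth for t fed back
        return ranks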

    Best Julia

    opened by JuliaGast 0
  • Updates time subsetting to use np.where

    Speeds up time subsetting by roughly 10x.

    Examples:

    Calling ./get_historical_vocabulary.py --dataset XXX on a large dataset with 100,000 time steps previously took ~20 minutes and now takes ~2 minutes.

    Calling ./get_historical_vocabulary.py --dataset WIKI now runs at ~81 it/s rather than ~8 it/s.
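
    The pattern behind the speed-up, roughly (hypothetical names; quads is an (N, 4) integer array whose last column is the timestamp):

    import numpy as np

    # Before: a Python-level scan over every quadruple for each timestamp.
    def subset_slow(quads, t):
        return np.array([q for q in quads if q[3] == t])

    # After: vectorized boolean indexing via np.where on the time column.
    def subset_fast(quads, t):
        return quads[np.where(quads[:, 3] == t)[0]]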

    opened by davidshumway 0
Owner
CunchaoZ