Overview

MINERVA

Meandering In Networks of Entities to Reach Verisimilar Answers

Code and models for the paper "Go for a Walk and Arrive at the Answer: Reasoning over Paths in Knowledge Bases using Reinforcement Learning".

MINERVA is an RL agent that answers queries over a knowledge graph of entities and relations. Starting from an entity node, MINERVA learns to navigate the graph, conditioned on the input query, until it reaches the answer entity. For example, given the query (Colin Kaepernick, PLAYERHOMESTADIUM, ?), MINERVA takes the highlighted path in the knowledge graph below. Note: only the solid edges are observed in the graph; the dashed edges are unobserved. (GIF courtesy of Bhuvi Gupta)

Requirements

To install the various Python dependencies (including TensorFlow):

pip install -r requirements.txt

Training

Training MINERVA is easy! The hyperparameter configs for each experiment are in the configs directory. To start a particular experiment, just run

sh run.sh configs/${dataset}.sh

where ${dataset}.sh is the name of the config file. For example,

sh run.sh configs/countries_s3.sh

Testing

We are also releasing pre-trained models so that you can use MINERVA directly for query answering. They are located in the saved_models directory. To load a model, set load_model to 1 in the config file (the default value is 0) and point model_load_dir to the saved model. For example, in configs/countries_s2.sh, set

load_model=1
model_load_dir="saved_models/countries_s2/model.ckpt"

Output

The code outputs the evaluation of MINERVA on the provided datasets. The metrics used for evaluation are Hits@{1,3,5,10,20} and MRR (which, in the case of the Countries datasets, is replaced by AUC-PR). Along with this, the code also writes the answers MINERVA reached to a file.
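
For reference, here is a minimal sketch of how Hits@{1,3,5,10,20} and MRR can be computed from the rank assigned to the correct answer of each query (a hypothetical helper for illustration, not the repository's evaluation code):

import numpy as np

def hits_and_mrr(ranks, ks=(1, 3, 5, 10, 20)):
    # `ranks` holds the 1-based rank of the true answer entity for each query.
    ranks = np.asarray(ranks, dtype=np.float64)
    hits = {k: float(np.mean(ranks <= k)) for k in ks}  # fraction of queries ranked in the top k
    mrr = float(np.mean(1.0 / ranks))                   # mean reciprocal rank
    return hits, mrr

# Example: ranks of the answer entity for four queries.
print(hits_and_mrr([1, 4, 2, 15]))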

Code Structure

The structure of the code is as follows

Code
├── Model
│    ├── Trainer
│    ├── Agent
│    ├── Environment
│    └── Baseline
├── Data
│    ├── Grapher
│    ├── Batcher
│    └── Data Preprocessing scripts
│            ├── create_vocab
│            ├── create_graph
│            ├── Trainer
│            └── Baseline

Data Format

To run MINERVA on a custom graph-based dataset, you need the graph and the queries as triples of the form (e1, r, e2), where e1 and e2 are nodes connected by the edge r. The vocab of the dataset can be created using the create_vocab.py file found in data/data preprocessing scripts. The vocab needs to be stored in JSON format as {'entity/relation': ID}; a minimal sketch of this step is shown after the directory listing below. The following shows the directory structure of the Kinship dataset.

kinship
    ├── graph.txt
    ├── train.txt
    ├── dev.txt
    ├── test.txt
    └── Vocab
            ├── entity_vocab.json
            └── relation_vocab.json
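
As a reference, here is a minimal sketch of the vocab-building step, assuming tab-separated (e1, r, e2) triples in graph.txt; the actual create_vocab.py may differ in details (for example, it may reserve special tokens such as padding or no-op relations):

import json

entity_vocab, relation_vocab = {}, {}

# Assign consecutive integer IDs to every entity and relation seen in the graph.
with open("kinship/graph.txt") as f:
    for line in f:
        e1, r, e2 = line.strip().split("\t")
        entity_vocab.setdefault(e1, len(entity_vocab))
        entity_vocab.setdefault(e2, len(entity_vocab))
        relation_vocab.setdefault(r, len(relation_vocab))

with open("kinship/vocab/entity_vocab.json", "w") as f:
    json.dump(entity_vocab, f)
with open("kinship/vocab/relation_vocab.json", "w") as f:
    json.dump(relation_vocab, f)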

Citation

If you use this code, please cite our paper

@inproceedings{minerva,
  title = {Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning},
  author = {Das, Rajarshi and Dhuliawala, Shehzaad and Zaheer, Manzil and Vilnis, Luke and Durugkar, Ishan and Krishnamurthy, Akshay and Smola, Alex and McCallum, Andrew},
  booktitle = {ICLR},
  year = 2018
}
Comments
  • Reproducing NELL-995 MAP Results

    Thanks very much for releasing the code accompanying the paper. It definitely makes reproducing the experiments a lot easier. I've been playing with the codebase and have some questions about reproducing the NELL-995 experiments.

    The codebase does not contain the configuration file for the NELL-995 experiments, nor does it contain the evaluation scripts for computing MAP. (Maybe you missed them in the release?) I used the hyperparameters reported in "Experimental Details, section 2.3" and the appendix section 8.1 of the paper, which results in the following configuration file:

    data_input_dir="datasets/data_preprocessed/nell-995/"
    vocab_dir="datasets/data_preprocessed/nell-995/vocab"
    total_iterations=1000
    path_length=3
    hidden_size=400
    embedding_size=200
    batch_size=64
    beta=0.05
    Lambda=0.02
    use_entity_embeddings=1
    train_entity_embeddings=1
    train_relation_embeddings=1
    base_output_dir="output/nell-995/"
    load_model=1
    model_load_dir="saved_models/nell-995/model.ckpt"
    

    I ran train & test as specified in the README and evaluated the decoding results using the MAP computation script provided by the DeepPath paper. (I assumed that the experiment setup is exactly the same as in the DeepPath paper, since you compared head-to-head with them.)
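
    For context, here is a minimal sketch of the standard mean-average-precision definition I assumed (not the DeepPath evaluation script itself):

    import numpy as np

    def average_precision(ranked_tails, correct_tails):
        # Precision is accumulated at every rank position where a correct tail appears.
        hits, precisions = 0, []
        for i, tail in enumerate(ranked_tails, start=1):
            if tail in correct_tails:
                hits += 1
                precisions.append(hits / i)
        return float(np.mean(precisions)) if precisions else 0.0

    def mean_average_precision(queries):
        # `queries` is a list of (ranked_tails, correct_tails) pairs, one per query.
        return float(np.mean([average_precision(r, c) for r, c in queries]))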

    However, the MAP results I obtained this way are significantly lower than the reported results.

    MINERVA concept_athleteplaysinleague MAP: 0.810746658312 (380 queries evaluated)
    MINERVA concept_athleteplaysforteam MAP: 0.649309434089 (386 queries evaluated)
    MINERVA concept_organizationheadquarteredincity MAP: 0.944878371403 (246 queries evaluated)
    MINERVA concept_athleteplayssport MAP: 0.919186046512 (602 queries evaluated)
    MINERVA concept_personborninlocation MAP: 0.775690686628 (192 queries evaluated)
    MINERVA concept_teamplayssport MAP: 0.762183612184 (111 queries evaluated)
    MINERVA concept_athletehomestadium MAP: 0.519108225108 (200 queries evaluated)
    MINERVA concept_worksfor MAP: 0.663530575465 (420 queries evaluated)
    

    I tried a few variations on the embedding dimensions and also tried freezing the entity embeddings, yet none of the trials produced numbers close to the results tabulated in the MINERVA paper.

    Would you please clarify the experiment setup for computing MAP? I want to make sure I set the hyperparameters to the correct values. Also, the DeepPath paper used a relation-dependent underlying graph per relation during inference. Did you also vary the graph per relation, or did you use a base graph for all relations as you did for the other datasets?

    Many thanks.

    opened by todpole3 14
  • Testing problem

    Thanks for publishing the codebase. I have a question about testing. I set it up exactly as you described in the README documentation. The remaining testing arguments are the same as for training, for example 'load_model=1' and 'model_load_dir="/output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt"'. But I got the following error:

    INFO:tensorflow:Restoring parameters from /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt
    INFO:tensorflow:Restoring parameters from /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt
    07/06/2019 03:42:44 PM: [ Restoring parameters from /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt ]
    2019-07-06 15:42:44.906846: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at save_restore_tensor.cc:170 : Invalid argument: Unsuccessful TensorSliceReader constructor: Failed to get matching files on /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt: Not found: /output/worksfor/b2b4_3_0.05_100_0.05/model; No such file or directory
    Traceback (most recent call last):
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
        return fn(*args)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt: Not found: /output/worksfor/b2b4_3_0.05_100_0.05/model; No such file or directory
    	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "code/model/trainer.py", line 574, in <module>
        trainer.initialize(restore=save_path, sess=sess)
      File "code/model/trainer.py", line 144, in initialize
        return  self.model_saver.restore(sess, restore)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1802, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 900, in run
        run_metadata_ptr)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1135, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
        run_metadata)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt: Not found: /output/worksfor/b2b4_3_0.05_100_0.05/model; No such file or directory
    	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
    
    Caused by op 'save/RestoreV2', defined at:
      File "code/model/trainer.py", line 574, in <module>
        trainer.initialize(restore=save_path, sess=sess)
      File "code/model/trainer.py", line 138, in initialize
        self.model_saver = tf.train.Saver(max_to_keep=2)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1338, in __init__
        self.build()
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1347, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1384, in _build
        build_save=build_save, build_restore=build_restore)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 835, in _build_internal
        restore_sequentially, reshape)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 472, in _AddRestoreOps
        restore_sequentially)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 886, in bulk_restore
        return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
        shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
        op_def=op_def)
      File "/home/dr/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
    
    InvalidArgumentError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to get matching files on /output/worksfor/b2b4_3_0.05_100_0.05/model/model.ckpt: Not found: /output/worksfor/b2b4_3_0.05_100_0.05/model; No such file or directory
    	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
    
    opened by DR73 7
  • Reproduce WN18RR results

    I am trying to reproduce the results of WN18RR. I obtain much lower results despite using the predefined configs.

    INFO:root:Hits@1: 0.3500
    07/26/2018 02:40:17 PM: [ Hits@1: 0.3500 ]
    INFO:root:Hits@3: 0.4346
    07/26/2018 02:40:17 PM: [ Hits@3: 0.4346 ]

    In the paper, Hits@1 is 0.413 and Hits@3 is 0.456.

    Moreover, a new paper is claiming a SOTA result relative to these lower numbers, which makes this very confusing... https://arxiv.org/pdf/1802.04394.pdf

    opened by FredericGodin 7
  • Table 4 experiments

    Hi @rajarshd, @shehzaadzd,

    For all the baseline methods, do you train on train.txt or on both train.txt and graph.txt? Do you have settings to reproduce the results? (e.g., the Neural LP result is different from the one in their paper for FB15k-237)

    Thanks!

    opened by posenhuang 7
  • Is the config file you provide the optimal parameters?

    Thanks for reading my question! Is the config file you provide the optimal parameters? I ran an experiment on the FB15K-237 dataset with the default config file you provide, and the results seem quite different from the results in your paper. These are my results:

    INFO:root:Hits@1: 0.1069
    04/29/2018 12:23:52 AM: [ Hits@1: 0.1069 ]
    INFO:root:Hits@3: 0.1904
    04/29/2018 12:23:52 AM: [ Hits@3: 0.1904 ]
    INFO:root:Hits@5: 0.2314
    04/29/2018 12:23:52 AM: [ Hits@5: 0.2314 ]
    INFO:root:Hits@10: 0.2864
    04/29/2018 12:23:52 AM: [ Hits@10: 0.2864 ]
    INFO:root:Hits@20: 0.3413
    04/29/2018 12:23:52 AM: [ Hits@20: 0.3413 ]
    INFO:root:auc: 0.1655
    04/29/2018 12:23:52 AM: [ auc: 0.1655 ]

    Could you help me reproduce the results from your paper?

    opened by Lee-zix 7
  • grid-world dataset access

    Hi @rajarshd, @shehzaadzd,

    When I try to access the grid-world data, I get the following error. I am using git-lfs version 1.2.

    git lfs fetch --all
    Scanning for all objects ever referenced...
    * 26 objects found
    Fetching objects...
    Git LFS: (0 of 26 files, 26 skipped) 0 B / 6.34 MB, 6.34 MB skipped
    [404] Object does not exist on the server
    [e16e1d42fbd288e20d73337bbce88b29e1d889cfb409de87b87a7d3d3b3c06e5] Object does not exist on the server 
    

    Thanks!

    opened by posenhuang 5
  • Encode the relation and entity?

    Hello, I read your paper and code, and there is one point of confusion.

    Regarding the encoding of relations and entities, here is my understanding: you don't encode them with a separate encoder; instead, you use embedding matrices r and e and look up rows in those matrices according to the relation/entity IDs, and the tf.nn.embedding_lookup() function allows the parameters of the embedding matrices to be trained.
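
    That is, something along the lines of the following sketch (hypothetical sizes, just to illustrate the lookup):

    import tensorflow as tf

    num_entities, num_relations, dim = 100, 10, 50  # hypothetical vocab sizes

    # Trainable embedding matrices: one row per entity/relation ID.
    entity_table = tf.Variable(tf.random.normal([num_entities, dim]))
    relation_table = tf.Variable(tf.random.normal([num_relations, dim]))

    # IDs come from the vocab/batcher; embedding_lookup is a differentiable
    # row-gather, so gradients flow back into the selected rows during training.
    entity_ids = tf.constant([42, 5, 99])
    relation_ids = tf.constant([3, 7, 1])
    entity_emb = tf.nn.embedding_lookup(entity_table, entity_ids)
    relation_emb = tf.nn.embedding_lookup(relation_table, relation_ids)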

    Is that right?

    opened by ProQianXiao 4
  • train/dev/test split

    Why are your dev triples included in the training data?

    code/data/preprocessing_scripts/nell.py:

    out_file.write(e1+'\t'+r+'\t'+e2+'\n')
    if np.random.normal() > 0.2:
        dev.write(e1+'\t'+r+'\t'+e2+'\n')

    Theoretically, you are supposed to split the data into 2 sets (train/test) or 3 (train/dev/test) without overlap. Please explain the reasoning behind this. Thank you.
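
    For reference, a minimal sketch of a non-overlapping split over triples (hypothetical file names, not the repository's preprocessing script):

    import random

    random.seed(0)
    with open("nell/raw_triples.txt") as src, \
         open("nell/train.txt", "w") as train, \
         open("nell/dev.txt", "w") as dev:
        for line in src:
            # Each triple goes to exactly one split, so train and dev never overlap.
            (dev if random.random() < 0.2 else train).write(line)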

    opened by ghost 4
  • A whole solution for question-answering?

    I have not found natural language questions in the datasets, so I think I should first turn a natural language question into a logical form like (e1, r, ?). Am I right? Thank you very much!

    opened by guotong1988 4
  • The accuracy can be better after fine-tuning the function `calc_cum_discounted_reward` as follows

    The code is at https://github.com/shehzaadzd/MINERVA/blob/d2f44ad7b48490fe73627cdd357e1465d67d9d75/code/model/trainer.py#L182-L183. I modified the code to compute returns in the usual way (just shifting the time index t), and it becomes better. That is, from:

    for t in reversed(range(self.path_length)):
                running_add = self.gamma * running_add + cum_disc_reward[:, t]
                cum_disc_reward[:, t] = running_add
    

    to:

    for t in reversed(range(1, self.path_length)):
                running_add = self.gamma * running_add + cum_disc_reward[:, t]
                cum_disc_reward[:, t-1] = running_add
    

    Thank you! I really appreciate your hard work; I studied the code line by line.
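
    For comparison, here is a self-contained sketch of the textbook discounted-return recursion G_t = r_t + gamma * G_{t+1} over a [batch, path_length] reward array (a hypothetical helper, not the repository's calc_cum_discounted_reward):

    import numpy as np

    def discounted_returns(rewards, gamma):
        # rewards: array of shape [batch, path_length]; returns[:, t] holds G_t.
        returns = np.zeros_like(rewards, dtype=np.float64)
        running_add = np.zeros(rewards.shape[0])
        for t in reversed(range(rewards.shape[1])):
            running_add = gamma * running_add + rewards[:, t]
            returns[:, t] = running_add
        return returns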

    opened by suenpun 2
  • num_rollouts

    Hi, I'm getting the following error:

    File ".../grapher.py", line 65, in return_next_actions
        if entities[j] in all_correct_answers[i/rollouts] and entities[j] != correct_e2:
    TypeError: list indices must be integers or slices, not float

    This function is called in environment.py:

    next_actions = self.grapher.return_next_actions(self.current_entities, self.start_entities, self.query_relation,self.end_entities, self.all_answers, self.current_hop == self.path_len - 1,self.num_rollouts)

    I wonder what the value of num_rollouts should be to prevent i/rollouts from becoming a float (as far as I can see, i is already an integer).
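
    For what it's worth, this looks like a Python 2 vs. Python 3 issue: in Python 3, / always returns a float, so the usual fix is floor division rather than changing num_rollouts (a hedged suggestion, not a confirmed patch):

    i, rollouts = 40, 20
    batch_index = i // rollouts  # floor division keeps the index an integer
    # all_correct_answers[batch_index] is then a valid list index.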

    opened by msaebi 2
  • Is this code an official implementation of the method in the paper "Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks"?

    On the "Papers with Code" website, this GitHub repo is labeled as the official implementation of a paper named "Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks" (https://paperswithcode.com/paper/neuro-symbolic-inductive-logic-programming), which seems to be a mistake.

    opened by yinjc8214 0
  • Export the model using `tf.saved_model` and deploy it to a production environment

    Sorry to bother you. When I try to export the model using tf.saved_model to deploy it to a production environment (TF Serving in Docker), there are some problems. The exported graph format requires a pure inputs-and-outputs signature, but in this model the agent needs to interact with an environment, and all of the code except agent.py is written in NumPy rather than TensorFlow. So if I export only the agent (which may fail), how does the agent interact with the environment? My idea: rewrite the environment in TF... but this is going to be a lot of work.

    Thanks in advance.

    opened by suenpun 1
  • How do you extract the general rules from the trained policy (like the (i) example in Table 8)?

    Thanks for your code! But I have some doubts about the general rules you display. Are they just similar results picked from the test set, or did you come up with some guidelines for inducing them?

    opened by xlk369293141 0
  • Unable to load the saved_model

    I am running the countries_s1 model but am getting the following error:

    Executing python code/model/trainer.py --base_output_dir output/countries_s1/ --path_length 2 --hidden_size 25 --embedding_size 25     --batch_size 256 --beta 0.05 --Lambda 0.05 --use_entity_embeddings 1     --train_entity_embeddings 1 --train_relation_embeddings 1     --data_input_dir datasets/data_preprocessed/countries_s1/ --vocab_dir datasets/data_preprocessed/countries_s1/vocab --model_load_dir saved_models/countries_s1/model.ckpt --load_model 1 --total_iterations 1000 --nell_evaluation 0
    HEY
    Arguments:
                             LSTM_layers : 1
                                  Lambda : 0.05
                         base_output_dir : output/countries_s1/
                              batch_size : 256
                                    beta : 0.05
                            create_vocab : 0
                          data_input_dir : datasets/data_preprocessed/countries_s1/
                          embedding_size : 25
                              eval_every : 100
                                   gamma : 1
                          grad_clip_norm : 5
                             hidden_size : 25
                              input_file : train.txt
                             input_files : ['datasets/data_preprocessed/countries_s1//train.txt']
                            l2_reg_const : 0.01
                           learning_rate : 0.001
                              load_model : True
                                 log_dir : ./logs/
                           log_file_name : output/countries_s1//3f71_2_0.05_100_0.05/log.txt
                         max_num_actions : 200
                               model_dir : output/countries_s1//3f71_2_0.05_100_0.05/model/
                          model_load_dir : saved_models/countries_s1/model.ckpt
                         negative_reward : 0
                         nell_evaluation : 0
                            num_rollouts : 20
                              output_dir : output/countries_s1//3f71_2_0.05_100_0.05
                             output_file :
                             path_length : 2
                        path_logger_file : output/countries_s1//3f71_2_0.05_100_0.05
                                    pool : max
                         positive_reward : 1.0
            pretrained_embeddings_action :
            pretrained_embeddings_entity :
                           test_rollouts : 100
                        total_iterations : 1000
                 train_entity_embeddings : True
               train_relation_embeddings : True
                   use_entity_embeddings : True
                               vocab_dir : datasets/data_preprocessed/countries_s1/vocab
    INFO:root:reading vocab files...
    01/28/2019 01:56:20 AM: [ reading vocab files... ]
    INFO:root:Reading mid to name map
    01/28/2019 01:56:20 AM: [ Reading mid to name map ]
    INFO:root:Done..
    01/28/2019 01:56:20 AM: [ Done.. ]
    INFO:root:Total number of entities 273
    01/28/2019 01:56:20 AM: [ Total number of entities 273 ]
    INFO:root:Total number of relations 6
    01/28/2019 01:56:20 AM: [ Total number of relations 6 ]
    INFO:root:Skipping training
    01/28/2019 01:56:20 AM: [ Skipping training ]
    INFO:root:Loading model from saved_models/countries_s1/model.ckpt
    01/28/2019 01:56:20 AM: [ Loading model from saved_models/countries_s1/model.ckpt ]
    Reading vocab...
    batcher loaded
    KG constructed
    Reading vocab...
    Contains full graph
    batcher loaded
    KG constructed
    Reading vocab...
    Contains full graph
    batcher loaded
    KG constructed
    2019-01-28 01:56:21.112771: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2019-01-28 01:56:21.112794: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2019-01-28 01:56:21.112801: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2019-01-28 01:56:21.112806: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    2019-01-28 01:56:21.112812: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    2019-01-28 01:56:21.988648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
    name: GeForce GTX 1080 Ti
    major: 6 minor: 1 memoryClockRate (GHz) 1.582
    pciBusID 0000:06:00.0
    Total memory: 10.91GiB
    Free memory: 10.75GiB
    2019-01-28 01:56:21.988740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
    2019-01-28 01:56:21.988757: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y
    2019-01-28 01:56:21.988799: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
    INFO:root:Creating TF graph...
    01/28/2019 01:56:22 AM: [ Creating TF graph... ]
    INFO:root:TF Graph creation done..
    01/28/2019 01:56:23 AM: [ TF Graph creation done.. ]
    INFO:tensorflow:Restoring parameters from saved_models/countries_s1/model.ckpt
    INFO:tensorflow:Restoring parameters from saved_models/countries_s1/model.ckpt
    01/28/2019 01:56:23 AM: [ Restoring parameters from saved_models/countries_s1/model.ckpt ]
    2019-01-28 01:56:23.830867: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
    2019-01-28 01:56:23.846931: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847248: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847249: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847266: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847290: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847248: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847342: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847358: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847437: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847529: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847785: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847753: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847603: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847794: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847881: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.847816: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.848095: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.848317: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.848375: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.854893: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.854890: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.854966: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.854933: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855353: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855395: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam not found in checkpoint
    2019-01-28 01:56:23.855675: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855750: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855791: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855820: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.855785: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856273: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856468: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856486: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856592: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856623: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    2019-01-28 01:56:23.856677: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
    Traceback (most recent call last):
      File "code/model/trainer.py", line 573, in <module>
        trainer.initialize(restore=save_path, sess=sess)
      File "code/model/trainer.py", line 143, in initialize
        return  self.model_saver.restore(sess, restore)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1560, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
        run_metadata_ptr)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1124, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
        options, run_metadata)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.NotFoundError: Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
             [[Node: save/RestoreV2_17/_3 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_80_save/RestoreV2_17", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
    
    Caused by op u'save/RestoreV2_9', defined at:
      File "code/model/trainer.py", line 573, in <module>
        trainer.initialize(restore=save_path, sess=sess)
      File "code/model/trainer.py", line 137, in initialize
        self.model_saver = tf.train.Saver(max_to_keep=2)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1140, in __init__
        self.build()
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1172, in build
        filename=self._filename)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 688, in build
        restore_sequentially, reshape)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
        tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 247, in restore_op
        [spec.tensor.dtype])[0])
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 663, in restore_v2
        dtypes=dtypes, name=name)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
        op_def=op_def)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "/home/ext_user2/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
    
    NotFoundError (see above for traceback): Key entity_lookup_table/entity_lookup_table/Adam_1 not found in checkpoint
             [[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]
             [[Node: save/RestoreV2_17/_3 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_80_save/RestoreV2_17", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
    
    
    opened by kunal017 4
  • parameter settings used in paper

    Hello, thank you for your code; it works smoothly without any problems. I found that the parameter settings in your code are different from the ones in your paper, and the MAP results were much lower. Would you please post the parameter settings used for the experiments in your paper? I would really appreciate it if you could provide the config.sh file for the paper's experiments, for example batch_size, beta, Lambda, total_iterations, use_entity_embeddings, train_entity_embeddings, and so on:

    data_input_dir="datasets/data_preprocessed/athleteplaysinleague/"
    vocab_dir="datasets/data_preprocessed/athleteplaysinleague/vocab"
    total_iterations=120
    path_length=3
    hidden_size=400
    embedding_size=200
    batch_size=128
    beta=0.05
    Lambda=0.05
    use_entity_embeddings=1
    train_entity_embeddings=1
    train_relation_embeddings=1
    base_output_dir="output/athleteplaysinleague/"
    load_model=0
    model_load_dir="/home/sdhuliawala/logs/RL-Path-RNN/wn18rrr/edb6_3_0.05_10_0.05/model/model.ckpt"
    nell_evaluation=1

    opened by ghost 0
Owner
Shehzaad Dhuliawala