This repository contains implementations and illustrative code to accompany DeepMind publications

Overview

DeepMind Research

This repository contains implementations and illustrative code to accompany DeepMind publications. Along with publishing papers to accompany research conducted at DeepMind, we release open-source environments, data sets, and code to enable the broader research community to engage with our work and build upon it, with the ultimate goal of accelerating scientific progress to benefit society. For example, you can build on our implementations of the Deep Q-Network or Differentiable Neural Computer, or experiment in the same environments we use for our research, such as DeepMind Lab or StarCraft II.

If you enjoy building tools, environments, software libraries, and other infrastructure of the kind listed below, you can view open positions to work in related areas on our careers page.

For a full list of our publications, please see https://deepmind.com/research/publications/

Projects

Disclaimer

This is not an official Google product.

Comments
  • DM21 convergence problem for elongated bonds

    I am a postdoc researcher with Prof. Paul Zimmerman at the University of Michigan. We are particularly interested in exact exchange-correlation potentials, so I became interested in your recent paper and was trying to reproduce some of the results using the DM21 functional. Strangely, I found that for both H2 and F2 dissociation, the convergence fails from precisely 2.3 Angstroms onwards. This problem is invariant to changes of basis set or grid size. However, DM21m and DM21mc do not have convergence issues, although the results are not very good. I am attaching the input, the corresponding output (h2_2.3.txt), and a plot of the H2 dissociation curve in the cc-pVQZ basis.

    I am wondering whether we are missing any crucial parameter in the input? How do we specify the fractional occupancy?

    Below is the input:

    import density_functional_approximation_dm21 as dm21
    from pyscf import gto
    from pyscf import dft

    mol = gto.Mole()
    mol.atom = '''
    H 0.0 0.0 0.0
    H 0.0 0.0 2.3
    '''
    mol.basis = 'cc-pVDZ'
    mol.build()

    mf = dft.RKS(mol)
    mf.xc = 'B3LYP'
    mf.verbose = 0
    mf.run()
    dm0 = mf.make_rdm1()

    mf = dft.RKS(mol)
    mf._numint = dm21.NeuralNumInt(dm21.Functional.DM21)
    mf.grids.level = 5
    mf.conv_tol = 1E-6
    mf.conv_tol_grad = 1E-3
    mf.verbose = 4

    mf.kernel(dm0=dm0)

    Attachments: h2_pes (plot), h2_2.3.txt (output)
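    For reference, a minimal sketch of scanning the dissociation curve with the same setup; the bond-length grid and the loop are illustrative, not taken from the original input:

    # Hypothetical PES scan over H-H bond lengths with the DM21 functional,
    # reusing a B3LYP density as the initial guess as in the input above.
    import density_functional_approximation_dm21 as dm21
    from pyscf import gto, dft

    for r in [0.7, 1.0, 1.5, 2.0, 2.3, 2.6, 3.0]:  # bond lengths in Angstrom
        mol = gto.M(atom=f'H 0 0 0; H 0 0 {r}', basis='cc-pVDZ')
        mf0 = dft.RKS(mol)
        mf0.xc = 'B3LYP'
        mf0.kernel()

        mf = dft.RKS(mol)
        mf._numint = dm21.NeuralNumInt(dm21.Functional.DM21)
        mf.conv_tol = 1e-6
        mf.conv_tol_grad = 1e-3
        e = mf.kernel(dm0=mf0.make_rdm1())
        print(r, e, mf.converged)  # mf.converged flags the failures past 2.3 A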

    opened by soumitribedi 14
  • simple flag train -> rank error

    Python version = 3.6 and dependencies are as in requirements.txt.

    I ran the following command to train the cloth simple-flag model:

    (deepmind_env) C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets>python -m run_model --mode=train=cloth --checkpoint_dir=C:\Users\zcemg08\Documents\deepmind\data\cloth\chk --dataset_dir=C:\Users\zcemg08\Documents\deepmind\data\cloth

    I get the following error, which is about an invalid tensor rank:

    2021-12-23 12:22:58.507895: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
    2021-12-23 12:22:58.513137: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    FATAL Flags parsing error: flag --mode=train=cloth: value should be one of <train|eval>
    Pass --helpshort or --helpfull to see help on flags.

    (deepmind_env) C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets>python -m run_model --mode=train --model=cloth --checkpoint_dir=C:\Users\zcemg08\Documents\deepmind\data\cloth\chk --dataset_dir=C:\Users\zcemg08\Documents\deepmind\data\cloth
    2021-12-23 12:23:53.536495: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
    2021-12-23 12:23:53.542641: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    WARNING:tensorflow:From C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
    Instructions for updating: If using Keras pass *_constraint arguments to layers.
    W1223 12:23:56.040745 3876 deprecation.py:506] From C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
    Instructions for updating: If using Keras pass *_constraint arguments to layers.
    WARNING:tensorflow:From C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\dataset.py:80: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
    Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
    W1223 12:23:56.858920 3876 deprecation.py:323] From C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\dataset.py:80: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
    Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
    Traceback (most recent call last):
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1607, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 1 but is rank 2 for 'Model/loss/Unique' (op: 'Unique') with input shapes: [9084,2].

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\run_model.py", line 131, in <module>
        app.run(main)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\absl\app.py", line 312, in run
        _run_main(main, args)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\absl\app.py", line 258, in _run_main
        sys.exit(main(argv))
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\run_model.py", line 126, in main
        learner(model, params)
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\run_model.py", line 64, in learner
        loss_op = model.loss(inputs)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\sonnet\python\modules\util.py", line 746, in eager_test
        return method(*args, **kwargs)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\sonnet\python\modules\util.py", line 866, in call_method
        out_ops = method(*args, **kwargs)
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\cloth_model.py", line 77, in loss
        graph = self._build_graph(inputs, is_training=True)
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\cloth_model.py", line 49, in _build_graph
        senders, receivers = common.triangles_to_edges(inputs['cells'])
      File "C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets\common.py", line 47, in triangles_to_edges
        unique_edges = tf.bitcast(tf.unique(packed_edges)[0], tf.int32)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1613, in unique
        return gen_array_ops.unique(x, out_idx, name)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 11547, in unique
        "Unique", x=x, out_idx=out_idx, name=name)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1770, in __init__
        control_input_ops)
      File "C:\Users\zcemg08\Miniconda3\envs\deepmind_env\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1610, in _create_c_op
        raise ValueError(str(e))
    ValueError: Shape must be rank 1 but is rank 2 for 'Model/loss/Unique' (op: 'Unique') with input shapes: [9084,2].

    (deepmind_env) C:\Users\zcemg08\PycharmProjects\deepmind-research\meshgraphnets>
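    For reference, the failing line appears to pack each (sender, receiver) pair into a single int64 key before calling tf.unique. Below is a minimal NumPy sketch of that packing trick, under the assumption (not a confirmed diagnosis) that the rank-2 shape arises when the node indices are not int32:

    import numpy as np

    # Toy triangle list: each row is one triangle's three node indices.
    cells = np.array([[0, 1, 2], [1, 2, 3]], dtype=np.int32)

    # Collect the three undirected edges of every triangle.
    edges = np.concatenate([cells[:, 0:2], cells[:, 1:3], cells[:, [2, 0]]], axis=0)
    edges = np.sort(edges, axis=1)  # store as (min, max) so duplicates match

    # Pack each int32 pair into one int64 key; this mirrors what tf.bitcast does.
    packed = edges.view(np.int64)[:, 0]   # shape (E,), rank 1 as tf.unique expects
    unique_edges = np.unique(packed).view(np.int32).reshape(-1, 2)
    print(unique_edges)

    # If `cells` were int64 instead, the view would not collapse the last axis and
    # the packed array would stay rank 2 -- the same shape mismatch as in the error.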

    opened by SergeyAnufriev 12
  • Implementation of World Edges in MeshGraphNets

    The provided MeshGraphNets implementation seems to be a simplified one. I have two questions about it:

    1. Do the world edges only connect nodes from the cloth, or can they also connect a cloth node to an obstacle node? In the latter case, we should be able to model both collision and self-collision, right?
    2. I am not familiar with TensorFlow, so I'd like to ask whether an implementation like the following is OK (see the NumPy sketch after the code).
    collision_radius = 0.05
    model_collision = False 
    if 'MODEL_COLLISION' in os.environ and os.environ['MODEL_COLLISION'] == 'True':
        model_collision = True 
        print('world edges added')
    
    class Model(snt.AbstractModule):
      """Model for static cloth simulation."""
    
      def __init__(self, learned_model, name='Model'):
        super(Model, self).__init__(name=name)
        with self._enter_variable_scope():
          self._learned_model = learned_model
          self._output_normalizer = normalization.Normalizer(
              size=3, name='output_normalizer')
          self._node_normalizer = normalization.Normalizer(
              size=3+common.NodeType.SIZE, name='node_normalizer')
          self._edge_normalizer = normalization.Normalizer(
              size=7, name='edge_normalizer')  # 2D coord + 3D coord + 2*length = 7
          self._world_edge_normalizer = normalization.Normalizer(
              size=4, name='world_edge_normalizer') # 3D coord + length = 4
    
      def _build_graph(self, inputs, is_training):
        """Builds input graph."""
        # construct graph nodes
        velocity = inputs['world_pos'] - inputs['prev|world_pos']
        node_type = tf.one_hot(inputs['node_type'][:, 0], common.NodeType.SIZE)
        node_features = tf.concat([velocity, node_type], axis=-1)
    
        # construct graph edges
        senders, receivers = common.triangles_to_edges(inputs['cells'])
        relative_world_pos = (tf.gather(inputs['world_pos'], senders) -
                              tf.gather(inputs['world_pos'], receivers))
        relative_mesh_pos = (tf.gather(inputs['mesh_pos'], senders) -
                             tf.gather(inputs['mesh_pos'], receivers))
        edge_features = tf.concat([
            relative_world_pos,
            tf.norm(relative_world_pos, axis=-1, keepdims=True),
            relative_mesh_pos,
            tf.norm(relative_mesh_pos, axis=-1, keepdims=True)], axis=-1)
    
        mesh_edges = core_model.EdgeSet(
            name='mesh_edges',
            features=self._edge_normalizer(edge_features, is_training),
            receivers=receivers,
            senders=senders)
    
        # construct distance matrix 
        world_pos = inputs['world_pos']
        dists = tf.sqrt((tf.reduce_sum(tf.pow(world_pos, 2), axis=-1, keepdims=True) +
                tf.reduce_sum(tf.pow(world_pos, 2), axis=-1) - 
                2 * tf.matmul(world_pos, tf.transpose(world_pos, (1, 0)))))
        meshmask_indices = tf.stack([senders, receivers], axis=-1)
        meshmask = tf.scatter_nd(meshmask_indices, tf.multiply(tf.ones(tf.shape(senders)[0]), 1e10), dists.shape)
        meshmask_indices2 = tf.stack([receivers, senders], axis=-1)
        meshmask2 = tf.scatter_nd(meshmask_indices2, tf.multiply(tf.ones(tf.shape(senders)[0]), 1e10), dists.shape)
    
        dists_final = tf.add(tf.add(tf.add(dists, meshmask), meshmask2), tf.multiply(tf.eye(tf.shape(dists)[0]), 1e10))
        dists_mask = tf.where(dists_final < collision_radius)
        u = tf.gather(dists_mask, 0, axis=-1); v = tf.gather(dists_mask, 1, axis=-1)
        world_senders = tf.concat([u, v], axis=0); world_receivers = tf.concat([v, u], axis=0)
    
        w_relative_world_pos = (tf.gather(inputs['world_pos'], world_senders) - 
                               tf.gather(inputs['world_pos'], world_receivers))
        sess = tf.Session()
        print(sess.run(world_senders).shape[0], "world senders exist")
    
        world_edge_features = tf.concat([
            w_relative_world_pos, 
            tf.norm(w_relative_world_pos, axis=-1, keepdims=True)], axis=-1)
    
        world_edges = core_model.EdgeSet(
            name='world_edges', 
            features=self._world_edge_normalizer(world_edge_features, is_training), 
            receivers=world_receivers, 
            senders=world_senders)
    
        return core_model.MultiGraph(
            node_features=self._node_normalizer(node_features, is_training),
            edge_sets=[mesh_edges, world_edges] if model_collision else [mesh_edges])
    

    Highly appreciate your thoughts, thanks! @tobiaspfaff @diegolascasas
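    For reference, a minimal NumPy sketch of the same world-edge selection: pairwise distances thresholded at the collision radius, excluding self-pairs and existing mesh edges. It is meant only as a sanity check of the logic above, not as the paper's exact world-edge definition:

    import numpy as np

    def world_edges(world_pos, senders, receivers, radius=0.05):
        """Return (senders, receivers) for node pairs closer than `radius`,
        excluding self-pairs and pairs already connected by a mesh edge."""
        diff = world_pos[:, None, :] - world_pos[None, :, :]
        dists = np.linalg.norm(diff, axis=-1)          # (n, n) pairwise distances
        mask = dists < radius
        np.fill_diagonal(mask, False)                  # drop self-pairs
        mask[senders, receivers] = False               # drop existing mesh edges
        mask[receivers, senders] = False               # ... in both directions
        ws, wr = np.nonzero(mask)                      # symmetric, both directions kept
        return ws, wr

    # Tiny example: three nodes, one mesh edge (0, 1); nodes 1 and 2 are close.
    pos = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.12, 0.0, 0.0]])
    print(world_edges(pos, np.array([0]), np.array([1])))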

    opened by FishWoWater 11
  • Generation of the profile_with_prior in alphafold

    Hi! Thanks for publishing your code and article; they are really useful and interesting. I'm working on a feature-generation pipeline and I'm stuck with the profile_with_prior feature. Can you please explain the formula in the referenced article, or how you calculated this feature? As I understand it, we need to calculate the distribution of amino acids for every residue (a position probability matrix, in fact) based on the MSA we get from HHblits, and then make some calculations according to the proposed formula?
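    A minimal sketch of the per-residue amino-acid distribution (position probability matrix) referred to above, computed from an MSA with a simple uniform pseudocount; the actual profile_with_prior feature additionally applies a prior, which is not reproduced here:

    import numpy as np

    AMINO_ACIDS = 'ARNDCQEGHILKMFPSTWYV'  # 20 standard residues; gaps ignored here

    def position_probability_matrix(msa, pseudocount=1.0):
        """msa: list of equal-length aligned sequences. Returns (L, 20) probabilities."""
        length = len(msa[0])
        counts = np.full((length, len(AMINO_ACIDS)), pseudocount)
        for seq in msa:
            for i, aa in enumerate(seq):
                j = AMINO_ACIDS.find(aa)
                if j >= 0:                  # skip gaps / unknown characters
                    counts[i, j] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    profile = position_probability_matrix(['MKV-A', 'MRV-A', 'MKI-A'])
    print(profile.shape)   # (5, 20), each row sums to 1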

    opened by artemmam 11
  • BYOL convert weights to PyTorch

    Hi,

    Thanks for open-sourcing this; it is very helpful. I am in the process of converting the BYOL R50x1 weights to PyTorch. I have been able to get the dimensions of the weights to match the standard torchvision R50 model. When I evaluate the PyTorch weights, I get ~70% on the ImageNet val set. Any idea what I may be missing? I'm not sure, but 'SAME' padding in the conv and max-pool layers is my primary suspect right now. Also, although it looks normal to me, is there any caveat in the input image pre-processing?
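    On the 'SAME' padding suspicion: a minimal sketch (assuming a torchvision-style stem, not the original conversion code) of reproducing TF 'SAME' padding in PyTorch with explicit asymmetric padding. The off-by-one pad placement matters for the stride-2 convolution and max pool:

    import torch
    import torch.nn.functional as F

    def same_pad(x, kernel, stride):
        """Pad an NCHW tensor like TF 'SAME' for a square kernel and stride."""
        h, w = x.shape[-2:]
        pad_h = max((-(-h // stride) - 1) * stride + kernel - h, 0)  # ceil division
        pad_w = max((-(-w // stride) - 1) * stride + kernel - w, 0)
        # TF puts the extra pixel on the bottom/right; PyTorch's symmetric
        # padding=k//2 puts it on both sides, which shifts stride-2 outputs.
        return F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])

    x = torch.randn(1, 3, 224, 224)
    conv = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0, bias=False)
    y = conv(same_pad(x, kernel=7, stride=2))
    print(y.shape)  # torch.Size([1, 64, 112, 112]), i.e. ceil(224 / 2) as in TF
    z = F.max_pool2d(same_pad(y, kernel=3, stride=2), kernel_size=3, stride=2)
    print(z.shape)  # torch.Size([1, 64, 56, 56])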

    opened by ajtejankar 6
  • rl_unplugged/rwrl_d4pg.ipynb does not reproduce

    The notebook is easy to get running, kudos for that. However, the results do not match those reported in the repository.

    When I run it the output of "Training Loop" is:

    [Learner] Critic Loss = 4.062 | Policy Loss = 0.500 | Steps = 1 | Walltime = 0
    [Learner] Critic Loss = 3.844 | Policy Loss = 0.269 | Steps = 46 | Walltime = 3.173
    [Learner] Critic Loss = 3.770 | Policy Loss = 0.296 | Steps = 92 | Walltime = 4.182
    

    and the "Evaluation":

    [Evaluation] Episode Length = 1000 | Episode Return = 68.235 | Episodes = 1 | Steps = 1000 | Steps Per Second = 420.795
    [Evaluation] Episode Length = 1000 | Episode Return = 73.514 | Episodes = 2 | Steps = 2000 | Steps Per Second = 448.120
    [Evaluation] Episode Length = 1000 | Episode Return = 71.517 | Episodes = 3 | Steps = 3000 | Steps Per Second = 463.122
    [Evaluation] Episode Length = 1000 | Episode Return = 74.285 | Episodes = 4 | Steps = 4000 | Steps Per Second = 464.442
    [Evaluation] Episode Length = 1000 | Episode Return = 72.500 | Episodes = 5 | Steps = 5000 | Steps Per Second = 459.378
    

    Is this expected?

    opened by pmineiro 6
  • BoxBath dataset - Learning to Simulate Complex Physics with Graph Networks

    Hello, I'm currently working on my thesis on neural networks, and I want to recreate your BoxBath scene (a cube interacting with water), but it seems there is no BoxBath dataset in your Google API storage. Could you provide the dataset you used for the BoxBath domain?

    Thanks a lot!

    opened by JohnKond 5
  • Learning Mesh-Based Simulation with Graph Networks Code

    Do you plan to release code for MeshGraphNets (Learning Mesh-Based Simulation with Graph Networks https://arxiv.org/pdf/2010.03409.pdf) in the near future?

    Thanks for the awesome work.

    opened by asadabbas09 5
  • Evaluating AlphaFold using Windows 10, WSL and Ubuntu

    Hi, I'm a beginner trying to get AlphaFold to work in Windows 10. I have WSL enabled and Ubuntu installed.

    I tried to run the script with the requirements, and ran into problems, so I installed each python package individually in my virtual environment, and then commented out the packages in the requirements file, so that it would get past this stage.

    I then ran the bash script in the Windows Command Prompt using the command below, and received the following error traceback:

    `bash alphafold_casp13/run_eval.sh`
    
    
    (alphafold_env) C:\Users\....\deepmind-research-master>bash alphafold_casp13/run_eval.sh
    Collecting wheel Using cached wheel-0.35.1-py2.py3-none-any.whl (33 kB)
    Installing collected packages: wheel
    Successfully installed wheel-0.35.1
    Saving output to /home/user/contacts_T1019s2_2020_09_06_12_10_32/
    Launching all models for replica 0
    Launching all models for replica 1
    Launching all models for replica 2
    Launching all models for replica 3
    All models running, waiting for them to complete
    Traceback (most recent call last):
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
      return _run_code(code, main_globals, None,
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
      exec(code, run_globals)
    Traceback (most recent call last):
    Traceback (most recent call last):
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    Traceback (most recent call last):
      return _run_code(code, main_globals, None,
      return _run_code(code, main_globals, None,
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
      return _run_code(code, main_globals, None,
    exec(code, run_globals)
    exec(code, run_globals)
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    File "/mnt/c/Users/....g/deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
      exec(code, run_globals)
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    Traceback (most recent call last):
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
      return _run_code(code, main_globals, None,
    from absl import app
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    from absl import app
    from absl import app
    from absl import app
    ModuleNotFoundError: No module named 'absl'
      exec(code, run_globals) 
    ModuleNotFoundError: No module named 'absl'
    ModuleNotFoundError: No module named 'absl'
    Traceback (most recent call last):
    ModuleNotFoundError: No module named 'absl'
      File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    Traceback (most recent call last):
    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    from absl import app
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
      return _run_code(code, main_globals, None,
    ModuleNotFoundError: No module named 'absl'
      return _run_code(code, main_globals, None,
      return _run_code(code, main_globals, None,
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
      exec(code, run_globals)
    Traceback (most recent call last):
      exec(code, run_globals)
      exec(code, run_globals)
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    from absl import app
      return _run_code(code, main_globals, None,
    from absl import app
    ModuleNotFoundError: No module named 'absl'
      from absl import app
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    ModuleNotFoundError: No module named 'absl'
    ModuleNotFoundError: No module named 'absl'
      exec(code, run_globals)
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/contacts.py", line 21, in <module>
    from absl import app
    ModuleNotFoundError: No module named 'absl'
    Ensembling all replica outputs
    Traceback (most recent call last):
    File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
    File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
    File "/mnt/c/Users/..../deepmind-research-master/alphafold_casp13/ensemble_contact_maps.py", line 24, in <module>
    from absl import app
    ModuleNotFoundError: No module named 'absl'
    
    (alphafold_env) C:\....\deepmind-research-master>
    

    I have Python 3.6 installed on my machine. I used to have Python 3.8 installed, but I uninstalled it, so I am not sure why Python 3.8 appears in the traceback.

    (alphafold_env) C:\....\deepmind-research-master>python --version
    Python 3.6.4

    Is it possible that when I run this run_eval bash script from the Windows Command Prompt, it finds a Python 3.8 that is still installed inside Ubuntu, even though I uninstalled Python 3.8 from Windows?

    I was wondering if anyone has a walkthrough of AlphaFold for beginners who use Windows 10?

    Any other suggestions on how I could get AlphaFold running would also be much appreciated.
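    One quick check, since the traceback paths point at /usr/lib/python3.8 (Ubuntu's system Python inside WSL) rather than a Windows install: a generic snippet, nothing AlphaFold-specific, to confirm which interpreter the script actually sees when run from the WSL shell:

    # Run inside the WSL/Ubuntu shell, e.g. `python3 check_env.py`.
    import importlib.util
    import sys

    print('interpreter:', sys.executable)   # expected: a /usr/bin path, not C:\...
    print('version:', sys.version)
    print('absl installed:', importlib.util.find_spec('absl') is not None)

    If "absl installed" prints False there, installing absl-py into that interpreter (rather than into the Windows virtual environment) is likely the missing step.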

    opened by OneWorld-github 5
  • About the methane example posted for DM21

    In the README file of DM21, an example of the methane atomization energy is posted. It states that the literature value for the atomization energy of methane is 420.42 kcal/mol. But when I search for it in the CCCBDB benchmark, it lists the atomization energy of methane as 1642 kJ/mol (392.45 kcal/mol) at 0 K and 1663.3 kJ/mol (397.54 kcal/mol) at 298 K. I'm wondering where the 420.42 kcal/mol literature value comes from? https://cccbdb.nist.gov/ea2x.asp
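    For reference, the unit conversion behind the comparison (1 kcal = 4.184 kJ); it reproduces the kcal/mol values quoted from CCCBDB but does not by itself explain the 420.42 kcal/mol figure:

    KJ_PER_KCAL = 4.184

    for kj in (1642.0, 1663.3):            # CCCBDB values at 0 K and 298 K, kJ/mol
        print(f'{kj} kJ/mol = {kj / KJ_PER_KCAL:.2f} kcal/mol')
    # 1642.0 kJ/mol = 392.45 kcal/mol
    # 1663.3 kJ/mol = 397.54 kcal/mol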

    opened by Eureka10shen 4
  • Adversarial Robustness: problem with loading pretrained model

    https://github.com/deepmind/deepmind-research/blob/fba48d1e44d86628b65a31549560b7be2a25d823/adversarial_robustness/jax/eval.py#L89

    Dear authors, thank you for your great work! However, I cannot load the pretrained model you provide, i.e., cifar10_linf_wrn28-10_cutmix_ddpm_v2.npy. The keys in this npy file are different from those expected by the WideResNet class in model_zoo.py.

    Screenshots attached showing: what the npy file saved, what we need for params, and what we need for state.
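    A small sketch for inspecting the checkpoint, assuming the .npy file stores a pickled Python dict (the file name is the one from the post; the exact key layout is not guaranteed):

    import numpy as np

    ckpt = np.load('cifar10_linf_wrn28-10_cutmix_ddpm_v2.npy',
                   allow_pickle=True).item()
    print(type(ckpt), list(ckpt)[:5])   # top-level keys of the saved dict

    # Comparing these names against the Haiku parameter tree built by
    # model_zoo.WideResNet usually shows whether it is only a naming or
    # nesting mismatch rather than a missing-weights problem.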

    opened by YiDongOuYang 4
  • ModuleNotFoundError: No module named 'absl'

    Hey

    I am currently trying to get the AlphaFold software working, and whenever I try to run the script for my user I keep getting the following error in the log files in my user account.

    Traceback (most recent call last):
      File "/mnt/bffa3afc-3d28-4640-8118-97b43d848c45/ALPHAFOLD/USERS/../alphafold-main/docker/run_docker.py", line 22, in <module>
        from absl import app
    ModuleNotFoundError: No module named 'absl'

    Can anyone kindly help me with this?

    opened by guyver007 1
  • How to extract the last 10009 images of the official TensorFlow ImageNet split, as stated in the BYOL paper

    In Appendix D.1, the BYOL paper states that they took the last 10009 images of the official TensorFlow ImageNet split as the validation set used to tune various hyperparameters. I would like to reproduce the result and thus wonder how I can extract only the last 10009 images of the official TensorFlow ImageNet split. Is there a canonical order for the official TensorFlow ImageNet split?

    Thank you.
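    One possible approach (not an official answer): TFDS split slicing can take the tail of the train split directly, which matches the "last 10009 images" description provided the split is read in the deterministic TFDS order -- an assumption worth verifying:

    import tensorflow_datasets as tfds

    # Last 10009 examples of the TFDS imagenet2012 train split as a validation
    # set, and the remaining examples for training.
    val_ds = tfds.load('imagenet2012', split='train[-10009:]')
    train_ds = tfds.load('imagenet2012', split='train[:-10009]')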

    opened by JoohyungLee0106 0
  • issues with flag_dynamic and sphere_dynamic dataset

    When I tried to run the flag_dynamic case, I got the following error:

    ValueError: ragged_rank must be non-negative; got 0.

    And it seems that the dataset is empty, even though the size of the flag_dynamic dataset folder is more than 30 GB.

    When I run

    ds = meshgraphnets.dataset.load_dataset('meshgraphnets/data/sphere_dynamic', 'train')
    ds

    I get the output

    <DatasetV1Adapter shapes: {node_type: (?, ?, 1), world_pos: (?, ?, 3), cells: (?, ?, 3), mesh_pos: (?, ?, 2)}, types: {node_type: tf.int32, world_pos: tf.float32, cells: tf.int32, mesh_pos: tf.float32}>

    opened by x9898 0
  • Colab no longer supports TensorFlow 1.x -- hierarchical_probabilistic_unet notebook fails to run

    Hi

    Thanks for providing the Colab notebook for using hierarchical_probabilistic_unet. However, Colab no longer supports TensorFlow 1.x, so the notebook fails to run. Any advice on a solution?

    Best Stella
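    Not an official fix, but one workaround that sometimes gets TF1-style notebooks running on a TF2 Colab runtime is the v1 compatibility layer; whether it is enough for this particular notebook (which also depends on TF1-era libraries such as dm-sonnet 1.x) is untested:

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()   # restores graph mode, tf.placeholder, tf.Session, ...

    # The rest of the notebook can then keep using the tf.* 1.x API unchanged,
    # provided its other dependencies can still be installed on the runtime.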

    opened by Wenhui-Zhang-5 0
  • ISSUE about GraphMatchingNet

    Hello! First, thanks for your great work.

    I have read your paper and reproduced this model, but I cannot understand the cross-graph attention-based matching mechanism. What does the attention_x variable in your code mean? And how is the difference between h_i(t) and its closest neighbor in the other graph represented?

    Thanks RZZBlackMagic
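    Not an authoritative answer, but a minimal NumPy sketch of the cross-graph attention described in the paper: softmax attention weights over the other graph's nodes, then mu_i = sum_j a_{j->i} (h_i - h_j). The assumption is that attention_x in the code corresponds to these attention weights:

    import numpy as np

    def cross_graph_matching(h_x, h_y):
        """h_x: (n, d) node features of graph 1, h_y: (m, d) of graph 2.
        Returns mu_x: (n, d), the attention-weighted difference to graph 2."""
        sim = h_x @ h_y.T                                       # (n, m) similarity scores
        attention_x = np.exp(sim - sim.max(axis=1, keepdims=True))
        attention_x /= attention_x.sum(axis=1, keepdims=True)   # softmax over graph 2 nodes
        # mu_i = sum_j a_{j->i} (h_i - h_j) = h_i - sum_j a_{j->i} h_j
        return h_x - attention_x @ h_y

    mu = cross_graph_matching(np.random.randn(4, 8), np.random.randn(5, 8))
    print(mu.shape)  # (4, 8)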

    opened by RZZBlackMagic 0