Official code for the paper: "Growing 3D Artefacts and Functional Machines with Neural Cellular Automata" https://arxiv.org/abs/2103.08737

Overview

This repository contains the code for training Neural Cellular Automata (NCAs) that grow 3D artefacts and functional machines in Minecraft.

Video of more results: https://www.youtube.com/watch?v=-EzztzKoPeo


Requirements

Installation

For general installation

python setup.py install
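
If python setup.py install fails under a recent setuptools (which deprecates it), installing via pip should be equivalent (an assumption, not an officially documented path for this repo):

python -m pip install .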

For ray tune + mlflow

python -m pip install -r ray-requirements.txt
python setup.py install

Usage

Make sure an evocraft-py server is running, either with test-evocraft-py --interactive or by following the steps in https://github.com/real-itu/Evocraft-py.
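
For example, to launch an interactive server:

test-evocraft-py --interactive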

Configs

Each NCA is trained on a specific structure, with hyperparameters and configuration defined in a YAML config; we use Hydra to instantiate the NCA trainer class from it.

Example Config for generating a "PlainBlacksmith" Minecraft Structure:

trainer:
    name: PlainBlacksmith
    min_steps: 48
    max_steps: 64
    visualize_output: true
    device_id: 0
    use_cuda: true
    num_hidden_channels: 10
    epochs: 20000
    batch_size: 5
    model_config:
        normal_std: 0.1
        update_net_channel_dims: [32, 32]
    optimizer_config:
        lr: 0.002
    dataset_config:
        nbt_path: artefact_nca/data/structs_dataset/nbts/village/plain_village_blacksmith.nbt

defaults:
  - voxel
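
Such a config can also be loaded programmatically. The following is a minimal sketch (paths are placeholders; VoxelCATrainer.from_config is the same entry point used in the spawning example below, and the train() call is an assumed entry point, per the training notebook):

from artefact_nca.trainer.voxel_ca_trainer import VoxelCATrainer

# instantiate the trainer from the YAML config, overriding the target structure
ct = VoxelCATrainer.from_config(
    "{path to repo}/pretrained_models/PlainBlacksmith/plain_blacksmith.yaml",
    config={"dataset_config": {"nbt_path": "{path to nbt file}"}},
)
ct.train()  # assumed training entry point; see the training notebook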

Generation and Training

See the generation notebook for how to load a pretrained NCA and generate a structure in Minecraft.

See the training notebook for how to train an NCA.

CLI training

python artefact_nca/train.py config={path to yaml config} trainer.dataset_config.nbt_path={absolute path to nbt file to use}

Example:

python artefact_nca/train.py config=pretrained_models/PlainBlacksmith/plain_blacksmith.yaml trainer.dataset_config.nbt_path=/home/shyam/Code/3d-artefacts-nca/artefact_nca/data/structs_dataset/nbts/village/plain_village_blacksmith.nbt

Spawning in Minecraft

See the generation notebook for more details.

Example: spawning the oak tree

  1. Load a trainer:

from artefact_nca.trainer.voxel_ca_trainer import VoxelCATrainer

nbt_path = "{path to repo}/artefact_nca/data/structs_dataset/nbts/village/Extra_dark_oak.nbt"
ct = VoxelCATrainer.from_config(
    "{path to repo}/pretrained_models/Extra_dark_oak/extra_dark_oak.yaml",
    config={
        "pretrained_path": "{path to repo}/pretrained_models/Extra_dark_oak/Extra_dark_oak.pt",
        "dataset_config": {"nbt_path": nbt_path},
        "use_cuda": False,
    },
)
  2. Create a MinecraftClient to view the growth of the structure in Minecraft at position (x, y, z) = (-10, 10, 10):

from artefact_nca.utils.minecraft import MinecraftClient

m = MinecraftClient(ct, (-10, 10, 10))
  3. Spawn 100 iterations, displaying progress every 5 time steps:

m.spawn(100)

Output should look like this:

[figure: the oak tree growing in Minecraft]

Structures

See the data directory. To view structures and spawn them in Minecraft, see the generation notebook. An example of spawning and viewing the oak tree:

import matplotlib.pyplot as plt

from artefact_nca.utils.minecraft import MinecraftClient
# convert_to_color maps block ids to RGBA colors; it is assumed to be
# importable from the same minecraft utils package as MinecraftClient
from artefact_nca.utils.minecraft import convert_to_color

base_nbt_path = "{path to nbts}"
nbt_path = "{}/village/Extra_dark_oak.nbt".format(base_nbt_path)

# spawn at coords (50, 10, 10)
blocks, unique_vals, target, color_dict, unique_val_dict = MinecraftClient.load_entity(
    "Extra_dark_oak", nbt_path=nbt_path, load_coord=(50, 10, 10)
)

color_arr = convert_to_color(target, color_dict)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') is removed in newer matplotlib
ax.voxels(color_arr, facecolors=color_arr, edgecolor='k')

plt.show()

This should spawn and display:

[figures: the spawned tree in Minecraft and the corresponding matplotlib voxel plot]

Authors

Shyam Sudhakaran [email protected], https://github.com/shyamsn97

Djordje Grbic [email protected], https://github.com/djole

Siyan Li [email protected], https://github.com/sli613

Adam Katona [email protected], https://github.com/adam-katona

Elias Najarro https://github.com/enajx

Claire Glanois https://github.com/claireaoi

Sebastian Risi [email protected], https://github.com/sebastianrisi

Citation

If you use this code for academic or commercial purposes, please cite the associated paper:

@inproceedings{Sudhakaran2021,
   title = {Growing 3D Artefacts and Functional Machines with Neural Cellular Automata}, 
   author = {Shyam Sudhakaran and Djordje Grbic and Siyan Li and Adam Katona and Elias Najarro and Claire Glanois and Sebastian Risi},
   booktitle = {2021 Conference on Artificial Life},
   year = {2021},
   url = {https://arxiv.org/abs/2103.08737}
}