Gauge Equivariant Mesh CNN

Overview

The code in this repository is an implementation of the Gauge Equivariant Mesh CNN introduced in the paper Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs by Pim de Haan, Maurice Weiler, Taco Cohen and Max Welling, presented at ICLR 2021.

We would like to thank Ruben Wiersma, whose implementation of Harmonic Surface Networks served as an inspiration for some parts of this code. Furthermore, we would like to thank Julian Suk for beta-testing the code.

Installation & dependencies

Make sure the following dependencies are installed:

  • Python (tested on 3.8)
  • Pytorch (tested on 1.8)
  • Pytorch Geometric (tested on 1.6.3)
  • Conda

Then to install, clone this repository and install the gem_cnn package by executing in this directory:

pip install .
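
To verify that the installation worked (assuming, as above, that the package was installed under the name gem_cnn), you can run:

python -c "import gem_cnn"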

Docker

Alternatively, if you have a GPU with CUDA 11.1 and have set up Docker, you can easily run the experiment at experiments/shapes.py as follows.

To build the image, run the following in this directory:

docker build . -t gem_cnn_demo

Then to run:

docker run -it --rm --runtime=nvidia gem_cnn_demo python experiments/shapes.py

In order to run the FAUST experiments via Docker, we recommend mounting the local data folder inside the docker container by running:

docker run -it --rm --runtime=nvidia -v $(pwd)/data:/workspace/data gem_cnn_demo python experiments/faust_direct.py

Run this once and follow the instructions on how to download the dataset, then run it again to train the FAUST model.

Usage

The code implements the gauge equivariant mesh convolution as a graph convolution layer in Pytorch Geometric.
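
As a rough sketch of how such a layer slots into a Pytorch Geometric model, the snippet below uses a stock Pytorch Geometric convolution (GCNConv) purely as a stand-in for the gauge equivariant layers. The actual GEM-CNN layers, their constructor arguments, and the mesh preprocessing they require (gauges, parallel transport, edge attributes) are defined in the gem_cnn package and demonstrated in the experiments folder, so treat this as an illustration of the calling pattern, not of the repository's API.

# Illustration only: a stock Pytorch Geometric convolution standing in for a
# gauge equivariant mesh convolution. See experiments/ for the real GEM-CNN usage.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A tiny mesh treated as a graph: 4 vertices with XYZ coordinates as input features.
pos = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edge_index = torch.tensor([[0, 1, 0, 2, 0, 3, 1, 2],
                           [1, 0, 2, 0, 3, 0, 2, 1]])  # undirected mesh edges
data = Data(x=pos, edge_index=edge_index, pos=pos)

conv = GCNConv(in_channels=3, out_channels=16)  # a GEM-CNN layer would go here
out = conv(data.x, data.edge_index)             # per-vertex features, shape (4, 16)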

Example experiments

In the folder experiments, the following examples are given:

  • experiments/shapes.py, a simple toy experiment to classify geometric shapes (an example invocation is given below this list).
  • experiments/faust_direct.py, an implementation of a network similar to the one used in our paper on the FAUST dataset. It passes messages directly over the edges of the mesh and does not use pooling. The input features are the (non-equivariant) XYZ coordinates.
  • experiments/faust_pool.py, an alternative implementation for FAUST. It uses convolution over larger distances than direct neighbours, pooling, and the equivariant matrix features.
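
Each experiment is a standalone script; after installing the package, the toy experiment can, for example, be run directly with:

python experiments/shapes.py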

All example experiments use Pytorch-Ignite, but the GEM-CNN code itself does not depend on it.

Reference

If you find our work useful, please cite

@inproceedings{dehaan2021,  
  title={Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs},  
  author={Pim de Haan and Maurice Weiler and Taco Cohen and Max Welling},
  booktitle={International Conference on Learning Representations},  
  year={2021},  
  url={https://openreview.net/forum?id=Jnspzp-oIZE}  
}

Export

This software may be subject to U.S. and international export, re-export, or transfer (“export”) laws. Diversion contrary to U.S. and international law is strictly prohibited.

Comments
  • Question about processing of the MNIST-example

    Thank you for the interesting code!

    I had a question regarding the MNIST example described in your paper, as I do not completely follow it and it is not included in the repository. As opposed to the FAUST dataset, where we are dealing purely with shapes, I guess that MNIST is transformed to lie on a rough grid, so that we have two types of information: the features belonging to a node (in this case the raw pixel value, or "data.x") and the 3D position of each node ("data.pos", so to speak). Am I interpreting it right when I say that the gauge-equivariant mesh convolutions are based on the information found in "pos", while they are applied to the raw pixel values?

    Thanks in advance!

    opened by RoelvH 1
  • Explanation of parameters

    Hi! Could you briefly explain the role of the following parameters of GemResNetBlock: in_order, out_order, n_rings? I'm not sure how to set these parameters. My dataset consists of mesh samples with scalar features.

    opened by daniel-unyi-42 9