Semantic Meshes

A framework for annotating 3D meshes using the predictions of a 2D semantic segmentation model.

License: MIT

Paper

If you find this framework useful in your research, please consider citing our paper (arXiv: 2111.11103):

@misc{fervers2021improving,
      title={Improving Semantic Image Segmentation via Label Fusion in Semantically Textured Meshes},
      author={Florian Fervers and Timo Breuer and Gregor Stachowiak and Sebastian Bullinger and Christoph Bodensteiner and Michael Arens},
      year={2021},
      eprint={2111.11103},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Workflow

  1. Reconstruct a mesh of your scene from a set of images (e.g. using Colmap).
  2. Send all undistorted images through your segmentation model (e.g. from tfcv or image-segmentation-keras) to produce 2D semantic annotation images.
  3. Project all 2D annotations into the 3D mesh and fuse conflicting predictions (a conceptual sketch of this fusion step follows the list).
  4. Render the annotated mesh from the original camera poses to produce new, consistent 2D annotation images, or save it as a colorized ply file.
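
To make the fusion in step 3 concrete, the following is a minimal numpy sketch of the underlying idea, not the library's actual implementation: every pixel casts its predicted class probability distribution as a vote for the primitive it projects onto, and conflicting votes are averaged into a single distribution per primitive. The function name, array shapes and simple averaging rule are assumptions for illustration.

import numpy as np

# Hypothetical illustration of label fusion, not the library's implementation
def fuse_predictions(primitive_indices, predictions, num_primitives, num_classes):
    # primitive_indices: list of (H, W) integer arrays, mesh primitive hit by each pixel
    # predictions: list of (H, W, num_classes) arrays, per-pixel class probabilities
    votes = np.zeros((num_primitives, num_classes))
    counts = np.zeros(num_primitives)
    for indices, probs in zip(primitive_indices, predictions):
        flat_indices = indices.reshape(-1)
        flat_probs = probs.reshape(-1, num_classes)
        # Pixels that miss the mesh would have to be masked out first; omitted here
        np.add.at(votes, flat_indices, flat_probs)  # accumulate probability mass
        np.add.at(counts, flat_indices, 1)          # count contributing pixels
    hit = counts > 0
    votes[hit] /= counts[hit, None]  # average conflicting predictions per primitive
    return votes  # (num_primitives, num_classes) fused class distributions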

Example output for a traffic scene with annotations produced by a model that was trained on Cityscapes:

[Images: view1 and view2, two rendered views of the annotated mesh]

Usage

We provide a Python interface that enables easy integration with numpy and machine learning frameworks like TensorFlow. A full example script is provided in colorize_cityscapes_mesh.py that annotates a mesh using a segmentation model pretrained on Cityscapes. The model is downloaded automatically and the prediction is performed on the fly.

import semantic_meshes
import imageio  # used below to load the input images

...

# Load a mesh from ply file
mesh = semantic_meshes.data.Ply(args.input_ply)
# Instantiate a triangle renderer for the mesh
renderer = semantic_meshes.render.triangles(mesh)
# Load colmap workspace for camera poses
colmap_workspace = semantic_meshes.data.Colmap(args.colmap)
# Instantiate an aggregator for aggregating the 2D input annotations per 3D primitive
aggregator = semantic_meshes.fusion.MeshAggregator(primitives=renderer.getPrimitivesNum(), classes=19)

...

# Process all input images
for image_file in image_files:
    # Load image from file
    image = imageio.imread(image_file)
    ...
    # Predict class probability distributions for all pixels in the input image
    prediction = predictor(image)
    ...
    # Render the mesh from the pose of the given image
    # This returns an image that contains the index of the projected mesh primitive per pixel
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    ...
    # Aggregate the class probability distributions of all pixels per primitive
    aggregator.add(primitive_indices, prediction)

# After all images have been processed, the mesh contains a consistent semantic representation of the environment
# Returns an array that contains the class probability distribution for each primitive
probabilities = aggregator.get()

...

# Save colorized mesh to ply
mesh.save(args.output_ply, primitive_colors)
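
The color mapping elided above and workflow step 4 (rendering consistent 2D annotations from the original camera poses) could look roughly like the following sketch, which continues the example. The argmax decision and the placeholder palette are illustrative assumptions, not the exact code of colorize_cityscapes_mesh.py.

import numpy as np

# Continuing the example above: collapse each fused per-primitive distribution
# to its most likely class
labels = np.argmax(np.asarray(probabilities), axis=-1)

# One RGB color per class; a real script would use the Cityscapes palette here
palette = np.random.randint(0, 256, size=(19, 3), dtype=np.uint8)  # placeholder colors
primitive_colors = palette[labels]

# Workflow step 4: render the fused labels from the original camera poses to obtain
# consistent 2D annotation images (assumes primitive_indices is an integer array,
# as in the loop above)
for image_file in image_files:
    primitive_indices, _ = renderer.render(colmap_workspace.getCamera(image_file))
    semantic_image = primitive_colors[primitive_indices]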

Docker

If you want to skip installation and jump right in, we provide a Dockerfile that can be used without any further setup. Otherwise, see Installation.

  1. Install Docker and GPU support
  2. Build the docker image: docker build -t semantic-meshes https://github.com/fferflo/semantic-meshes.git#master
    • If your system is using a proxy, add: --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=...
  3. Open a command prompt in the docker image and mount a folder from your host system (HOST_PATH) that contains your colmap workspace into the docker image (DOCKER_PATH): docker run -v /HOST_PATH:/DOCKER_PATH --gpus all -it semantic-meshes bash
  4. Run the provided example script inside the docker image to annotate the mesh with Cityscapes annotations: colorize_cityscapes_mesh.py --colmap /DOCKER_PATH/colmap/dense/sparse --input_ply /DOCKER_PATH/colmap/dense/meshed-delaunay.ply --images /DOCKER_PATH/colmap/dense/images --output_ply /DOCKER_PATH/colorized_mesh.ply

Running the repository inside a docker image is significantly slower than running it on the host system (12 s/image vs. 2 s/image on an RTX 6000).

Installation

Dependencies

  • CUDA: https://developer.nvidia.com/cuda-downloads
  • OpenMP: On Ubuntu: sudo apt install libomp-dev
  • Python 3
  • Boost: Requires the python and numpy components of the Boost library, which have to be compiled for the Python version that you are using. If you're lucky, your OS ships compatible Boost and Python 3 versions. Otherwise, compile Boost from source and make sure to include the --with-python=python3 switch.

Build

The repository contains CMake code that builds the project and provides a Python package in the build folder that can be installed using pip.

CMake downloads, builds and installs all other dependencies automatically. If you don't want to clutter your global system directories, add -DCMAKE_INSTALL_PREFIX=... to install to a local directory.

The framework has to be compiled for a specific number of classes (e.g. 19 for Cityscapes, or 2 for a binary segmentation). Add a semicolon-separated list with -DCLASSES_NUMS=2;19;... for every number of classes that you want to use. A longer list will significantly increase the compilation time.

An example build:

git clone https://github.com/fferflo/semantic-meshes
cd semantic-meshes
mkdir build
mkdir install
cd build
cmake -DCMAKE_INSTALL_PREFIX=../install -DCLASSES_NUMS=19 ..
make -j8
make install # Installs to the local install directory
pip install ./python
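
As a quick smoke test of the build, the submodules used in the usage example above should be importable (assuming the build and pip install succeeded):

# Explicitly import the submodules that the usage example relies on
import semantic_meshes.data
import semantic_meshes.render
import semantic_meshes.fusion
print("semantic_meshes imported successfully")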

Build with incompatible Boost or Python versions

Alternatively, in case your OS versions of Boost or Python do not match the version requirements of semantic-meshes, we provide an installation script that also fetches and locally installs compatible versions of these dependencies: install.sh. Since the script builds Python from source, make sure to first install all optional Python dependencies that you require (see e.g. https://github.com/python/cpython/blob/main/.github/workflows/posix-deps-apt.sh).


Comments
  • Matplotlib.show() with CPython

    With the current CPython installation, it is not possible to use matplotlib's show() method. For example, running

    import matplotlib.pyplot as plt
    plt.plot([1,2,3],[5,7,4])
    plt.show()
    

    will throw UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.

    To add the tkinter GUI backend on Ubuntu for the custom CPython installation (after install.sh has already been executed), just run:

    sudo apt-get install tk-dev
    cd /path/to/cpython/build/dir
    make -j
    make install -j
    

    Maybe it would be useful to add this to the documentation.

    opened by SBCV
  • Add bash installation script

    Prerequisites:

    • libffi (e.g. sudo apt install libffi-dev)
    • No active conda environment in current terminal (i.e. conda deactivate)

    Usage:

    classes_nums="2;19"
    bash /path/to/semantic-meshes/install.sh /path/to/install/dir $classes_nums

    or, with an explicit CUDA path:

    bash /path/to/semantic-meshes/install.sh /path/to/install/dir $classes_nums /path/to/cuda

    opened by SBCV