We evaluate our method on several datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming other supervised and unsupervised methods and other 3D representations in terms of both accuracy and training time.

Overview

An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering

Papers with code | Paper

Nikola Zubić   Pietro Lio  

University of Novi Sad   University of Cambridge

AIAI 2021

Citation

Besides AIAI 2021, our paper appears in the Springer book "Artificial Intelligence Applications and Innovations": link

Please cite our paper if you find this code useful for your research.

@article{zubic2021effective,
  title={An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering},
  author={Zubi{\'c}, Nikola and Li{\`o}, Pietro},
  journal={arXiv preprint arXiv:2103.03390},
  year={2021}
}

Prerequisites

  • Download the code:
    Clone the repository with the following command:

    git clone https://github.com/NikolaZubic/2dimageto3dmodel.git
    
  • Open the project in a Conda environment (Python 3.7); a setup sketch is given at the end of this section.

  • Install packages:

    conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
    

    Then clone the Kaolin library into the root (2dimageto3dmodel) folder at the referenced commit and run the following commands:

    cd kaolin
    python setup.py install
    pip install --no-dependencies nuscenes-devkit opencv-python-headless scikit-learn joblib pyquaternion cachetools
    pip install packaging
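
    For reference, the parts of the setup that are not spelled out above might look like the following sketch. This is a sketch only: the environment name 2dto3d is illustrative, and <KAOLIN_COMMIT_HASH> stands for the specific Kaolin commit referenced above.

    # Sketch only: environment name and commit placeholder are illustrative
    conda create -n 2dto3d python=3.7
    conda activate 2dto3d

    # Clone Kaolin into the repository root and pin it to the referenced commit,
    # then continue with the install commands listed above
    cd 2dimageto3dmodel
    git clone https://github.com/NVIDIAGameWorks/kaolin.git
    git -C kaolin checkout <KAOLIN_COMMIT_HASH>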
    

Run the program

Run the following commands from the root/code/ (2dimageto3dmodel/code/) directory:

python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub --save_results

for the CUB Birds Dataset.

python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d --save_results

for the Pascal 3D+ Dataset.

The results will be saved under 2dimageto3dmodel/code/results/.

Continue training

To continue the training process, run the following commands (without --save_results) from the root/code/ (2dimageto3dmodel/code/) directory:

python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub

for the CUB Birds Dataset.

python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d

for the Pascal 3D+ Dataset.
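
Note: as reported in the comments below, training from scratch requires the pseudo-ground-truth to be set up beforehand. The command below is a sketch based solely on the run_reconstruction.py invocation quoted in those comments; the script name, weight name, and flags come from the user reports and are not verified here.

# Pseudo-ground-truth generation for CUB, as quoted in the comments below (unverified)
python run_reconstruction.py --name pretrained_reconstruction_cub --dataset cub --batch_size 10 --generate_pseudogt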

License

MIT

Acknowledgment

This work builds on the architecture of Insafutdinov & Dosovitskiy.
Poisson Surface Reconstruction was used for the point-cloud-to-3D-mesh transformation.
The GAN architecture used for texture mapping is a mixture of Xian's TextureGAN and Li's GAN.

Comments
  • Where is cmr_data?

    I keep running into this issue: from cmr_data.p3d import P3dDataset and from cmr_data.p3d import CUBDataset, but you do not have these files in your repo. I tried using cub_200_2011_dataset.py, but it does not take the same number of arguments as the CUBDataset class used in run_reconstruction.py.

    opened by achhabria7 6
  • ModuleNotFoundError: No module named 'kaolin.graphics'

    Pascal 3D+ dataset with 4722 images is successfully loaded.

    Traceback (most recent call last):
      File "main.py", line 149, in <module>
        from rendering.renderer import Renderer
      File "/home/ujjawal/my_work/object_recon/2d3d/code/rendering/renderer.py", line 1, in <module>
        from kaolin.graphics.dib_renderer.rasterizer import linear_rasterizer
    ModuleNotFoundError: No module named 'kaolin.graphics'

    I also downloaded the graphics folder from https://github.com/NVIDIAGameWorks/kaolin/tree/e7e513173bd4159ae45be6b3e156a3ad156a3eb9, tried to place it in the kaolin folder locally, and got this error:

    Traceback (most recent call last):
      File "main.py", line 149, in <module>
        from rendering.renderer import Renderer
      File "/home/ujjawal/my_work/object_recon/2d3d/code/rendering/renderer.py", line 1, in <module>
        from kaolin.graphics.dib_renderer.rasterizer import linear_rasterizer
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/__init__.py", line 2, in <module>
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/nmr/__init__.py", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/nmr/rasterizer.py", line 30, in <module>
    ImportError: cannot import name rasterize_cuda

    opened by ujjawalcse 6
  • No module named 'models.reconstruction'

    Dear NikolaZubic:
    Thanks for updating the code recently. Did you put reconstruction.py in the models folder? When I run "python run_reconstruction.py --name pretrained_reconstruction_cub --dataset cub --batch_size 10 --generate_pseudogt", it displays:
    No module named 'models.reconstruction'

    opened by lw0210 2
  • inference with single RGB pictures

    Hi, I am interested in your work; it is wonderful. I want to test the model with my own pictures. Could you provide the pretrained model and inference scripts?

    opened by 523997931 2
  • can't find the pseudogt_512*512.npz file

    Dear NikolaZubic: I want to cite your paper, but I can't find the pseudogt_512*512.npz file and can't reproduce the results. Can you provide the pseudogt_512*512.npz file and help me reproduce it? Thanks

    opened by Yangfuha 1
  • ValueError: Training a model requires the pseudo-ground-truth to be setup beforehand.

    I recently read your paper and was very interested in it. I want to reproduce the code of this paper. When I followed your instructions, I found it difficult to run the commands (python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub and python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d). The program displayed a ValueError saying that training a model requires the pseudo-ground-truth to be set up beforehand. I don't know how to solve this problem, so I am turning to you for help. I'm sorry to bother you, but I'm really eager to solve it and hope to get your reply. Thank you!

    opened by lw0210 1
  • Added step: switch to the correct Kaolin branch

    This step will help others avoid the "ModuleNotFoundError: No module named kaolin.graphics" error.

    Fix to issue: https://github.com/NikolaZubic/2dimageto3dmodel/issues/2

    opened by ricklentz 1
  • Shapenet V2 not training

    Great work, guys. I was able to run the code on the CUB dataset, but when I tried to run training_test_shape_net.py on the ShapeNet v2 chair class I got errors because of missing files, unmatched file names, etc.

    It would be helpful if you could provide the ShapeNet dataset folder structure and a description of the files (images, masks), or a sample folder and clear instructions for training the model on the ShapeNet dataset. If possible, please also provide pre-trained weights for the ShapeNet models.

    Thank you

    opened by girishdhegde 0
  • Pretrained model

    Hi, I find it hard to understand how to train the model on ShapeNet. It would be very helpful if you could provide a pretrained model on ShapeNet planes (I need it to test the performance in my project). If the pretrained models are not available, it would also be helpful to explain how to train the model on ShapeNet.

    opened by YYYYYHC 0
  • How can I train on the boat set of the Pascal 3D+ dataset

    I find that the training command, such as "python run_reconstruction.py --name pretrained_reconstruction_p3d --dataset p3d --optimize_z0 --batch_size 50 --tensorboard", uses the car.mat data in the sfm and data folders. Even if I rename the .mat file to boat.mat and use the boat ImageNet images from the Pascal 3D+ dataset, the resulting shape looks more like a car than a boat. So I am wondering how to train on the boat set.

    opened by lisentao 0
  • Custom Dataset

    Hi!

    Love the work you guys have done. I am currently conducting research. Could you please tell me how I would train on a custom dataset, and how I would run inference on an image or create a 3D model from an image with the pretrained weights you have provided?

    opened by mahnoor-fatima-saad 0
  • How do I make my own dataset?

    Dear NikolaZubic: I want to replace the CUB or P3D dataset with my own dataset for training. Are there any considerations or requirements for the images when building a dataset?

    opened by lw0210 0