A procedural Blender pipeline for photorealistic training image generation

Overview

BlenderProc2

Documentation | Open In Colab | License: GPL v3


A procedural Blender pipeline for photorealistic rendering.

Documentation | Tutorials | Examples | ArXiv paper | Workshop paper

Features

  • Loading: *.obj, *.ply, *.blend, BOP, ShapeNet, Haven, 3D-FRONT, etc.
  • Objects: Set or sample object poses, apply physics and collision checking.
  • Materials: Set or sample physically-based materials and textures.
  • Lighting: Set or sample lights, automatic lighting of 3D-FRONT scenes.
  • Cameras: Set, sample or load camera poses from file.
  • Rendering: RGB, stereo, depth, normal and segmentation images/sequences.
  • Writing: .hdf5 containers, COCO & BOP annotations.

Installation

Via pip

The simplest way to install blenderproc is via pip:

pip install blenderproc

Git clone

If you need to make changes to blenderproc or want to use the most recent version on the main branch, clone the repository:

git clone https://github.com/DLR-RM/BlenderProc

To keep using the blenderproc command from anywhere on your system, make a local editable pip installation:

cd BlenderProc
pip install -e .

Usage

BlenderProc has to be run inside the Blender Python environment, as the Blender API is only accessible there. Therefore, instead of running your script with the usual Python interpreter, use BlenderProc's command line interface:

blenderproc run <your_python_script>

In general, one run of your script first loads or constructs a 3D scene, then sets some camera poses inside this scene, and renders different types of images (RGB, distance, semantic segmentation, etc.) for each of those camera poses. Typically, to create a large and diverse dataset, you therefore run your script multiple times, each run producing 5-20 images. With a little more experience, it is also possible to render multiple times in one script call; read here how this is done.

Quickstart

Create a python script quickstart.py with the following content:

import blenderproc as bproc
import numpy as np

bproc.init()

# Create a simple object:
obj = bproc.object.create_primitive("MONKEY")

# Create a point light next to it
light = bproc.types.Light()
light.set_location([2, -2, 0])
light.set_energy(300)

# Set the camera to be in front of the object
cam_pose = bproc.math.build_transformation_mat([0, -5, 0], [np.pi / 2, 0, 0])
bproc.camera.add_camera_pose(cam_pose)

# Render the scene
data = bproc.renderer.render()

# Write the rendering into an hdf5 file
bproc.writer.write_hdf5("output/", data)

Now run the script via:

blenderproc run quickstart.py

BlenderProc now creates the specified scene and renders the image into output/0.hdf5. To visualize that image, simply call:

blenderproc vis hdf5 output/0.hdf5

That's it! You rendered your first image with BlenderProc!

Debugging

To understand what is actually going on, BlenderProc can visualize everything inside the Blender UI. To do so, simply call your script with the debug subcommand instead of run:

blenderproc debug quickstart.py

Now the Blender UI opens, the scripting tab is selected and the correct script is loaded. To start the BlenderProc pipeline, just press Run BlenderProc. As in normal mode, print statements are still printed to the terminal.


The pipeline can be run multiple times, as the scene is cleared at the beginning of each run.

What to do next?

Now that you have run your first BlenderProc script, you're ready to learn the basics:

Tutorials

Read through the tutorials to get to know the basic principles of how BlenderProc is used:

  1. Loading and manipulating objects
  2. Configuring the camera
  3. Rendering the scene
  4. Writing the results to file
  5. How key frames work
  6. Positioning objects via the physics simulator

Examples

We provide a lot of examples which explain all features in detail and should help you understand how BlenderProc works. Exploring them is the best way to learn what you can do with BlenderProc. We also provide support for several datasets and much more; see our examples for details.

Contributions

Found a bug? Help us by reporting it. Want a new feature in the next BlenderProc release? Create an issue. Made something useful or fixed a bug? Start a PR. Check the contribution guidelines.

Change log

See our change log.

Citation

If you use BlenderProc in a research project, please cite as follows:

@article{denninger2019blenderproc,
  title={BlenderProc},
  author={Denninger, Maximilian and Sundermeyer, Martin and Winkelbauer, Dominik and Zidan, Youssef and Olefir, Dmitry and Elbadrawy, Mohamad and Lodhi, Ahsan and Katam, Harinandan},
  journal={arXiv preprint arXiv:1911.01911},
  year={2019}
}

Comments
  • property float texture_u


    Dear Drs., in the BopLoader.py file:

    new_file_ply_content = new_file_ply_content.replace("property float texture_u",
                                                        "property float s")
    new_file_ply_content = new_file_ply_content.replace("property float texture_v",
                                                        "property float t")
    

    Why do these two values need to be replaced? We have a problem:

    with open(model_path, "r") as file:
        new_file_ply_content = file.read()
    

    This function cannot open our .ply file, since the file's encoding mode is None. We have already tried many methods, but still cannot solve this issue. Our .ply file can actually be opened by MeshLab and everything is OK there, so could you help us? Thanks a lot.
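    A possible workaround, sketched under the assumption that the header is still ASCII while the body is binary: do the replacement on bytes instead of text, which sidesteps the encoding problem entirely (the function name is made up):

```python
# Hypothetical workaround (not BlenderProc code): patch the .ply header on the
# byte level, so binary or oddly encoded files can be processed as well.
def patch_ply_texture_props(in_path: str, out_path: str) -> None:
    with open(in_path, "rb") as f:
        content = f.read()
    # Same replacements as in BopLoader.py, but on bytes
    content = content.replace(b"property float texture_u", b"property float s")
    content = content.replace(b"property float texture_v", b"property float t")
    with open(out_path, "wb") as f:
        f.write(content)
```

    As for why the replacement exists at all: Blender's .ply importer expects UV properties to be named s and t, which is presumably why BopLoader renames texture_u/texture_v before loading.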

    question 
    opened by jingweirobot 38
  • Strange behavior of BlendLoader


    Hi, I want to load a Blender object and render it. In Blender's shading view it looks correct.

    However, when I use BlenderProc, the tires are broken: I can see one tire wrongly placed inside the car (near the front window), and I have no idea where the others are. The structure seems to be broken. I think the scene is as simple as the .blend file in coco_annotation, since it only contains one collection and a few objects.

    Can you help me figure out where the problem is? The .blend file and the config are as follows:

    fo.zip

    # Args: <output_dir> <.blend file>
    {
      "version": 3,
      "setup": {
        "blender_install_path": "/home_local/<env:USER>/blender/",
        "pip": [
          "h5py",
          "scikit-image",
          "pypng==0.0.20",
          "scipy==1.2.2"
        ]
      },
      "modules": [
        {
          "module": "main.Initializer",
          "config": {
            "global": {
              "output_dir": "<args:0>",
            }
          }
        },
        {
          "module": "loader.BlendLoader",
          "config": {
            "path": "<args:1>",
            "load_from": "/Object",
          }
        },
        {
          "module": "manipulators.EntityManipulator",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "type": "MESH"  # this guarantees that the object is a mesh, and not for example a camera
              }
            },  
            "matrix_world":
                [[1, 0, 0, 0.5],
                 [0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]],
            "scale": [1, 1, 1] # Scale 3D model from mm to m
          },
        },
        {
          "module": "lighting.LightLoader",
          "config": {
            "lights": [
              {
                "type": "POINT",
                "location": [14, 10, 10],
                "energy": 100000
              }
            ]
          }
        },
        {
          "module": "camera.CameraLoader",
          "config": {
            "path": "examples/coco_annotations/camera_positions",
            "file_format": "location rotation/value",
            "default_cam_param": {
              "fov": 1
            }
          }
        },
        {
          "module": "renderer.RgbRenderer",
          "config": {
            "samples": 150,
            "transparent_background": True,
            "render_distance": True,
            "image_type": "PNG"
          }
        },
        {
          "module": "writer.BopWriter",
          "config": {
            "append_to_existing_output": True,
            "ignore_dist_thres": 10000,
            "postprocessing_modules": {
              "distance": [
                {"module": "postprocessing.Dist2Depth"}
              ]
            }
          }
        }
      ]
    }
    
    

    Update:

    If I remove the "manipulators.EntityManipulator" module, the model structure looks good. So does "manipulators.EntityManipulator" not apply the correct transformation to all objects? Or does it apply the transformation in each object's local coordinates, instead of treating them as one group? If so, how can I group the objects together?
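    For what it's worth, the described behavior is consistent with matrix_world being applied per matching object: every mesh receives the same absolute pose, so the relative arrangement of the parts is lost. A plain numpy sketch of the difference (the poses are made up for illustration):

```python
import numpy as np

# Hypothetical world matrices of two parts of one model
body = np.eye(4)
tire = np.eye(4)
tire[0, 3] = 1.5            # the tire sits 1.5 units from the body

offset = np.eye(4)
offset[0, 3] = 0.5          # desired shift of the whole model on x

# Setting the same matrix_world on every object collapses the parts onto one pose:
overwritten = [offset.copy(), offset.copy()]

# Moving the group rigidly means composing the offset with each existing pose:
moved = [offset @ body, offset @ tire]
```

    In Blender itself, rigidly moving a set of objects is usually done by parenting the meshes to a common empty and transforming only the parent.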

    bug 
    opened by kwea123 30
  • FEATURE: Adapt the 3D-FRONT loader to the official one


    New Feature:

    It would be best if we remove our own 3D Front loader and replace it with the 3D-FRONT-Toolbox.

    Original Question:

    This question is about the 3D-FRONT-Toolbox, not BlenderProc, but BlenderProc is the only 3D-FRONT-related project where issues are regularly answered, so I'm looking for help here. I ran the json2obj.py script in the scripts folder and then imported the files into Blender. It seems that textures are not loaded at all for furniture or walls. I use Windows 11. Can anyone help? Thanks a lot, and sorry for my English.

    enhancement question first answer provided 
    opened by vts3rss 27
  • Iss 376 load from urdf


    Addresses #376. Implements loading functions that parse XML elements from the URDF file and create links and their corresponding objects. Implements a robot class that wraps all this information and allows manipulation of the whole robot as well as of the individual links. Also implements small convenience functions, e.g. for sequentially adding class ids to the joints or removing a particular joint. Open for any suggestions!

    cla-signed 
    opened by wboerdijk 26
  • [FEATURE]: Calling modules independently from within a script without using a .yaml file


    Hi Guys,

    Is it possible to call the modules independently from within a script? For example, I would like to call the renderer module from my own script, but without using a .yaml file.

    For example, I've tried to call the NormalRenderer through a test script that I created in the src folder, similar to the debug script:

    import bpy
    import mathutils
    import os
    import sys
    
    working_dir = os.path.dirname(bpy.context.space_data.text.filepath) + "/../"
    
    if not working_dir in sys.path:
        sys.path.append(working_dir)
    
    # Add path to custom packages inside the blender main directory
    if sys.platform == "linux" or sys.platform == "linux2":
        packages_path = os.path.abspath(os.path.join(os.path.dirname(sys.executable), "..", "..", "..", "custom-python-packages"))
    elif sys.platform == "darwin":
        packages_path = os.path.abspath(os.path.join(os.path.dirname(sys.executable), "..", "..", "..", "..", "Resources", "custom-python-packages"))
    elif sys.platform == "win32":
        packages_path = os.path.abspath(os.path.join(os.path.dirname(sys.executable), "..", "..", "..", "custom-python-packages"))
    else:
        raise Exception("This system is not supported yet: {}".format(sys.platform))
    sys.path.append(packages_path)
    
    # Delete all loaded models inside src/, as they are cached inside blender
    for module in list(sys.modules.keys()):
        if module.startswith("src"):
            print("Module to be deleted: {}".format(module))
            del sys.modules[module]
    
    from src.renderer.NormalRenderer import NormalRenderer
    
    normal_map = NormalRenderer
    
    normal_map.run()
    

    But this doesn't seem to work.

    Thanks.

    enhancement first answer provided 
    opened by ttsesm 26
  • How to import URDF robot model and manipulate it?


    Hello,

    So, I am trying to create a synthetic dataset for robot pose estimation, and I have a URDF file. But Blender does not support importing URDF files. A plugin called Phobos (https://github.com/dfki-ric/phobos) gives Blender the ability to do that, but only for Blender version 2.79, which I think BlenderProc does not support. If the model is imported, how do I control the robot using BlenderProc (entity manipulator)?

    How can I approach this problem and what could be the possible solution?

    opened by b-yogesh 24
  • BlenderProc0 faster than BlenderProc2?


    Hey guys, I'm using BlenderProc0 and need to render 10000 segmentations, so I decided to switch to BlenderProc2 and ran tests: var1: BlenderProc0, 10 min 30 sec rendering segmentation, result 17 frames; var2: BlenderProc2, 10 min 15 sec rendering segmentation, result 12 frames. Why is the latest BlenderProc2 slower with the same scenes and settings?

    enhancement 
    opened by shtams 20
  • Dark scenes in shapenet_with_scenenet


    Hi,

    After the recent fix which introduced modifications to lighting (SurfaceLighting etc.) many of the scenes became dark in the shapenet_with_scenenet example:


    Here is the config file I used:

    # Args: <obj_file> <texture_file> <path_to_shape-net-core> <output_dir>
    {
      "version": 3,
      "setup": {
        "blender_install_path": "/home_local/<env:USER>/blender/",
        "pip": [
          "h5py", "pandas"
        ]
      },
      "modules": [
        {
          "module": "main.Initializer",
          "config":{
            "global": {
              # "output_dir": "<args:3>"
              "output_dir": "examples/shapenet_with_scenenet/<args:2>"
            }
          }
        },
        {
          "module": "loader.SceneNetLoader",
          "config": {
            # after downloading the scenenet dataset it should be located inside of resources/scenenet/SceneNetData/
            # "file_path": "<args:0>",
            "file_path": "resources/scenenet/SceneNetData/<args:0>",
            # "texture_folder": "<args:1>",
            "texture_folder": "resources/texture_library",
            "add_properties": {
              "cp_physics": False
            }
          }
        },
        {
          "module": "object.FloorExtractor",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "cp_category_id": 1
              }
        }, # this category id you need to know beforehand
            "compare_angle_degrees" : 7.5,
            "compare_height": 0.15,
            "name_for_split_obj": "floor",  # this is the new name of the object
            "add_properties": {
              "cp_category_id": 2
            },
            "should_skip_if_object_is_already_there": True
          }
        },
        {
          "module": "object.FloorExtractor",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "cp_category_id": 1
              }
            },
            "should_skip_if_object_is_already_there": True,
            "up_vector_upwards": False,  # the polygons are now facing downwards: [0, 0, -1]
            "compare_angle_degrees": 7.5,
            "compare_height": 0.15,
            "add_properties": {
              # "cp_category_id": 22
              "cp_category_id": 2
            },
            "name_for_split_obj": "ceiling"
          }
        },
        {
          "module": "lighting.SurfaceLighting",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "name": ".*[l|L]amp.*"
              }
            },
            "emission_strength": 15.0,
            # "emission_strength": 2.0,
            "keep_using_base_color": True
          }
        },
        {
          "module": "lighting.SurfaceLighting",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "name": ".*[c|C]eiling.*"
              }
            },
            "emission_strength": 2.0,
            # "keep_using_base_color": True
          }
        },
        {
          "module": "loader.ShapeNetLoader",
          "config": {
            # "data_path": "<args:2>",
            "data_path": "/data/guyga/datasets/ShapeNetCore.v2",
            
            # "used_synset_id": "02801938",
            "used_synset_id": "<args:1>",
            "add_properties": {
              "cp_shape_net_object": True,
              # set the custom property physics to True
              "cp_physics": True
            }
          }
        },
        {
          "module": "manipulators.EntityManipulator",
          "config": {
            # get all shape net objects, as we have only loaded one this returns only one entity
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "cp_shape_net_object": True,
                "type": "MESH"
              }
            },
            # Sets the location of this entity above a bed
            "location": {
              "provider": "sampler.UpperRegionSampler",
              "min_height": 0.3,
              "use_ray_trace_check": True,
              "to_sample_on": {
                  "provider": "getter.Entity",
                  "conditions": {
                    # "cp_category_id": 4, # 4 is the category of the bed
                    "cp_category_id": 2, # Floor
                    "type": "MESH"
                  }
    
              }
            },
            # by adding a modifier we avoid that the objects falls through other objects during the physics simulation
            "cf_add_modifier": {
              "name": "Solidify",
              "thickness": 0.0025
            }
          }
        },
        {
          "module": "object.PhysicsPositioning",
          "config": {
            "solver_iters": 30,
            "substeps_per_frame": 40,
            "min_simulation_time": 0.5,
            "max_simulation_time": 4,
            "check_object_interval": 0.25,
            "mass_scaling": True,
            "mass_factor": 2000,
            "collision_margin": 0.00001,
            "collision_shape": "MESH"
            # "collision_shape": "CONVEX_HULL"
          }
        },
        {
          "module": "camera.CameraSampler",
          "config": {
            "cam_poses": [
            {
              # "number_of_samples": 5,
              "number_of_samples": 4,
              # "number_of_samples": 3,
              "proximity_checks": {
                "min": 0.5
              },
              "check_if_objects_visible": {
                "provider": "getter.Entity",
                "conditions": {
                  "cp_shape_net_object": True,
                  "type": "MESH"
                }
              },
              "location": {
                "provider":"sampler.PartSphere",
                "center": {
                  "provider": "getter.POI",
                  "selector": {
                    "provider": "getter.Entity",
                    "conditions": {
                      "cp_shape_net_object": True,
                      "type": "MESH"
                    }
                  }
                },
                "distance_above_center": 0.5,
                "radius": 2,
                "mode": "SURFACE"
              },
              "rotation": {
                "format": "look_at",
                "value": {
                  "provider": "getter.POI",
                  "selector": {
                    "provider": "getter.Entity",
                    "conditions": {
                      "cp_shape_net_object": True,
                      "type": "MESH"
                    }
                  }
                }
              }
            }
            ]
          }
        },
        # {
        #   "module": "camera.CameraSampler",
        #   "config": {
        #     "cam_poses": [
        #     {
        #       "min_interest_score": 0.4,
        #       # "number_of_samples": 5,
        #       "number_of_samples": 3,
        #       "location": {
        #         "provider":"sampler.PartSphere",
        #         "center": {
        #           "provider": "getter.POI",
        #           "selector": {
        #             "provider": "getter.Entity",
        #             "conditions": {
        #               "cp_shape_net_object": True,
        #               "type": "MESH"
        #             }
        #           }
        #         },
        #         "distance_above_center": 0.5,
        #         "radius": 2,
        #         "mode": "SURFACE"
        #       },
        #       "rotation": {
        #         "format": "look_at",
        #         "value": {
        #           "provider": "getter.POI",
        #           "selector": {
        #             "provider": "getter.Entity",
        #             "conditions": {
        #               "cp_shape_net_object": True,
        #               "type": "MESH"
        #             }
        #           }
        #         }
        #       }
        #     }
        #     ]
        #   }
        # },
        {
          "module": "renderer.RgbRenderer",
          "config": {
            "output_is_temp": False,
            "output_key": "colors",
            "samples": 150,
            # "render_distance": True,
            # "render_normals": True,
            "image_type": "JPEG",
          }
        },
        {
          "module": "renderer.SegMapRenderer",
          "config": {
            "map_by": ["instance", "class"]
          }
        },
        {
          "module": "writer.CocoAnnotationsWriter",
          "config": {
            # "supercategory": "chairs_and_tables",
            # "append_to_existing_output": True
          }
        }
        # {
        #   "module": "writer.Hdf5Writer",
        #   "config": {
        #     "postprocessing_modules": {
        #       "distance": [
        #       {
        #         "module": "postprocessing.TrimRedundantChannels",
        #       }
        #       ]
        #     },
        #     "append_to_existing_output": True
        #   }
        # }
      ]
    }
    
    bug wontfix 
    opened by ggaziv 20
  • How to set output image size and camera intrinsic by myself?


    Hi, I am new to blenderproc and find it hard to get it "under control". I want to use blenderproc to create a synthetic object-pose dataset, but it's hard for me to find the exact arguments for image size and intrinsics. Is there a way to render a pose with simply:

    1. camera intrinsic
    2. object pose
    3. mesh model, e.g. *.obj or *.ply
    4. image size

    An example is bop_renderer, where only these 4 arguments are needed and everything is really easily "programmable".
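    As a side note, these four quantities are not independent: for a pinhole camera, the intrinsics and the image size are linked through the field of view. A small sketch of that relation in plain numpy (the function name is mine; square pixels and a centered principal point are assumed):

```python
import numpy as np

def k_from_fov(fov_x: float, width: int, height: int) -> np.ndarray:
    """Build a pinhole K matrix from a horizontal FOV (in radians) and an
    image size, assuming square pixels and a centered principal point."""
    fx = width / (2.0 * np.tan(fov_x / 2.0))
    return np.array([[fx, 0.0, width / 2.0],
                     [0.0, fx, height / 2.0],
                     [0.0, 0.0, 1.0]])
```

    In BlenderProc2, if I remember correctly, bproc.camera.set_intrinsics_from_K_matrix(K, width, height) covers points 1 and 4 in a single call.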

    question 
    opened by greatwallet 18
  • A new version of the BOP writer


    The BOP writer now stores the full dataset (including images) in the BOP format.

    The writer assumes that self._find_registered_output_by_key("depth") provides depth images (i.e. the Z coordinate is saved at each pixel). However, currently, this provides distance images (i.e. the distance from the optical center is saved at each pixel). I assume this is a bug that will be fixed in the next release.

    opened by thodan 18
  • [BUG]: enable_rigidbody() for convex decomposition takes extremely long


    Describe the bug

    When using MeshObjectUtility.build_convex_decomposition_collision_shape() the runtime for any simulation increases exponentially.

    General Information

    1. 2.3.0, updated 27/07/22 - branch: https://github.com/DLR-RM/BlenderProc/tree/test_decompose_with_caching

    2. On which operating system are you? Linux

    3. Have you checked the issue tracker to see if a similar issue has been opened? Yes

    4. Have you changed BlenderProc in any way besides the config file? If yes, are you sure that this change does not affect the problem you are having? Added custom modules to Blenderproc, didn't change anything in blenderproc itself.

    To Reproduce Steps to reproduce the behavior:

    1. Provide the full command you used to run BlenderProc:

    blenderproc run script.py

    2. Provide the full Python file you used:
    import blenderproc as bproc
    import time

    start = time.time()
    obj_list = []
    cached_objects = {}

    # generate objects
    for i in range(50):
        obj = bproc.loader.load_obj("MUG.obj", cached_objects)
        obj_list.append(obj[0])

    # do convex decomposition for the objects
    for i in range(50):
        obj_list[i].build_convex_decomposition_collision_shape("convex_decomposition", cached_objects=cached_objects)

    print(time.time() - start)
    
    
    
    3. Provide a link to all 3D models you used; if they are from one of the publicly available supported datasets, provide the name or path so that it is possible to reproduce the error. Happens for all .obj files (MUG.obj is attached).

    Expected behavior: Using enable_rigidbody() shouldn't increase the runtime of the script by 30 times. (See Additional context.)

    Additional context: While trying to figure out what slows my simulation down from about 30 sec (without convex decomposition) to 1000 sec (with convex decomposition), I found out that it is the line part.enable_rigidbody in MeshObjectUtility.build_convex_decomposition_collision_shape(). Unexpectedly, this mostly comes from line 241 in MeshObjectUtility.py, which calls:

    part.enable_rigidbody(True,"CONVEX_HULL")

    If I comment this line out, my script runtime goes from 1000 sec to 30 sec. But then the convex shapes obviously become useless... ;-)
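    The branch name test_decompose_with_caching suggests caching is the intended mitigation. As a generic illustration of the idea (not BlenderProc code): if all 50 objects share the same mesh, the expensive decomposition only has to run once per distinct mesh:

```python
import functools

# Stand-in for an expensive per-mesh computation such as convex decomposition.
# lru_cache makes repeated calls with the same mesh id return the cached result.
@functools.lru_cache(maxsize=None)
def decompose(mesh_id: str) -> tuple:
    return (mesh_id, "convex_parts")  # hypothetical decomposition result

# 50 identical mugs -> the decomposition itself runs exactly once
parts = [decompose("MUG") for _ in range(50)]
```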

    MUG_DATA.zip

    bug 
    opened by Sodek93 17
  • [BUG]: No module named 'numpy.core._multiarray_umath' when trying to import numpy


    Describe the bug: I get "No module named 'numpy.core._multiarray_umath'" when an add-on (Scatter 5.3.1) tries to import numpy.

    General Information

    1. Which BlenderProc version are you using? 2.5.0

    2. On which operating system are you? Ubuntu 20.04

    3. Have you checked the issue tracker to see if a similar issue has been opened? Yes; similar issue #722 was due to an add-on, but removing the add-on would break the data-gen code.

    4. How do you know BlenderProc is the issue and not the add-on? Separately installed blender 3.3.0 (version required for bproc 2.5.0) and was able to successfully install and use the add-on from the GUI.

    5. Have you changed BlenderProc in any way besides the config file? If yes, are you sure that this change does not affect the problem you are having? No.

    To Reproduce Steps to reproduce the behavior: Inside the virtual env

    1. Run
        pip uninstall -y blenderproc
        pip install --upgrade blenderproc
        blenderproc debug scatter_scene.py
    
    2. Before pressing 'Run Blenderproc', go to Edit -> Preferences -> Add-ons and install Scatter. (Fails in the GUI and in the CLI.)

    3. Provide the full Python file you used:

    import blenderproc as bproc
    import bpy
    from bpy.ops import scatter5
    from blenderproc.python.utility.Utility import Utility
    
    import os
    import random
    import glob
    from PIL import Image
    import matplotlib.pyplot as plt
    
    import numpy as np
    
    def main():
        bproc.init()

        add_lights(bproc)

        bpy.ops.mesh.primitive_cylinder_add()
        cylinder = bproc.object.create_primitive('CYLINDER')
        cylinder.scale = (0.1, 0.1, 0.1)
        plane = bproc.object.create_primitive('PLANE')
        density_scatter_on_plane(plane, cylinder, 1)

    def density_scatter_on_plane(emitter_obj, objs_to_scatter, density_value=100., density_scale='m', psy_name="DensityScatter"):
        bpy.context.scene.scatter5.emitter = bpy.data.objects[emitter_obj.get_name()]
        bpy.context.scene.scatter5.factory_event_listening_allow = False

        instance_name = objs_to_scatter.get_name()
        # instance_names = objs_to_scatter[0].get_name()
        # for obj in objs_to_scatter[1:]:
        #     instance_names += '_!#!_' + obj.get_name()
        scatter5.add_psy_density(emitter_name=emitter_obj.get_name(), instances_names=instance_name,
                                 density_scale=density_scale, density_value=density_value,
                                 selection_mode='viewport', psy_name=psy_name)
    
    
    1. Provide a link to all 3D models you used, if they are from one of the publicly available supported datasets, provide the name or path so that it is possible to reproduce the error.

    Expected behavior Add-on should be ready to use

    Additional context: The add-on was working on BlenderProc 2.3.0 with a few tweaks, but it doesn't work with the latest BlenderProc after the update. I'm able to import numpy in a Python 3.10 venv without errors.

    bug 
    opened by sreddy-es 0
  • Get env:USER in a more reliable way


    In some environments, like Docker, $USER might not be set: https://stackoverflow.com/questions/54411218/docker-why-isnt-user-environment-variable-set

    getpass.getuser checks the environment variables LOGNAME, USER, LNAME and USERNAME, in order, and returns the value of the first one which is set to a non-empty string. If none are set, the login name from the password database is returned on systems which support the pwd module, otherwise, an exception is raised.
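    A sketch of what that could look like (the function and fallback names are my own):

```python
import getpass
import os

def resolve_user(default: str = "default") -> str:
    """Return the current user even when $USER is unset (e.g. in Docker)."""
    try:
        # getpass checks LOGNAME, USER, LNAME and USERNAME, then falls back
        # to the password database where available
        return getpass.getuser()
    except Exception:
        return os.environ.get("USERNAME", default)
```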

    opened by YouJiacheng 3
  • [BUG]: 3D plane projected to render image gives the wrong result


    Hey! I want to generate a dataset with homography matrices, so I create a plane, put the camera in a proper position, render the image and save the relevant data. However, when I try to project 3D points into the image using that data, I get the wrong result.

    The details are shown below: save relevant data code: gt["K"] = camera.get_intrinsics_as_K_matrix() // K matrix gt['projection_matrix'] = get_projection_matrix() // projection matrix gt["nums"] = length gt["v"] = v // plane vertices gt["frame"] = [] for i in range(length): with KeyFrame(i): frame_meta_info = {"plane_local2world":plane.get_local2world_mat(),"camera_local2world":np.array(Entity(bpy.context.scene.camera).get_local2world_mat()),"frame":i} // plane local2world matrix and camera local2world matrix gt["frame"].append(frame_meta_info.copy())

    Projection code:

        v_plane_world = plane_local2world_matrix @ v_plane_local.T
        world2camera_matrix = np.linalg.inv(cam_local2world_matrix)
        v_plane_camera = world2camera_matrix @ v_plane_world
        v_plane_camera /= v_plane_camera[3]
        p_v = projection_matrix @ v_plane_camera
        p_v /= p_v[3]

        uv = np.zeros((2, 4))
        uv[0] = (p_v[0] + 1) * K[0, 2]
        uv[1] = (p_v[1] + 1) * K[1, 2]
        print(f"uv {uv}")
        return uv.T
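    As a sanity check, the conventional pinhole projection skips the OpenGL projection matrix entirely: divide the camera-space point by its depth and apply K. Note that (p_v[0] + 1) * K[0, 2] only recovers pixel coordinates when the principal point sits exactly at the image center, and that Blender's camera looks down -Z, so camera-space points typically need their y and z axes flipped before an OpenCV-style K applies. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical intrinsics (principal point at the image center) and one
# camera-space point in OpenCV convention: x right, y down, z forward.
K = np.array([[500.0,   0.0, 200.0],
              [  0.0, 500.0, 150.0],
              [  0.0,   0.0,   1.0]])
point_cam = np.array([0.1, -0.2, 2.0])

# Perspective divide by depth, then apply K to get pixel coordinates.
uv_h = K @ (point_cam / point_cam[2])
u, v = uv_h[0], uv_h[1]
print(u, v)  # 225.0 100.0
```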
    

    Projected result images are attached to the issue.

    Looking forward to your reply! Kind regards!

    question bug? 
    opened by fh1999 8
  • [FEATURE]:  parameter to adjust brightness strength in `bproc.world.set_world_background_hdr_img`

    Is your feature request related to a problem? Please describe. After setting the HDRI environment, there is no API to adjust the brightness strength. Usually one could use bpy.data.worlds["$HDRINAME"].node_tree.nodes["Background"].inputs[1].default_value to adjust the brightness, but bproc.world.set_world_background_hdr_img creates a new node linked to the output node, which differs from the way Blender sets the environment. So it is confusing to figure out which node to edit in order to adjust the brightness.

    Describe the solution you'd like Provide a parameter in set_world_background_hdr_img to set the brightness strength.
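    Until such a parameter exists, one possible workaround is to locate the Background node and set its Strength input directly. This is a hedged sketch: set_background_strength is a hypothetical helper, and it assumes the world node tree contains a node of type 'BACKGROUND' (inspect the tree in Blender's shader editor to confirm):

```python
def set_background_strength(node_tree, strength):
    """Set the Strength input (input index 1) on every Background node found."""
    hits = 0
    for node in node_tree.nodes:
        if node.type == 'BACKGROUND':
            node.inputs[1].default_value = strength
            hits += 1
    return hits

# Inside Blender, you would call it on the active world's node tree:
#   import bpy
#   set_background_strength(bpy.context.scene.world.node_tree, 0.5)
```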

    enhancement 
    opened by saprrow 0
  • Show warning on mac when custom pip package path is not writable

    See https://github.com/DLR-RM/BlenderProc/pull/767#issuecomment-1332331345

    On macOS, you have to grant your terminal permission to perform updates in the system settings. Otherwise it cannot execute the pip installs. That is interesting.

    Maybe we should add a warning if:

    PermissionError: [Errno 1] Operation not permitted: '/Users/max/blender/blender-3.3.0-macos-x64/Blender.app/Contents/Resources/custom-python-packages'
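    The proposed warning could be sketched like this (warn_if_not_writable is a hypothetical helper, not existing BlenderProc API):

```python
import os
import tempfile
import warnings

def warn_if_not_writable(path):
    """Warn when the custom pip package path cannot be written to."""
    if not os.access(path, os.W_OK):
        warnings.warn(
            f"Custom pip package path is not writable: {path}. On macOS, "
            "grant your terminal permission in the system settings."
        )
        return False
    return True

writable = warn_if_not_writable(tempfile.gettempdir())
print(writable)  # True on a normal system
```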

    enhancement 
    opened by cornerfarmer 0
  • [Question]: How to use materials from fake users

    Hey,

    I have a .blend file which contains a few objects and a bunch of materials I want to choose from randomly. I assigned the materials a fake user. Unfortunately, when I use bproc.material.collect_all(), an empty list is returned, i.e. the materials are not found. How do I fix or circumvent this issue?

    For additional context: I would like to randomly select one material for each run and then apply the same material to all objects in the scene.

    import random

    import blenderproc as bproc

    runs = 5
    bproc.init()
    objs = bproc.loader.load_blend(scene_path)
    for j, obj in enumerate(objs):
        obj.set_cp("category_id", j + 1)

    materials = bproc.material.collect_all()

    light = bproc.types.Light()
    light.set_type("AREA")

    bproc.camera.set_resolution(400, 400)

    for r in range(runs):
        bproc.utility.reset_keyframes()
        bproc.world.set_world_background_hdr_img(random.choice(hdri_list))
        rand_mat = random.choice(materials)
        for obj in objs:
            for i in range(len(obj.get_materials())):
                obj.set_material(i, rand_mat)
    
    question first answer provided 
    opened by melba404 1
Owner
DLR-RM
German Aerospace Center (DLR) - Institute of Robotics and Mechatronics (RM) - open source projects