Code release for Local Light Field Fusion at SIGGRAPH 2019

Overview





Local Light Field Fusion

Project | Video | Paper

Tensorflow implementation for novel view synthesis from sparse input images.

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines
Ben Mildenhall*1, Pratul Srinivasan*1, Rodrigo Ortiz-Cayon2, Nima Khademi Kalantari3, Ravi Ramamoorthi4, Ren Ng1, Abhishek Kar2
1UC Berkeley, 2Fyusion Inc, 3Texas A&M, 4UC San Diego
*denotes equal contribution
In SIGGRAPH 2019

Installation TL;DR: Setup and render a demo scene

First install docker (instructions) and nvidia-docker (instructions).

Run this in the base directory to download a pretrained checkpoint, download a Docker image, and run code to generate MPIs and a rendered output video on an example input dataset:

bash download_data.sh
sudo docker pull bmild/tf_colmap
sudo docker tag bmild/tf_colmap tf_colmap
sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap bash demo.sh

A rendered video should be written to data/testscene/outputs/test_vid.mp4.

If this works, then you are ready to start processing your own images! Run

sudo nvidia-docker run -it --rm --volume /:/host --workdir /host$PWD tf_colmap

to enter a shell inside the Docker container, and skip ahead to the section on using your own input images for view synthesis.

Full Installation Details

You can either install the prerequisites by hand or use our provided Dockerfile to build a Docker image.

In either case, start by downloading this repository, then running the download_data.sh script to download a pretrained model and example input dataset:

bash download_data.sh

After installing dependencies, try running bash demo.sh from the base directory. (If using Docker, run this inside the container.) This should generate the video shown in the Installation TL;DR section at data/testscene/outputs/test_vid.mp4.

Manual installation

  • Install CUDA, Tensorflow, COLMAP, ffmpeg
  • Install the required Python packages:
pip install -r requirements.txt
  • Optional: run make in cuda_renderer/ directory.
  • Optional: run make in opengl_viewer/ directory. You may need to install GLFW or some other OpenGL libraries. For GLFW:
sudo apt-get install libglfw3-dev

Docker installation

To build the Docker image on your own machine (this may take 15-30 minutes):

sudo docker build -t tf_colmap:latest .

To download the image (~6GB) instead:

sudo docker pull bmild/tf_colmap
sudo docker tag bmild/tf_colmap tf_colmap

Afterwards, you can launch an interactive shell inside the container:

sudo nvidia-docker run -it --rm --volume /:/host --workdir /host$PWD tf_colmap

From this shell, all the code in the repo should work (except opengl_viewer).

To run any single command <command...> inside the docker container:

sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap <command...>

Using your own input images for view synthesis

Our method takes in a set of images of a static scene, promotes each image to a local layered representation (MPI), and blends local light fields rendered from these MPIs to render novel views. Please see our paper for more details.
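At render time, each MPI is turned into an image by standard back-to-front alpha ("over") compositing of its RGBA planes. The following is a minimal NumPy sketch of that compositing step only; it is illustrative and not the repository's actual renderer (the helper name composite_mpi is ours):

```python
import numpy as np

def composite_mpi(rgb, alpha):
    """Back-to-front "over" compositing of an MPI's planes.

    rgb:   (D, H, W, 3) color for each of D depth planes, ordered back to front
    alpha: (D, H, W, 1) opacity for each plane
    Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(rgb.shape[1:])
    for d in range(rgb.shape[0]):
        # each nearer plane covers what is behind it by its opacity
        out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])
    return out

# A fully opaque front plane completely hides the planes behind it:
rgb = np.zeros((2, 4, 4, 3))
rgb[1] = 1.0                    # front plane is white
alpha = np.ones((2, 4, 4, 1))   # both planes fully opaque
img = composite_mpi(rgb, alpha)
```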

As a rule of thumb, you should use images where the maximum disparity between views is no more than about 64 pixels (watch the closest thing to the camera and don't let it move more than ~1/8 the horizontal field of view between images). Our datasets usually consist of 20-30 images captured handheld in a rough grid pattern.
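As a quick sanity check for a capture, the rule of thumb above can be written as a one-line predicate (a hypothetical helper, not part of this codebase):

```python
def within_disparity_budget(closest_shift_px, image_width_px):
    """Capture rule of thumb: the nearest object in the scene should move
    no more than ~1/8 of the image width between adjacent views."""
    return closest_shift_px <= image_width_px / 8.0
```

For example, with 512-pixel-wide images the nearest object may shift at most 64 pixels between neighboring captures.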

Quickstart: rendering a video from a zip file of your images

You can quickly render novel view frames and a .mp4 video from a zip file of your captured input images with the zip2mpis.sh bash script.

bash zip2mpis.sh <zipfile> <your_outdir> [--height HEIGHT]

HEIGHT is the output height in pixels; we recommend a height of 360 pixels for generating results quickly.

General step-by-step usage

Begin by creating a base scene directory (e.g., scenedir/), and copying your images into a subdirectory called images/ (e.g., scenedir/images).

1. Recover camera poses

This script calls COLMAP to run structure from motion to get 6-DoF camera poses and near/far depth bounds for the scene.

python imgs2poses.py <your_scenedir>

2. Generate MPIs

This script uses our pretrained Tensorflow graph (make sure it exists in checkpoints/papermodel) to generate MPIs from the posed images. They will be saved in <your_mpidir>, a directory that will be created by the script.

python imgs2mpis.py <your_scenedir> <your_mpidir> \
    [--checkpoint CHECKPOINT] \
    [--factor FACTOR] [--width WIDTH] [--height HEIGHT] [--numplanes NUMPLANES] \
    [--disps] [--psvs] 

You should set at most one of factor, width, or height to determine the output MPI resolution: factor downsamples the input images by an integer factor (e.g., 2, 4, 8), while height/width rescale the input images to the specified height or width. numplanes is 32 by default. checkpoint defaults to the downloaded pretrained checkpoint.

Example usage:

python imgs2mpis.py scenedir scenedir/mpis --height 360
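The intended interaction of the resolution flags can be sketched as follows (an illustrative helper of ours, not the script's actual code):

```python
def output_resolution(in_w, in_h, factor=None, width=None, height=None):
    """Resolve the output MPI resolution from at most one of the three
    mutually exclusive flags, preserving the input aspect ratio."""
    assert sum(x is not None for x in (factor, width, height)) <= 1
    if factor is not None:
        return in_w // factor, in_h // factor        # integer downsample
    if width is not None:
        return width, round(in_h * width / in_w)     # scale to target width
    if height is not None:
        return round(in_w * height / in_h), height   # scale to target height
    return in_w, in_h                                # no flag: full resolution
```

For instance, a 4032x3024 input with --height 360 yields 480x360 MPIs.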

3. Render novel views

You can either generate a list of novel view camera poses and render out a video, or you can load the saved MPIs in our interactive OpenGL viewer.

Generate poses for new view path

First, generate a smooth new view path by calling

python imgs2renderpath.py <your_scenedir> <your_posefile> \
	[--x_axis] [--y_axis] [--z_axis] [--circle] [--spiral]

<your_posefile> is the path of an output .txt file that will be created by the script, and will contain camera poses for the rendered novel views. The five optional arguments specify the trajectory of the camera. The xyz-axis options are straight lines along each camera axis respectively, "circle" is a circle in the camera plane, and "spiral" is a circle combined with movement along the z-axis.

Example usage:

python imgs2renderpath.py scenedir scenedir/spiral_path.txt --spiral

See llff/math/pose_math.py for the code that generates these path trajectories.
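For intuition, the circle and spiral options amount to sampling camera centers along trajectories like these. This is a simplified sketch of the idea only; the actual code in llff/math/pose_math.py also constructs full camera poses and accounts for scene scale:

```python
import numpy as np

def circle_path(n_frames=60, radius=0.1):
    """Camera centers on a circle in the camera (xy) plane, as in --circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False)
    return np.stack([radius * np.cos(t),
                     radius * np.sin(t),
                     np.zeros_like(t)], axis=-1)

def spiral_path(n_frames=60, radius=0.1, z_amp=0.05):
    """The circle combined with sinusoidal motion along z, as in --spiral."""
    pts = circle_path(n_frames, radius)
    t = np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False)
    pts[:, 2] = z_amp * np.sin(t)
    return pts
```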

Render video with CUDA

Build the renderer by running make in the cuda_renderer/ directory.

The renderer uses CUDA to render out a video. Specify the height of the output video in pixels (-1 for the same resolution as the MPIs), the factor for cropping the edges of the video (default is 1.0 for no cropping), and the compression quality (crf) for the saved MP4 file (default is 18; 0 is lossless, 12-28 is reasonable).

./cuda_renderer mpidir <your_posefile> <your_videofile> height crop crf

<your_videofile> is the path to the video file that will be written by ffmpeg.

Example usage:

./cuda_renderer scenedir/mpis scenedir/spiral_path.txt scenedir/spiral_render.mp4 -1 0.8 18

Render video with Tensorflow

Use Tensorflow to render out a video (~100x slower than CUDA renderer). Optionally, specify how many MPIs are blended for each rendered output (default is 5) and what factor to crop the edges of the video (default is 1.0 for no cropping).

python mpis2video.py <your_mpidir> <your_posefile> videofile [--use_N USE_N] [--crop_factor CROP_FACTOR]

Example usage:

python mpis2video.py scenedir/mpis scenedir/spiral_path.txt scenedir/spiral_render.mp4 --crop_factor 0.8

Interactive OpenGL viewer

Controls:

  • ESC to quit
  • Move mouse to translate in camera plane
  • Click and drag to rotate camera
  • Scroll to change focal length (zoom)
  • 'L' to animate circle render path

The OpenGL viewer cannot be used in the Docker container.

You need OpenGL installed, particularly GLFW:

sudo apt-get install libglfw3-dev

You can build the viewer in the opengl_viewer/ directory by calling make.

General usage (in opengl_viewer/ directory) is

./opengl_viewer mpidir

Using your own poses without running COLMAP

Here we explain the poses_bounds.npy file format. This file stores a numpy array of size Nx17, where N is the number of input images. Each row of length 17 gets reshaped into a 3x5 pose matrix and 2 depth values that bound the closest and farthest scene content from that point of view.

The pose matrix is a 3x4 camera-to-world affine transform concatenated with a 3x1 column [image height, image width, focal length] to represent the intrinsics (we assume the principal point is centered and that the focal length is the same for both x and y).

The right-handed coordinate system of the rotation (the first 3x3 block of the camera-to-world transform) is as follows: from the point of view of the camera, the three axes are [down, right, backwards], which some people might write as [-y, x, z], where the camera is looking along -z. (The more conventional frame [x, y, z] is [right, up, backwards]. The COLMAP frame is [right, down, forwards], i.e. [x, -y, -z].)
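Concretely, converting a COLMAP camera-to-world rotation into this convention is just a column permutation plus a sign flip. A minimal sketch of that frame change (the helper name is ours):

```python
import numpy as np

def colmap_to_llff_rotation(R_colmap):
    """Permute a camera-to-world rotation from COLMAP's [right, down, forwards]
    column convention to the [down, right, backwards] convention used here."""
    right, down, forwards = R_colmap[:, 0], R_colmap[:, 1], R_colmap[:, 2]
    # new columns: [down, right, backwards] (backwards = -forwards)
    return np.stack([down, right, -forwards], axis=1)
```

Note that swapping two columns and negating the third preserves the determinant, so the result is still a valid rotation.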

If you have a set of 3x4 cam-to-world poses for your images plus focal lengths and close/far depth bounds, the steps to recreate poses_bounds.npy are:

  1. Make sure your poses are in camera-to-world format, not world-to-camera.
  2. Make sure your rotation matrices have the columns in the correct coordinate frame [down, right, backwards].
  3. Concatenate each pose with the [height, width, focal] intrinsics vector to get a 3x5 matrix.
  4. Flatten each of those into 15 elements and concatenate the close and far depths.
  5. Stack the 17-d vectors to get an Nx17 matrix and use np.save to store it as poses_bounds.npy in the scene's base directory (the same directory that contains the images/ subdirectory).

This should explain the pose processing after COLMAP.
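The five steps above can be sketched as follows (a hypothetical helper of ours, assuming your poses are already camera-to-world and in the [down, right, backwards] frame):

```python
import numpy as np

def make_poses_bounds(c2w_mats, hwf_list, bounds):
    """Assemble the Nx17 poses_bounds.npy array.

    c2w_mats: N camera-to-world poses, each a (3, 4) array
    hwf_list: N (height, width, focal) intrinsics tuples
    bounds:   N (near, far) depth pairs
    """
    rows = []
    for c2w, hwf, (near, far) in zip(c2w_mats, hwf_list, bounds):
        m = np.concatenate([c2w, np.array(hwf, float).reshape(3, 1)], axis=1)  # 3x5
        rows.append(np.concatenate([m.ravel(), [near, far]]))                  # 17
    return np.stack(rows)  # (N, 17)

# np.save('scenedir/poses_bounds.npy', make_poses_bounds(poses, hwfs, depth_bounds))
```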

Troubleshooting

  • PyramidCU::GenerateFeatureList: an illegal memory access was encountered: Some machine configurations run into this error when running imgs2poses.py. Setting the environment variable CUDA_VISIBLE_DEVICES often fixes it. If the issue persists, disable GPU feature extraction in the COLMAP call inside imgs2poses.py so features are extracted on the CPU.
  • Black screen: In recent versions of macOS, OpenGL initializes a context with a black screen until the window is dragged or resized. If you run into this problem, drag the window to another position.
  • COLMAP fails: If you see "Could not register, trying another image", you will probably have to try changing COLMAP optimization parameters or capturing more images of your scene.

Citation

If you find this useful for your research, please cite the following paper.

@article{mildenhall2019llff,
  title={Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines},
  author={Ben Mildenhall and Pratul P. Srinivasan and Rodrigo Ortiz-Cayon and Nima Khademi Kalantari and Ravi Ramamoorthi and Ren Ng and Abhishek Kar},
  journal={ACM Transactions on Graphics (TOG)},
  year={2019},
}
Issues
  • Test with known camera pose


Hi, thank you very much for your excellent work and your open source code! I have followed your tutorial and got really amazing synthesis results. However, when I test with some other light field data, it seems that COLMAP can't work correctly and some errors occur. To avoid the problems caused by COLMAP, I want to skip the imgs2poses step and feed camera poses directly to the following step; is there any way to do so? (I found that in your code you apply some processing, like transposes, to the estimated poses, but there are not many comments explaining these steps; could you please give some explanation of the camera pose processing after imgs2poses?) As for the other test data in your paper, I'm very interested in their output, but I didn't find a download link; are these data available to the public? Thank you very much for your attention. Yours sincerely, Jinglei

    opened by JingleiSHI 12
  • Error using nvidia-docker


Hello. I am trying to get the results as displayed, but when I execute sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap bash demo.sh, Docker returns an error. Below is the error that I have encountered: docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure [email protected]/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=7506 /var/lib/docker/overlay2/74b368071c67140593255d9461eb525598dbbca0ab382047da530356351746c6/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.

    opened by danperazzo 11
  • The generation of MPI


    Dear authors,

    Many thanks for your excellent work and open-source code!

    However I came up with some problems when I used my own dataset and tried to generate the MPI images.

    1. I did get 32 MPI images, but as far as I know, an MPI contains 32 RGB layers and 32 alpha layers (I am not sure whether I am right). Are these 32 MPI images generated by multiplying the corresponding RGB and alpha layers?
    2. I also noticed that when inference runs, the closest indices of neighbors are printed out, like [0, 6, 10, 4, 13, 0]. I suppose the first 0 is the reference image and [0, 6, 10, 4, 13] are its closest neighbors. What does the last zero mean? Does it mean 0 is the index of the target image?
    opened by mwsunshine 8
  • Bundle adjustment Not converged


    Hi authors, many thanks for your impressive work and open-source code!

    I have followed your instructions and got some amazing output videos. However, for some other input image test cases, I got errors.

    When running imgs2poses.py, the bundle adjustment report in the file colmap_output.txt says "No convergence", and none of the other images could be registered either.

    I have 24 input images (20 images in the other test case), and there should be plenty of features within the images.

    the colmap_output.txt is like this:

    ============================================================================== Feature extraction

    Processed 24 files (01.jpg-24.jpg), each 1440 x 1080, Camera: #1 - SIMPLE_RADIAL, Focal Length: 1728.00px, with between 5609 and 9363 features per image. Elapsed time: 0.041 [minutes]

    ============================================================================== Exhaustive feature matching

    Matching block [1/1, 1/1] in 6.299s Elapsed time: 0.109 [minutes]

    ============================================================================== Loading database

    Loading cameras... 1 in 0.000s Loading matches... 276 in 0.020s Loading images... 24 in 0.027s (connected 24) Building correspondence graph... in 0.108s (ignored 0)

    Elapsed time: 0.003 [minutes]

    ============================================================================== Initializing with image pair #10 and #5

    ============================================================================== Global bundle adjustment

    iter  cost          cost_change  |gradient|  |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
    0     3.135909e+03  0.00e+00     1.41e+05    0.00e+00  0.00e+00  1.00e+04   0        1.75e-03   6.96e-03
    [... iterations 1-99 omitted ...]
    100   5.159661e+02  3.16e-01     7.16e+02    1.31e+01  3.86e-01  1.91e+05   1        3.44e-03   3.92e-01

    Bundle adjustment report

    Residuals : 4080
    Parameters : 3067
    Iterations : 101
    Time : 0.392774 [s]
    Initial cost : 0.876701 [px]
    Final cost : 0.355615 [px]
    Termination : No convergence

    => Filtered observations: 123 => Filtered images: 0

    ============================================================================== Registering images

    Every remaining image failed with "=> Could not register, trying another image." Points seen per image:

    #4: 702/4523, #9: 694/4947, #8: 654/4674, #3: 646/4530, #6: 673/4529, #11: 644/4900, #7: 616/4430, #16: 559/4688, #14: 561/4622, #2: 522/4383, #17: 493/4716, #18: 453/4233, #1: 420/3982, #12: 429/3419, #15: 430/3892, #21: 373/3995, #13: 366/4086, #20: 347/3149, #19: 316/3682, #22: 289/3132, #23: 213/3458, #24: 157/3211

    ============================================================================== Retriangulation

    => Merged observations: 0 => Completed observations: 0 => Retriangulated observations: 0

    ============================================================================== Global bundle adjustment

    iter      cost      cost_change  |gradient|   |step|   tr_ratio  tr_radius  ls_iter  iter_time  total_time
       0  4.660303e+02    0.00e+00    6.94e+02   0.00e+00  0.00e+00   1.00e+04     0     1.29e-03    6.86e-03
       1  4.542867e+02    1.17e+01    1.86e+01   2.67e+01  9.90e-01   3.00e+04     1     3.47e-03    1.04e-02
       2  4.539819e+02    3.05e-01    4.54e+00   1.73e+01  9.57e-01   9.00e+04     1     3.31e-03    1.37e-02
     ...  (iterations 3-98 elided)
      99  4.218266e+02    1.72e-01    6.81e+02   1.25e+02  6.05e-01   1.94e+05     1     2.90e-03    3.46e-01
     100  4.216566e+02    1.70e-01    6.75e+02   1.27e+02  6.09e-01   1.96e+05     1     4.07e-03    3.51e-01

    Bundle adjustment report

    Residuals : 3588
    Parameters : 2698
    Iterations : 101
    Time : 0.350847 [s]
    Initial cost : 0.360397 [px]
    Final cost : 0.34281 [px]
    Termination : No convergence

    => Merged observations: 0
    => Completed observations: 0
    => Filtered observations: 897
    => Changed observations: 0.500000

    ============================================================================== Global bundle adjustment

    => Merged observations: 0
    => Completed observations: 0
    => Filtered observations: 0
    => Changed observations: -nan

    ============================================================================== Global bundle adjustment

    => Merged observations: 0
    => Completed observations: 0
    => Filtered observations: 0
    => Changed observations: -nan

    ============================================================================== Global bundle adjustment

    => Merged observations: 0
    => Completed observations: 0
    => Filtered observations: 0
    => Changed observations: -nan

    ============================================================================== Global bundle adjustment

    => Merged observations: 0
    => Completed observations: 0
    => Filtered observations: 0
    => Changed observations: -nan
    => Filtered images: 0

    Elapsed time: 0.022 [minutes]

    opened by mwsunshine 5
  • Unable to use light field datasets other than the testscene

    Hi, I have finished the installation and rendered the testscene successfully. However, when I tried to use pictures from other datasets, it failed. The dataset I used is the MIT Synthetic Light Field Archive. I checked the log and found that the first error occurred here:

    Need to run COLMAP
    Features extracted
    Features matched
    Sparse map created
    Finished running COLMAP, see data/carscene/output_5x5m/colmap_output.txt for logs
    Post-colmap
    ('Cameras', 5)
    ('Images #', 2)
    Traceback (most recent call last):
      File "imgs2poses.py", line 11, in <module>
        gen_poses(args.scenedir)
      File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 273, in gen_poses
        save_poses(basedir, poses, pts3d, perm)
      File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 63, in save_poses
        cams[ind-1] = 1
    IndexError: list assignment index out of range
    

    The scene I use contains 25 pictures, but only 2 pictures (the initial pair) were registered successfully after running COLMAP. I think this is the main reason for the failure, and I was wondering why it happens. Also, I checked the COLMAP output; one difference is that my pictures do not contain GPS information. I attach the colmap_output here: car_colmap_output.txt
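One way to surface this failure earlier: a hypothetical pre-check (names and threshold are my own, not part of LLFF) that verifies COLMAP registered enough of the input images before poses are saved:

```python
def check_registration(n_registered, n_input, min_fraction=0.8):
    """Fail fast if COLMAP registered too few of the input images.

    A registration rate below min_fraction usually means the sparse
    reconstruction is unusable for pose estimation.
    """
    if n_registered < min_fraction * n_input:
        raise RuntimeError(
            f"Only {n_registered}/{n_input} images registered; "
            "poses would be incomplete. Check image overlap and texture.")

check_registration(25, 25)  # passes: all images registered
```

With the scene described above (2 of 25 registered), such a check would raise a clear error instead of the later IndexError.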

    opened by CriusT 4
  • Can I use this method for lytro style light field?

    As stated, since COLMAP fails to register Lytro-style data, I am trying to manually generate poses and mpi_bds. However, I think I might be doing something wrong here.

    I figured out that each pose is a 3x5 matrix which generates the homography. If the horizontal and vertical baselines of each view are x and y, the image is of size h x w, and the focal length is f, is this the right matrix for poses?

    1 0 0 (-y) h
    0 1 0 (-x) w
    0 0 1 0    f
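For reference, a minimal NumPy sketch of assembling a 3x5 pose matrix of the kind discussed above (the `[R | t | (h, w, f)]` layout and the `make_pose` helper are assumptions based on this thread, not confirmed LLFF API):

```python
import numpy as np

def make_pose(rotation, translation, h, w, f):
    """Assemble a 3x5 pose matrix: [R | t | (h, w, f)].

    rotation: 3x3 rotation matrix; translation: length-3 vector.
    The last column stores image height, width, and focal length in
    pixels. This layout is assumed from the matrix sketched above.
    """
    pose = np.zeros((3, 5))
    pose[:, :3] = rotation
    pose[:, 3] = translation
    pose[:, 4] = [h, w, f]
    return pose

# Identity rotation, a small (-y, -x) baseline, image h x w, focal f
pose = make_pose(np.eye(3), [-0.1, -0.2, 0.0], 480, 640, 500.0)
print(pose.shape)  # (3, 5)
```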
    
    opened by DarwinSenior 4
  • extensive GPU/memory usage with high resolution output

    Hi, many thanks for your impressive work!

    I have some questions that I need your help with when trying to produce high-resolution outputs.

    1. I notice that the network has a base resolution of 640x480x3 (and 32 default depth planes). When the resolution is higher than this, the whole image is cut into patches. I tried a resolution of 1280x960 and it worked fine. However, if I increase the resolution further, e.g. to 4032x3024, memory usage increases dramatically (out of memory in my case).

    Have you ever tried this case and met the same problem? Is it possible to generate an output with high resolution?

    net

    The image shown above is the network structure given in your paper. For the tensor named nnup7, the output channel count is 64. Using the maximum resolution of 640x480 and the 32 default planes, the full size of nnup7 is 1x32x480x640x64 (batch, planes, height, width, channels). When I test this network structure, I find that the GPU memory needed is very large (greater than 10 GB).

    My question is: am I right about the size of nnup7? If so, how much GPU memory did you use?
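As a back-of-the-envelope check on the numbers above (assuming float32 activations; actual usage also depends on gradients and other layers):

```python
# Size of the hypothesized nnup7 activation: (batch, planes, height, width, channels)
batch, planes, height, width, channels = 1, 32, 480, 640, 64
bytes_per_float32 = 4

n_elements = batch * planes * height * width * channels
gib = n_elements * bytes_per_float32 / 2**30
print(f"{n_elements:,} elements = {gib:.2f} GiB")  # ~2.3 GiB for this single tensor
```

So this one activation alone occupies roughly 2.3 GiB, which is consistent with the >10 GB figure reported for the full network.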

    opened by mwsunshine 3
  • Installation on windows

    Can I install this on Windows?

    opened by garimss 3
  • CUDA error when running demo.sh/imgs2poses.py

    Hi authors, thanks for open-sourcing the code of your amazing paper! I've been encountering a few issues when trying to run your demo.

    I have followed your README to install via the provided nvidia-docker. But it seems that when running demo.sh inside the Docker container, CUDA errors occur. More specifically, when running this line, the program complains about "PyramidCU::GenerateFeatureList: an illegal memory access was encountered". I've captured my run logs both with and without cuda-memcheck for debugging purposes. Could you please help me identify what could be going wrong?

    I'm using an RTX 2080 Ti + CUDA 10.1 + NVIDIA driver 418.39 on Ubuntu 18.04. Not sure if my setup being too new could be the problem.

    opened by hangg7 2
  • View poses rendering

    Hi, can you please explain the process of generating and storing the view poses used to render the video? I would like to modify it and make my own camera paths.

    Thank you
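For context, a minimal sketch of generating a simple circular render path of 3x5-style poses (purely illustrative; LLFF's actual spiral-path generation is more involved, and `circular_path` is a hypothetical helper, not part of the codebase):

```python
import numpy as np

def circular_path(n_views, radius, h, w, f):
    """Generate n_views poses on a circle in the x-y plane.

    Each pose is a 3x5 matrix [R | t | (h, w, f)], used here as an
    illustrative stand-in for LLFF's render-path poses. Rotation is
    kept at identity for simplicity.
    """
    poses = []
    for theta in np.linspace(0, 2 * np.pi, n_views, endpoint=False):
        t = np.array([radius * np.cos(theta), radius * np.sin(theta), 0.0])
        pose = np.concatenate(
            [np.eye(3), t[:, None], np.array([h, w, f])[:, None]], axis=1)
        poses.append(pose)
    return np.stack(poses)

print(circular_path(30, 0.1, 480, 640, 500.0).shape)  # (30, 3, 5)
```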

    opened by bolemebrige 2
  • Update pose_utils.py

    Fixes the error when invalid views exist in the scene. Besides generating poses_bounds.npy, this also generates "view_imgs.txt" to mark which views are used.

    opened by starhiking 0
  • Colmap affecting disparity values?

    Hello,

    I noticed that the disparity values (from the disparity map generated by LLFF) are very different when compared with the "theoretical value" (the apparent pixel difference between a pair of stereo images). Is this due to the different scaling of the COLMAP camera parameters?

    Thanks in advance!
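For reference, the standard rectified-stereo relation (not LLFF-specific), which shows why COLMAP's unknown global scale matters only when metric and reconstruction-scaled quantities are mixed:

```python
def disparity(f_px, baseline, depth):
    """Pixel disparity for a rectified stereo pair: d = f * B / Z."""
    return f_px * baseline / depth

# If COLMAP's reconstruction is scaled by s, both baseline and depth
# scale by s, so disparity is unchanged; mixing a COLMAP-scaled depth
# with a metric baseline (or vice versa) is off by that factor s.
d_metric = disparity(500.0, 0.1, 2.0)
s = 3.0
d_scaled = disparity(500.0, 0.1 * s, 2.0 * s)
print(d_metric, d_scaled)  # 25.0 25.0
```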

    opened by LIN251196 0
  • Train Error python train.py requested GPUs: [0]

    Hello, I have followed your example to train NeRF on my own data.

    I'm using macOS on an M1 chip. Does anyone have any idea what this might be related to? I can't figure out what's going on. Can you explain what the problem might be and how to fix it? Thank you in advance for any help.

    I am trying to run the following command and get this error.

    python train.py \
       --dataset_name llff \
       --root_dir /Users/ab/ProjectTest/LLFF \
       --N_importance 64 --img_wh 504 378 \
       --num_epochs 30 --batch_size 1024 \
       --optimizer adam --lr 5e-4 \
       --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
       --exp_name exp
    
    Traceback (most recent call last):
      File "train.py", line 178, in <module>
        profiler=hparams.num_gpus==1)
      File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 438, in __init__
        self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
      File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 712, in parse_gpu_ids
        gpus = sanitize_gpu_ids(gpus)
      File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 678, in sanitize_gpu_ids
        """)
    pytorch_lightning.utilities.exceptions.MisconfigurationException: 
                    You requested GPUs: [0]
                    But your machine only has: [].
    
    opened by asbeg 0
  • request for the complete list of results on all tested scenes

    Dear authors,

    I like the work LLFF very much.

    I want to compare LLFF's results with mine.

    Where can I find the detailed PSNR/SSIM results of LLFF on all the tested scenes?

    opened by derrick-xwp 0
  • Modify Dockerfile

    • Version of Ceres
    • URL for get-pip.py (python2)
    opened by hiroaki-santo 0
  • Mapper.multiple_models in command 'colmap mapper'

    What's the meaning of the arguments Mapper.multiple_models, Mapper.num_threads, Mapper.init_min_tri_angle, and Mapper.extract_colors? I can't find documentation for these arguments in the COLMAP tutorial.

    opened by SteveSZF 0
  • fix index bounds check

    If len(cams) is equal to ind-1, there will be an IndexError rather than the proper error message being printed. This PR fixes that.
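A minimal sketch of the kind of guard this PR describes (simplified; `mark_cameras` is a hypothetical stand-in, and the actual save_poses logic differs):

```python
def mark_cameras(cam_ids, n_images):
    """Mark which images are registered, raising a clear error on bad ids.

    Guards the cams[ind - 1] assignment so an out-of-range id produces
    a descriptive message instead of a bare IndexError.
    """
    cams = [0] * n_images
    for ind in cam_ids:
        if ind - 1 >= len(cams):
            raise ValueError(
                f"Image id {ind} exceeds image count {n_images}; "
                "COLMAP may not have registered all images")
        cams[ind - 1] = 1
    return cams

print(mark_cameras([1, 2], 2))  # [1, 1]
```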

    opened by PWhiddy 0
  • hello

    opened by poly24 0
  • IOS application

    Do you plan to release your application for photo capturing?

    opened by lenpetlyak 0
  • Training code

    Could you please provide the training code for your project? I am planning to compare performance on a new dataset.

    Thank you!

    opened by surajsy 0