Learning to Propose Objects

This implements the learning and inference/proposal algorithm described in "Learning to Propose Objects, Krähenbühl and Koltun, CVPR 2015".

Dependencies:

  • c++11 compiler (gcc >= 4.7)
  • cmake
  • boost-python
  • python (2.7 or 3.1+ should both work)
  • numpy
  • libmatio (optional)
  • libpng, libjpeg
  • Eigen 3 (3.2.0 or newer)
  • OpenMP (optional but recommended)

Compilation:

Go to the top level directory

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DDATA_DIR=/path/to/datasets -DUSE_PYTHON=ON
make -j9

Here "-DUSE_PYTHON=ON" specifies that the python wrapper should be built (highly recommended). You can build for python 2.7 by specifying "-DUSE_PYTHON=2"; any other value will build a python 3 wrapper.

The flag "-DDATA_DIR=/path/to/datasets" is optional and can point to a directory containing the VOC2012, VOC2007 or COCO dataset. Specify this path if you want to train or evaluate LPO on those datasets.

"/path/to/datasets" can be any directory containing subdirectories:

  • 'VOC2012/ImageSets'
  • 'VOC2012/SegmentationClass',
  • 'VOC2012/Annotations'
  • 'COCO/train2014'
  • 'COCO/val2014'
  • ...

and files:

  • 'COCO/instances_train2014.json'
  • 'COCO/instances_val2014.json'.

The COCO files can be downloaded from http://mscoco.org/ and the PASCAL VOC dataset from http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2012/index.html .
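As a quick sanity check before training, a short script along the following lines can verify that the directory you pass as DATA_DIR contains the subdirectories and files listed above. This is only an illustration, not part of the repository; the missing_entries helper is hypothetical.

```python
import os

# Subdirectories and files LPO expects under DATA_DIR, per the list above.
EXPECTED = [
    'VOC2012/ImageSets',
    'VOC2012/SegmentationClass',
    'VOC2012/Annotations',
    'COCO/train2014',
    'COCO/val2014',
    'COCO/instances_train2014.json',
    'COCO/instances_val2014.json',
]

def missing_entries(data_dir):
    """Return the expected entries that are missing under data_dir."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(data_dir, p))]

# Example: list anything missing under a (hypothetical) dataset root.
for p in missing_entries('/path/to/datasets'):
    print('missing:', p)
```

Anything this prints will make the corresponding training or evaluation script fail with a "Check if DATA_DIR is set properly" error.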

The code should compile and run fine on both Linux and Mac OS; let me know if you have any difficulty or find a bug. For Windows you're on your own.

Experiments

The code to reproduce most results in the paper is included here. All experiments should be run from the src directory.

To generate the main comparison in table 3 run:

bash eval_all.sh

To analyze a model, as in table 2, run:

python analyze_model.py path/to/model

To do the bounding box evaluation call:

python eval_box.py path/to/output_file path/to/model1 path/to/model2 path/to/model3 path/to/model4

This will create a binary file measuring the number of proposals vs. the best overlap per object. You can then use the results/box.py script to generate the bounding box evaluation and produce the plots. For your convenience, we included the precomputed results of many prior methods on VOC 2012 in results/box/*.dat.

Citation

If you're using this code in a scientific publication, please cite:

@inproceedings{kk-lpo-15,
  author    = {Philipp Kr{\"{a}}henb{\"{u}}hl and
               Vladlen Koltun},
  title     = {Learning to Propose Objects},
  booktitle = {CVPR},
  year      = {2015},
}

License

All my code is published under a BSD license, so feel free to reuse and/or share it. Some dependencies are under different licenses and/or patented; all of those dependencies are located in the external directory.

Comments
  • Trying to figure out the output...

    Hey,

    I've been trying to get a small bounding-box example working, based on your propose_hf5.py code.

    Here is my code:

    import lpo
    import matplotlib.patches
    import matplotlib.pyplot as plt

    imgs = [ lpo.imgproc.imread('cat.jpg') ]

    prop = lpo.proposals.LPO()
    prop.load( 'dats/lpo_VOC_0.02.dat' )

    detector = lpo.contour.MultiScaleStructuredForest()
    detector.load( 'dats/sf.dat' )

    over_segs = lpo.segmentation.generateGeodesicKMeans( detector, imgs, 1000 )

    props = prop.propose( over_segs, 0.01, True )
    props = props[0][0]

    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.imshow(imgs[0])
    for bb in props.toBoxes():
        ax.add_patch(matplotlib.patches.Rectangle((bb[0],bb[1]),bb[2],bb[3], color='red', fill=False))
    plt.show()
    
    

    I end up with: [attached screenshot: catboxes]

    If I play around with some of the parameters, I end up getting an enormous amount of proposal boxes.

    I was hoping somebody could provide some advice to help me get this working.

    opened by ghost 10
  • How to fix 'File '/path/to/datasets/VOC2012/ImageSets/Segmentation/val.txt' not found'?

    When I execute ./sam_eval_all.sh, which is just eval_all.sh changed to use python2.7, it shows:

    sam@sam-desktop:~/code/download/Segmentation/lpo/src$ ./sam_eval_all.sh
    File '/path/to/datasets/VOC2012/ImageSets/Segmentation/val.txt' not found! Check if DATA_DIR is set properly.
    Traceback (most recent call last):
      File "train_lpo.py", line 139, in <module>
        over_segs,segmentations,boxes,names = loadVOCAndOverSeg( 'test', detector='mssf' )
      File "/home/sam/code/download/Segmentation/lpo/src/util.py", line 98, in loadVOCAndOverSeg
        return loadAndOverSegDataset( lambda: ldr(im_set=="train",im_set=="valid",im_set=="test"), "VOC%s_%s"%(year,im_set), detector=detector, N_SPIX=N_SPIX )
      File "/home/sam/code/download/Segmentation/lpo/src/util.py", line 70, in loadAndOverSegDataset
        data = loader()
    ValueError: Failed to load dataset

    (the same error and traceback repeat for each train_lpo.py invocation in the script)

    sam@sam-desktop:~/code/download/Segmentation/lpo/src$ cat ./sam_eval_all.sh
    # This script reproduces table 3 in the paper
    python train_lpo.py -f0 0.2 ../models/lpo_VOC_0.2.dat
    python train_lpo.py -f0 0.1 ../models/lpo_VOC_0.1.dat
    python train_lpo.py -f0 0.05 ../models/lpo_VOC_0.05.dat
    python train_lpo.py -f0 0.03 ../models/lpo_VOC_0.03.dat
    python train_lpo.py -f0 0.02 ../models/lpo_VOC_0.02.dat
    python train_lpo.py -f0 0.01 ../models/lpo_VOC_0.01.dat -iou 0.925 # Increase the IoU a bit to make sure the number of proposals match

    How to solve it? Thank you~

    opened by b2220333 5
  • How to solve "global name 'FileNotFoundError' is not defined"?

    Hello, I ran sed -e 's:python3:python:g' eval_all.sh > sam_eval_all.sh and here is the output when executing sam_eval_all.sh:

    Traceback (most recent call last):
      File "train_lpo.py", line 137, in <module>
        over_segs,segmentations,boxes,names = loadVOCAndOverSeg( 'test', detector='mssf' )
      File "/home/sam/code/download/Segmentation/lpo/src/util.py", line 94, in loadVOCAndOverSeg
        return loadAndOverSegDataset( lambda: ldr(im_set=="train",im_set=="valid",im_set=="test"), "VOC%s_%s"%(year,im_set), detector=detector, N_SPIX=N_SPIX )
      File "/home/sam/code/download/Segmentation/lpo/src/util.py", line 52, in loadAndOverSegDataset
        except FileNotFoundError:
    NameError: global name 'FileNotFoundError' is not defined

    (repeated for each train_lpo.py invocation in the script)

    Thank you~
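For context, FileNotFoundError only exists on Python 3.3+, so under python 2.7 the except FileNotFoundError: clause in util.py itself raises this NameError. A common compatibility shim is to alias the name to IOError on Python 2; the sketch below (with a hypothetical read_image_set helper, not code from this repository) shows the idea:

```python
# FileNotFoundError was introduced in Python 3.3; on Python 2 the name does
# not exist, so merely referencing it in an `except` clause raises NameError.
try:
    FileNotFoundError
except NameError:               # Python 2: fall back to the closest built-in
    FileNotFoundError = IOError

def read_image_set(path):
    """Open an image-set file, reporting a clear error if it is missing."""
    try:
        with open(path) as f:
            return f.read().split()
    except FileNotFoundError:
        raise ValueError("File '%s' not found! Check if DATA_DIR is set properly." % path)
```

With a shim like this at the top of the module, the same code runs under both interpreters.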

    opened by b2220333 5
  • How to fix 'ValueError: Unknown model type 'GlobalCRFModel' problem?

    Hello, thanks for sharing the code! I compiled your code successfully with python 2.7. I also changed python3 to python in the eval_all.sh file. When I run eval_all.sh, it shows:

    Traceback (most recent call last):
      File "train_lpo.py", line 129, in <module>
        prop.load(save_name)
    ValueError: Unknown model type 'GlobalCRFModel'!

    (the same error repeats for each of the six models in the script)

    How to solve it? Thank you very much~

    opened by b2220333 5
  • ImportError: No module named python.lpo

    I followed the install pipeline with no errors. But when I ran 'bash eval_all.sh', the console printed "ImportError: No module named pylab". Then I changed all python3 to python in the eval_all.sh script. That removed the previous error, but the console printed a new one, "ImportError: No module named python.lpo". Now I have no idea and hope you could help me. Thanks.

    opened by huayong 4
  • analyze_model.py raises division by zero (with demonstration models)

    When running analyze_model.py (with python 2.7) over the demonstration model, I get a division by zero.

    python analyze_model.py ../models/lpo_VOC_0.1.dat 
    /home/rodrigob/.local/lib/python2.7/site-packages/numpy/core/_methods.py:55: RuntimeWarning: Mean of empty slice.
      warnings.warn("Mean of empty slice.", RuntimeWarning)
    /home/rodrigob/.local/lib/python2.7/site-packages/numpy/core/_methods.py:67: RuntimeWarning: invalid value encountered in double_scalars
      ret = ret.dtype.type(ret / rcount)
    Traceback (most recent call last):
      File "analyze_model.py", line 115, in <module>
        evaluateDetailed( prop, over_segs, segmentations )
      File "analyze_model.py", line 101, in evaluateDetailed
        print( names[m], '&', np.mean(ps[m]), '&', np.mean(bo[m]>=bbo)*100, '&', np.sqrt(np.mean(ma[m])), '&', t[m]/len(ma[m]) )
    ZeroDivisionError: integer division or modulo by zero
    

    Am I not getting this right from the help and readme?

    opened by rodrigob 3
  • Simple C++ example of process one image

    Hello. I am very excited by your previous approach, Geodesic Object Proposals, where you provide a good example of using the library from C++ code. Can you give some similar code for using this library? I found that they are very similar.

    Thank you.

    opened by drozdvadym 2
  • More feedback to the user during load & computing

    These are some small edits (mainly adding printfs here and there) to make it clearer to the user "what are we waiting for?".

    I added a progress bar (using boost/progress) when loading the VOC data.

    Seems that I accidentally re-formatted lpo.cpp; which code formatter/beautifier do you normally use? (So I can sanitize the pull request.)

    opened by rodrigob 2
  • matlab cmakelists GOP is included

    Hi,

    Does this library require the GOP library? Because the CMakeLists.txt of the matlab folder contains:

        add_library( gop_mex SHARED gop_mex.cpp )
        target_link_libraries( gop_mex util imgproc learning contour segmentation proposals gomp )

    opened by lolongcovas 2
  • More feedback to the user during load & computing

    These are some small edits (mainly adding printfs here and there) to make it clearer to the user "what are we waiting for?".

    I added a progress bar (using boost/progress) when loading the VOC data and when computing the geodesic K-means.

    • Making python scripts more user friendly by adding printfs
    • Making voc.cpp more verbose about files not found
    • Added progress bar while loading images (voc.cpp)
    • Polishing training progress printfs
    • Added progress bar while computing the geodesic K-means
    opened by rodrigob 1
  • eval_box raises exception with demonstration models

    Following the readme steps, running eval_box.py also raises an exception.

    python eval_box.py test.txt ../models/lpo_VOC_0.1.dat
    Traceback (most recent call last):
      File "eval_box.py", line 57, in <module>
        bo,pool_s = evaluateBox( prop, over_segs, boxes, name='(tst)' )
      File "eval_box.py", line 43, in evaluateBox
        print( "LPO %05s & %d & %0.3f & %0.3f & %0.3f & %0.3f & %0.3f \\\\"%(name,np.nanmean(pool_s),np.mean(bo),np.mean(bo>=0.5), np.mean(bo>=0.7), np.mean(bo>=0.9), np.mean(2*np.maximum(bo-0.5,0)) ) )
    UnboundLocalError: local variable 'pool_s' referenced before assignment
    

    Both bo and pool_s are dangerously uninitialized.

    I guess something else is off, but I cannot quite figure out what, since none of the scripts seems to work as expected.

    opened by rodrigob 1
  • error when make: ‘sleep_for’ is not a member of ‘std::this_thread’

    [ 26%] Building CXX object lib/util/CMakeFiles/util.dir/geodesics.cpp.o
    In file included from /media/dat1/liao/lpo/lib/util/threading.cpp:27:0:
    /media/dat1/liao/lpo/lib/util/threading.h: In member function ‘void ThreadedQueue<T>::process(ThreadedQueue<T>::F, const std::vector<T>&)’:
    /media/dat1/liao/lpo/lib/util/threading.h:188:5: error: ‘sleep_for’ is not a member of ‘std::this_thread’
    make[2]: *** [lib/util/CMakeFiles/util.dir/threading.cpp.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    In file included from /media/dat1/liao/lpo/lib/util/algorithm.h:32:0,
                     from /media/dat1/liao/lpo/lib/util/algorithm.cpp:27:
    /media/dat1/liao/lpo/lib/util/threading.h:188:5: error: ‘sleep_for’ is not a member of ‘std::this_thread’
    make[2]: *** [lib/util/CMakeFiles/util.dir/algorithm.cpp.o] Error 1
    make[1]: *** [lib/util/CMakeFiles/util.dir/all] Error 2
    make: *** [all] Error 2

    opened by liao1995 0
  • error building the program

    When I was building the program, I got an error like :

    collect2: error: ld returned 1 exit status
    examples/CMakeFiles/example.dir/build.make:97: recipe for target 'examples/example' failed
    make[2]: *** [examples/example] Error 1
    CMakeFiles/Makefile2:729: recipe for target 'examples/CMakeFiles/example.dir/all' failed
    make[1]: *** [examples/CMakeFiles/example.dir/all] Error 2
    [ 97%] Built target gop
    Makefile:116: recipe for target 'all' failed
    make: *** [all] Error 2

    How to resolve my problem? Thank you very much!

    opened by Z-Yang-328 0
  • example please

    @philkr

    I got the code installed and working. I ran eval_all.sh and it seemed to run fine.

    But I seem to be a bit lost after that. Specifically, I am looking for some simple steps on how to get this working for a custom dataset (say even VOC 2007), step by step.

    To get the box proposals using our own datasets, I think we need to have the equivalent sf.dat and the lpo_VOC.xx.dat files as inputs. But, how to generate these 2 for our own datasets?

    When I ran train_lpo.py -f0 0.05 -t, it only created a VOC_2007_train_mssf_1000.dat in the /tmp folder, and I couldn't load it as I got an "out of memory" error.

    opened by pradeepj247 0
  • error when training lpo for bounding box performance

    When I tried to train LPO on the COCO dataset using the command python train_lpo.py ../models/lpo_COCO_0.02.dat -b -t -f0 0.02, I get the following error:

    Traceback (most recent call last):
      File "train_lpo.py", line 121, in <module>
        boxes = [proposals.Proposals(s,np.eye(np.max(s)+1).astype(bool)).toBoxes() for s in segmentations]
      File "/home/anaconda/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2135, in amax
        out=out, keepdims=keepdims)
      File "/home/anaconda/lib/python2.7/site-packages/numpy/core/_methods.py", line 26, in _amax
        return umr_maximum(a, axis, None, out, keepdims)
    ValueError: operands could not be broadcast together with shapes (10,2) (5,2)

    Without the -b argument, I was able to successfully train LPO on the COCO dataset. I'm using Mac OS X 10.10.3.

    opened by vuptran 0
  • ImportError: dynamic module does not define init function

    I followed the instructions in the readme and compiled with -DUSE_PYTHON=2. When I run bash eval_all.sh I get this error:

    Traceback (most recent call last):
      File "train_lpo.py", line 31, in <module>
        from lpo import *
      File "/home/revathy/lpo-release/src/lpo.py", line 45, in <module>
        from python.lpo import *
    ImportError: dynamic module does not define init function (PyInit_lpo)

    hope you could help me. Thanks.
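The symbol in parentheses explains the mismatch: a Python 2 extension module exports an initlpo entry point, while Python 3 looks for PyInit_lpo, so a Python 3 interpreter cannot import a wrapper built with -DUSE_PYTHON=2. A small sketch of this rule (expected_init_symbol is a hypothetical helper, not part of the repo):

```python
import sys

def expected_init_symbol(module_name, version_info=sys.version_info):
    """Entry point a C extension must export for this interpreter.

    Python 2 loads init<name>; Python 3 loads PyInit_<name>. A module built
    for one interpreter therefore fails to import under the other with
    "dynamic module does not define init function".
    """
    if version_info[0] >= 3:
        return 'PyInit_' + module_name
    return 'init' + module_name
```

So either run the scripts with python 2 (matching the -DUSE_PYTHON=2 build) or rebuild the wrapper for Python 3.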

    opened by mtrth 4
  • Convert from ndarray

    Hi @philkr, and other lpo users,

    Is there any way, convenient or not, to convert an ndarray object to the Image8u type used in this library? Using the lpo.imgproc.imread function obviously reads a file from disk and provides the proper data type. I'm wondering if I can avoid having to read from disk if it's already in memory as an ndarray?

    Thanks

    opened by ghost 2
Owner
Philipp Krähenbühl