Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks

Overview

This is an implementation of the methods from Volodymyr Mnih's dissertation on his Massachusetts roads and buildings dataset, together with my original methods published in the paper cited in the Reference section below.

Requirements

  • Python 3.5 (Anaconda with Python 3.5.1 is recommended)
    • Chainer 1.5.0.2
    • Cython 0.23.4
    • NumPy 1.10.1
    • tqdm
  • OpenCV 3.0.0
  • lmdb 0.87
  • Boost 1.59.0
  • Boost.NumPy (26aaa5b)
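
As a quick check that the pinned packages are importable, the short Python snippet below (an optional sketch, not part of the repository) prints each version for comparison with the list above.

import chainer
import Cython
import cv2
import lmdb
import numpy
import tqdm

# Print each dependency's version so it can be compared with the pinned list above.
for name, module in [('Chainer', chainer), ('Cython', Cython), ('OpenCV', cv2),
                     ('lmdb', lmdb), ('NumPy', numpy), ('tqdm', tqdm)]:
    print('{}: {}'.format(name, module.__version__))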

Build Libraries

OpenCV 3.0.0

$ wget https://github.com/Itseez/opencv/archive/3.0.0.zip
$ unzip 3.0.0.zip && rm -rf 3.0.0.zip
$ cd opencv-3.0.0 && mkdir build && cd build
$ bash $SSAI_HOME/shells/build_opencv.sh
$ make -j32 install

If some libraries are missing, install the following packages before compiling OpenCV 3.0.0:

$ sudo apt-get install -y libopencv-dev libtbb-dev

Boost 1.59.0

$ wget http://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.bz2
$ tar xvf boost_1_59_0.tar.bz2 && rm -rf boost_1_59_0.tar.bz2
$ cd boost_1_59_0
$ ./bootstrap.sh
$ ./b2 -j32 install cxxflags="-I/home/ubuntu/anaconda3/include/python3.5m"

Boost.NumPy

$ git clone https://github.com/ndarray/Boost.NumPy.git
$ cd Boost.NumPy && mkdir build && cd build
$ cmake -DPYTHON_LIBRARY=$HOME/anaconda3/lib/libpython3.5m.so ../
$ make install

Build utils

$ cd $SSAI_HOME/scripts/utils
$ bash build.sh

Create Dataset

$ bash shells/download.sh
$ bash shells/create_dataset.sh

Dataset          Training  Validation  Test
mass_roads       8580352   108416      379456
mass_roads_mini  1060928   30976       77440
mass_buildings   1060928   30976       77440
mass_merged      1060928   30976       77440
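
To verify the created databases, one record can be read back as in the sketch below; this assumes the satellite patches are stored as raw 92x92x3 uint8 buffers, which is what tests/test_dataset.py suggests.

import lmdb
import numpy as np

# Open one of the databases produced by shells/create_dataset.sh (read-only).
env = lmdb.open('data/mass_merged/lmdb/train_sat', readonly=True)
with env.begin() as txn:
    key, value = next(iter(txn.cursor()))
    patch = np.frombuffer(value, dtype=np.uint8).reshape((92, 92, 3))
print(key, patch.shape, patch.dtype)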

Start Training

$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
nohup python scripts/train.py \
--seed 0 \
--gpu 0 \
--model models/MnihCNN_multi.py \
--train_ortho_db data/mass_merged/lmdb/train_sat \
--train_label_db data/mass_merged/lmdb/train_map \
--valid_ortho_db data/mass_merged/lmdb/valid_sat \
--valid_label_db data/mass_merged/lmdb/valid_map \
--dataset_size 1.0 \
> mnih_multi.log 2>&1 < /dev/null &

Prediction

$ python scripts/predict.py \
--model results/MnihCNN_multi_2016-02-03_03-34-58/MnihCNN_multi.py \
--param results/MnihCNN_multi_2016-02-03_03-34-58/epoch-400.model \
--test_sat_dir data/mass_merged/test/sat \
--channels 3 \
--offset 8 \
--gpu 0 &

Evaluation

$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py \
--map_dir data/mass_merged/test/map \
--result_dir results/MnihCNN_multi_2016-02-03_03-34-58/ma_prediction_400 \
--channel 3 \
--offset 8 \
--relax 3 \
--steps 1024
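
The --relax option above sets a slack radius in pixels, in the spirit of Mnih's relaxed precision/recall: a predicted positive counts as correct if a ground-truth positive lies within that radius, and vice versa. The function below is only a rough sketch of that measure, not the repository's evaluate.py.

import cv2
import numpy as np

def relaxed_precision_recall(pred, label, relax=3):
    # pred and label are binary {0, 1} uint8 maps of the same shape.
    kernel = np.ones((2 * relax + 1, 2 * relax + 1), np.uint8)
    label_dilated = cv2.dilate(label, kernel)  # ground truth grown by the slack radius
    pred_dilated = cv2.dilate(pred, kernel)    # prediction grown by the slack radius
    precision = np.logical_and(pred == 1, label_dilated == 1).sum() / max(int(pred.sum()), 1)
    recall = np.logical_and(label == 1, pred_dilated == 1).sum() / max(int(label.sum()), 1)
    return precision, recall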

Results

Conventional methods

Model                          Mass. Buildings  Mass. Roads             Mass. Roads-Mini
MnihCNN                        0.9150           0.8873                  N/A
MnihCNN + CRF                  0.9211           0.8904                  N/A
MnihCNN + Post-processing net  0.9203           0.9006                  N/A
Single-channel                 0.9503062        0.91730195 (epoch 120)  0.89989258
Single-channel with MA         0.953766         0.91903522 (epoch 120)  0.902895

Multi-channel models (epoch = 400, step = 1024)

Model                        Building-channel  Road-channel  Road-channel (fixed)
Multi-channel                0.94346856        0.89379946    0.9033020025
Multi-channel with MA        0.95231262        0.89971473    0.90982972
Multi-channel with CIS       0.94417078        0.89415726    0.9039476538
Multi-channel with CIS + MA  0.95280431        0.90071099    0.91108087

Test on urban areas (epoch = 400, step = 1024)

Model                        Building-channel  Road-channel
Single-channel with MA       0.962133          0.944748
Multi-channel with MA        0.962797          0.947224
Multi-channel with CIS + MA  0.964499          0.950465

x0_sigma for inverting feature maps

159.348674296

After prediction for single MA

$ bash shells/predict.sh
$ python scripts/integrate.py --result_dir results --epoch 200 --size 7,60
$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py --map_dir data/mass_merged/test/map --result_dir results/integrated_200 --channel 3 --offset 8 --relax 3 --steps 256
$ PYTHONPATH="." python scripts/eval_urban.py --result_dir results/integrated_200 --test_map_dir data/mass_merged/test/map --steps 256

Pre-trained models and Predicted results

Reference

If you use this code for your project, please cite this journal paper:

Shunta Saito, Takayoshi Yamashita, Yoshimitsu Aoki, "Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks", Journal of Imaging Science and Technology, Vol. 60, No. 1, pp. 10402-1-10402-9, 2015

Comments
  • cannot reshape array of size 3 into shape (92,92,3)


    Hi there! I'm having trouble running bash shells/create_dataset.sh.

    The error is this (the same three lines repeat for each dataset split being processed):

    patch size: 92 24 16
    n_all_files: 0
    patches: 0

    followed by repeated tracebacks:

    Traceback (most recent call last):
      File "tests/test_dataset.py", line 36, in <module>
        o_val, dtype=np.uint8).reshape((92, 92, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)
    Traceback (most recent call last):
      File "tests/test_transform.py", line 55, in <module>
        (args.ortho_original_side, args.ortho_original_side, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)

    Any ideas? Thank you very much for your time!
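
    A hedged note: n_all_files: 0 means the patch-creation step found no input images, which usually points to the download step not having populated the data directories. A minimal check, assuming the data/<dataset>/<split>/{sat,map} layout used elsewhere in this README:

    import glob

    # Count the satellite and map images in each split; zero matches here would
    # explain the n_all_files: 0 in the output above. The paths are assumptions
    # based on the directory layout shown in this README and the other issues.
    for split in ['train', 'valid', 'test']:
        for kind in ['sat', 'map']:
            files = glob.glob('data/mass_buildings/{}/{}/*.tif*'.format(split, kind))
            print(split, kind, len(files))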

    opened by flor88 8
  • Adam optimisation error


    Hi,

    I am trying to use the Adam optimization algorithm but I am receiving the following error. Can you please help me with how to set the parameters?

    python3.5/site-packages/chainer-1.22.0-py3.5-linux-x86_64.egg/chainer/optimizers/adam.py", line 51, in lr
        return self.alpha * math.sqrt(fix2) / fix1
    ZeroDivisionError: float division by zero

    Please see the parameters I have set below.

    parser.add_argument('--opt', type=str, default='Adam', choices=['MomentumSGD', 'Adam', 'AdaGrad'])
    parser.add_argument('--weight_decay', type=float, default=0.0001)
    parser.add_argument('--alpha', type=float, default=0.001)
    parser.add_argument('--lr', type=float, default=0.0001)
    parser.add_argument('--lr_decay_freq', type=int, default=100)
    parser.add_argument('--lr_decay_ratio', type=float, default=0.1)
    parser.add_argument('--seed', type=int, default=1701)
    args = parser.parse_args()
    

    Thank you
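
    A hedged observation rather than a confirmed fix: Chainer's Adam exposes lr as a property computed as alpha * sqrt(1 - beta2**t) / (1 - beta1**t), so reading optimizer.lr while optimizer.t is still 0 (before the first update) divides by zero. Any lr logging or lr-decay code can be guarded, for example:

    import chainer.links as L
    from chainer import optimizers

    model = L.Linear(3, 3)  # stand-in for the actual network
    optimizer = optimizers.Adam(alpha=0.001)
    optimizer.setup(model)

    # optimizer.t is 0 until the first call to optimizer.update(), and Adam's lr
    # property divides by (1 - beta1 ** t), so guard the read:
    lr = optimizer.alpha if optimizer.t == 0 else optimizer.lr
    print(lr)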

    opened by Tejuwi 3
  • Warning: don't set up your environment using Anaconda.


    This project uses Chainer as its deep learning framework. However, Chainer depends on CuPy, a CUDA array library which (seemingly) is not provided by any Anaconda package. Therefore, setting up the environment with Anaconda always fails to find the cupy module.

    I tried using the system Python's cupy library together with the conda chainer library, with no luck. If anyone has a solution, please follow this issue.

    opened by dragon9001 3
  • import Error


    I am using Python 3.6 on Windows 10. When I run the create_dataset script I get this error: ModuleNotFoundError: No module named 'utils.patches'. The error comes from this line: from utils.patches import divide_to_patches

    Kindly help..
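
    A hedged note rather than a confirmed fix: utils.patches is a compiled extension, so it is only importable after building it with bash build.sh and when the repository root is on the Python path (the README invokes the other scripts with PYTHONPATH="."). A minimal sketch:

    import sys

    # Run from the repository root (or insert an absolute path) so the built
    # utils/ package is importable; it must have been built with bash build.sh.
    sys.path.insert(0, '.')
    from utils.patches import divide_to_patches  # name taken from the error message above
    print(divide_to_patches)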

    opened by mshakaib 2
  • Tesla P100 is slower


    Hi,

    I have tested the algorithm on a Tesla P100 (Ubuntu Server 16.04 LTS x86_64) and one epoch takes 2 hrs. The same algorithm on a Quadro M4000 (Ubuntu Desktop 16.04 LTS x86_64) takes 2 hrs 40 min.

    We expected the training time to come down to about half of the Quadro M4000's, but there is not much difference between the Tesla P100 and the Quadro M4000.

    Please give me guidance on how the training time can be reduced. I appreciate your kind help.

    opened by Tejuwi 1
  • cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)


    I just entered:

    CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 nohup python scripts/train.py --seed 0 --gpu 0 --model models/MnihCNN_multi.py --train_ortho_db data/mass_merged/lmdb/train_sat --train_label_db data/mass_merged/lmdb/train_map --valid_ortho_db data/mass_merged/lmdb/valid_sat --valid_label_db data/mass_merged/lmdb/valid_map --dataset_size 1.0 > mnih_multi.log 2>&1 < /dev/null &

    and this is the log:

    You are running using the stub version of curand.
    Traceback (most recent call last):
      File "scripts/train.py", line 300, in <module>
        xp.random.seed(args.seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 318, in seed
        get_random_state().seed(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 350, in get_random_state
        rs = RandomState(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 45, in __init__
        self._generator = curand.createGenerator(method)
      File "cupy/cuda/curand.pyx", line 92, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1443)
      File "cupy/cuda/curand.pyx", line 96, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1381)
      File "cupy/cuda/curand.pyx", line 85, in cupy.cuda.curand.check_status (cupy/cuda/curand.cpp:1216)
      File "cupy/cuda/curand.pyx", line 79, in cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)
    KeyError: 51

    I have no idea; please help.

    opened by FogXcG 0
  • Updated environment versions and instructions


    Hi, I'm trying to train on my own dataset but have failed to deploy the environment after trying many different CUDA, cuDNN, CuPy, and Python versions on Ubuntu 16.04 with an NVIDIA GTX-950M.

    Are there any updated environment versions for CUDA, cuDNN, CuPy, Python, Chainer, and NumPy?

    Cheers,

    opened by Vol-i 3
  • IndexError: index 76 is out of bounds for axis 1 with size 3


    Hello,

    I am currently trying to automate parts of this project and I am running into difficulties during the training phase in CPU mode, which throws an IndexError and appears to hang the entire training. I am using a very small dataset from the mass_buildings set, i.e. 8 training images and 2 validation images. The purpose is only to test, not to obtain accurate results at the moment. Below is the state of the installation and the steps I am using:

    System:

    uname -a
    Linux user-VirtualBox 4.10.0-28-generic #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    

    Python (w/o Anaconda):

    $ python -V
    Python 3.5.2
    

    Python modules:

    user@user-VirtualBox:~/Source/ssai-cnn$ pip3 freeze
    ...
    chainer==1.5.0.2
    ...
    Cython==0.23.4
    ...
    h5py==2.7.1
    ...
    lmdb==0.87
    ...
    matplotlib==2.1.1
    ...
    numpy==1.10.1
    onboard==1.2.0
    opencv-python==3.1.0.3
    ...
    six==1.10.0
    tqdm==4.19.5
    ...
    

    Additionally, Boost 1.59.0 and OpenCV 3.0.0 have been built and installed from source, and both installs appear successful. The utils library also builds successfully.

    I have downloaded only a small subset of the mass_buildings dataset:

    # ls -R ./data/mass_buildings/train/
    ./data/mass_buildings/train/:
    map  sat
    
    ./data/mass_buildings/train/map:
    22678915_15.tif  22678930_15.tif  22678945_15.tif  22678960_15.tif
    
    ./data/mass_buildings/train/sat:
    22678915_15.tiff  22678930_15.tiff  22678945_15.tiff  22678960_15.tiff
    

    Below is the output obtained by running the shells/create_datasets.sh script, modified only to build the mass_buildings data:

    patch size: 92 24 16
    n_all_files: 1
    divide:0.6727173328399658
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 1
    divide:0.6314394474029541
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 4
    divide:0.6260504722595215
    0 / 4 n_patches: 7744
    divide:0.667414665222168
    1 / 4 n_patches: 15488
    divide:0.628319263458252
    2 / 4 n_patches: 23232
    divide:0.6634025573730469
    3 / 4 n_patches: 30976
    patches:	 30976
    0.03437542915344238 sec (128, 3, 64, 64) (128, 16, 16)
    

    Then the training script is initiated using the following command:

    user@user-VirtualBox:~/Source/ssai-cnn$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
    > nohup python ./scripts/train.py \
    > --seed 0 \
    > --gpu -1 \
    > --model ./models/MnihCNN_multi.py \
    > --train_ortho_db data/mass_buildings/lmdb/train_sat \
    > --train_label_db data/mass_buildings/lmdb/train_map \
    > --valid_ortho_db data/mass_buildings/lmdb/valid_sat \
    > --valid_label_db data/mass_buildings/lmdb/valid_map \
    > --dataset_size 1.0 \
    > --epoch 1
    

    As you can see above, I'm using only 8 images and a single epoch. I left the entire process running overnight and it never completed, which is why I believe the process simply hangs. Using nohup also does not complete. When forcefully stopped using Ctrl-C, I obtain the following message:

    # cat nohup.out 
    Traceback (most recent call last):
      File "./scripts/train.py", line 313, in <module>
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "./scripts/train.py", line 265, in one_epoch
        optimizer.update(model, x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "./models/MnihCNN_multi.py", line 31, in __call__
        self.loss = F.softmax_cross_entropy(h, t, normalize=False)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 152, in softmax_cross_entropy
        return SoftmaxCrossEntropy(use_cudnn, normalize)(x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 183, in forward
        return self.forward_cpu(inputs)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 39, in forward_cpu
        p = yd[six.moves.range(t.size), numpy.maximum(t.flat, 0)]
    IndexError: index 76 is out of bounds for axis 1 with size 3
    

    This is the only component that fails at the moment. I've tested the prediction and evaluation phases using the pre-trained data and both seem to complete successfully. Any assistance on how I could use the training script with custom datasets would be appreciated.

    Thank you
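
    A hedged diagnostic sketch (not part of the repository): the IndexError says the labels passed to softmax_cross_entropy contain class indices as large as 76 while the network predicts only 3 classes, so it is worth checking which values actually ended up in the label database. This assumes the label patches are stored as raw uint8 buffers, as the test scripts suggest.

    import lmdb
    import numpy as np

    env = lmdb.open('data/mass_buildings/lmdb/train_map', readonly=True)
    values = set()
    with env.begin() as txn:
        for _, raw in txn.cursor():
            values.update(np.unique(np.frombuffer(raw, dtype=np.uint8)).tolist())
    # Any value >= 3 here would reproduce the IndexError seen during softmax_cross_entropy.
    print('label values found:', sorted(values))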

    opened by InfectedPacket 1
  • Unable to use cudnn (GPU)


    I am using the following code for training: https://github.com/mitmul/ssai-cnn

    My environment specifications are given below:
    Xeon CPU, 128 GB RAM
    NVIDIA TITAN Xp graphics card, driver version 384.90
    CUDA 9.0, cuDNN 7.0
    Ubuntu 16.04, Anaconda3 environment on Python 3.5.1

    chainer 1.5.0.2
    cupy 2.0.0
    curl 7.55.1
    boost 1.65.1
    Cython 0.27.3
    h5py 2.7.1
    hdf5 1.10.1
    lmdb 0.87
    matplotlib 2.1.0
    numpy 1.13.3
    opencv 3.3.0
    pillow 4.3.0
    pycuda 2017.1.1
    python 3.5.1
    six 1.11.0
    tqdm 4.19.4

    I am running the training process on a dataset of 430 images (resolution 1500x1500 pixels). cuDNN is configured and cuda.cudnn_enabled returns True. Training does not run fine in the CPU environment. When using the GPU environment with the flag CHAINER_CUDNN=0 (i.e. cuDNN disabled), it gives a warning message about cuDNN and GPU usage shows only about 25% at most. I feel the GPU is not being fully utilized during training, as one epoch takes 2-3 hrs on average; at that rate, 400 epochs would take 800-1200 hrs (33-50 days) to complete.

    Warning message showing cuDNN not enabled for training:

    /home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/cuda.py:85: UserWarning: cuDNN is not enabled.
    Please reinstall chainer after you install cudnn (see https://github.com/pfnet/chainer#installation).
    'cuDNN is not enabled.\n'

    When I tried to run the training process using cuDNN (i.e. CHAINER_CUDNN=1), it gives an error:

    2017-12-08 11:36:34 INFO start training...
    2017-12-08 11:36:34 INFO learning rate:0.0005
    2017-12-08 11:36:34 INFO random skip:2
    Traceback (most recent call last):
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 373, in <module>
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 285, in one_epoch
        optimizer.update(model, x, t)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "/common/workspace/ssai-cnn/src/models/MnihCNN_multi.py", line 22, in __call__
        h = F.relu(self.conv1(x))
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/links/connection/convolution_2d.py", line 74, in __call__
        return convolution_2d.convolution_2d(x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 267, in convolution_2d
        return func(x, W, b)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 181, in forward
        return self.forward_gpu(inputs)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 80, in forward_gpu
        (self.ph, self.pw), (self.sy, self.sx))
    TypeError: create_convolution_descriptor() missing 1 required positional argument: 'dtype'

    I just want to confirm whether your training ran with cuDNN enabled; if so, which Chainer, CuPy, and other supporting package versions were used? What challenges would the existing code face if I upgrade to Chainer 3.0.0 or 3.1.0, CuPy 2.1.0, and other packages? Which Chainer/CuPy version combination would be preferable?

    Thanks

    opened by sundeeptewari 4
  • Is it possible to build this for windows?


    Hi, I use Windows 10, Anaconda3, and Python 3.6. I was wondering if I would be able to build the utils library if I change the paths in build.sh, e.g. $PYTHON_DIR/lib/libpython3.5m.so, to the Windows Anaconda equivalent (if there is one), and then run bash using Cygwin.

    Or should I give up and use a Linux machine? Thanks, Véro

    opened by VeroL 2
  • fatal error: pyconfig.h: No such file or directory  # include <pyconfig.h>


    -- Found PythonLibs: /home/yanghuan/.pyenv/versions/anaconda3-2.4.0/lib/libpython3.5m.so
    -- Configuring done
    -- Generating done
    CMake Warning:
      Manually-specified variables were not used by the project:

        PYTHON_INCLUDE_DIR2

    -- Build files have been written to: /home/yanghuan/下载/ssai-cnn-master/utils
    Scanning dependencies of target patches
    [ 16%] Building CXX object CMakeFiles/patches.dir/src/devide_to_patches.cpp.o
    In file included from /usr/local/include/boost/python/detail/prefix.hpp:13:0,
                     from /usr/local/include/boost/python/args.hpp:8,
                     from /usr/local/include/boost/python.hpp:11,
                     from /home/yanghuan/下载/ssai-cnn-master/utils/src/devide_to_patches.cpp:4:
    /usr/local/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
     # include <pyconfig.h>
                           ^
    compilation terminated.
    CMakeFiles/patches.dir/build.make:62: recipe for target 'CMakeFiles/patches.dir/src/devide_to_patches.cpp.o' failed
    make[2]: *** [CMakeFiles/patches.dir/src/devide_to_patches.cpp.o] Error 1
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/patches.dir/all' failed
    make[1]: *** [CMakeFiles/patches.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    opened by yizuifangxiuyh 3
Releases (v1.0.0)
  • v1.0.0 (Oct 22, 2018)

Owner
Shunta Saito
Ph.D. in Engineering, Researcher at Preferred Networks, Inc.