Train a deep learning net with OpenStreetMap features and satellite imagery.

Overview

DeepOSM

Classify roads and features in satellite imagery by training neural networks with OpenStreetMap (OSM) data.

DeepOSM can:

  • Download a chunk of satellite imagery
  • Download OSM data that shows roads/features for that area
  • Generate training and evaluation data
  • Display predictions of mis-registered roads in OSM data, or display raw ON/OFF predictions

Running the code is as easy as installing Docker, running make dev, and running a script.

Contributions are welcome. Open an issue if you want to discuss something to do, or email me.

Default Data/Accuracy

By default, DeepOSM will analyze about 200 sq. km of area in Delaware. DeepOSM will

  • predict whether the center 9px of a 64px tile contains a road.
  • use the infrared (IR) band and RGB bands.
  • be 75-80% accurate overall, after training for only a minute or so.
  • use a single fully-connected ReLU layer in TensorFlow (sketched below).
  • render, as JPEGs, "false positive" predictions in the OSM data - i.e. where OSM lists a road, but DeepOSM thinks there isn't one.
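
As a reference point, here is a minimal sketch of that kind of network in TFLearn, which DeepOSM uses on top of TensorFlow. The layer sizes and training call are illustrative, not the repo's exact hyperparameters:

import tflearn

# 64x64 tiles, with 4 bands of data each (RGB + IR)
net = tflearn.input_data(shape=[None, 64, 64, 4])
# the single fully-connected ReLU layer (unit count is illustrative)
net = tflearn.fully_connected(net, 64, activation='relu')
# two outputs: center of tile contains a road / does not
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)
model = tflearn.DNN(net)
# model.fit(training_images, training_labels, n_epoch=5, validation_set=0.1)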

NAIP with Ways and Predictions

Background on Data - NAIPs and OSM PBF

For training data, DeepOSM cuts tiles out of NAIP images, which provide 1-meter-per-pixel resolution, with RGB+infrared data bands.

For training labels, DeepOSM uses PBF extracts of OSM data, which contain features/ways in binary format that can be munged with Python.
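
As a sketch of that munging, ways tagged as highways can be pulled out of a PBF like this. This assumes the osmium package and is an illustration only; DeepOSM's own extraction code (around src/training_data.py) may differ:

import osmium

class RoadHandler(osmium.SimpleHandler):
    """Collect node locations for every OSM way tagged as a highway."""

    def __init__(self):
        super(RoadHandler, self).__init__()
        self.road_ways = []

    def way(self, w):
        if 'highway' in w.tags:
            self.road_ways.append([(n.lon, n.lat) for n in w.nodes
                                   if n.location.valid()])

handler = RoadHandler()
# locations=True caches node coordinates so way members carry lat/lons
handler.apply_file('delaware-latest.osm.pbf', locations=True)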

The NAIPs come from a requester-pays bucket on S3 set up by Mapbox, and the OSM extracts come from Geofabrik.

Install Requirements

DeepOSM has been run successfully on both Mac (10.x) and Linux (14.04 and 16.04). You need at least 4GB of memory.

AWS Credentials

You need AWS credentials to download NAIPs from an S3 requester-pays bucket. This only costs a few cents for a bunch of images, but you need a credit card on file.

export AWS_ACCESS_KEY_ID='FOO'
export AWS_SECRET_ACCESS_KEY='BAR'
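
With those set, a requester-pays download looks roughly like the boto3 sketch below. This is an illustration only (DeepOSM itself drives the download via s3cmd/boto), and the object key is hypothetical; the md/2013/1m/rgbir/38077-style prefix layout is discussed in the issues below:

import boto3

# boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
s3 = boto3.client('s3')
s3.download_file(Bucket='aws-naip',
                 Key='md/2013/1m/rgbir/38077/example_tile.tif',  # hypothetical key
                 Filename='/tmp/example_tile.tif',
                 ExtraArgs={'RequestPayer': 'requester'})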

Install Docker

First, install the Docker binary for your platform.

When running on a Mac, I also needed to set the VirtualBox default memory to 4GB. This is easy:

  • start Docker, per the install instructions
  • stop Docker
  • open VirtualBox, and increase the memory of the VM Docker made (or use the VBoxManage command below)
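
If you prefer the command line, the same change can be made with VBoxManage while the VM is stopped (this assumes the VM Docker created has the usual name, default):

VBoxManage modifyvm default --memory 4096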

(GPU Only) Install nvidia-docker

To use your GPU to accelerate DeepOSM, you will need to download and install the latest NVIDIA drivers for your GPU, then (after first installing Docker itself) install nvidia-docker.

First, find the latest NVIDIA drivers for your GPU on NVIDIA's website. Make sure you check the version number of the driver, as the most recent release isn't always the latest version.

Once you have downloaded the appropriate NVIDIA-*.run file, install it as follows (based on these instructions):

Ensure your system is up-to-date and reboot to ensure the latest installed kernel is loaded:

# ensure your packages are up-to-date
sudo apt-get update
sudo apt-get dist-upgrade
# and reboot
sudo reboot

Once your system has rebooted, install build-essential and the linux-headers package for your current kernel version (or equivalents for your linux distribution):

sudo apt-get install build-essential linux-headers-$(uname -r) 

Then run the NVIDIA driver install you downloaded earlier, and reboot your machine afterwards:

sudo bash <location of ./NVIDIA-Linux-*.run file>
sudo reboot

Finally, verify that the NVIDIA drivers are installed correctly, and your GPU can be located using nvidia-smi:

nvidia-smi
Thu Mar  9 03:40:33 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   54C    P0    45W / 125W |      0MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Now that the NVIDIA drivers are installed, nvidia-docker can be downloaded and installed as follows (based on these instructions):

wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

And you can confirm the installation by running nvidia-smi inside a Docker container:

nvidia-docker run --rm nvidia/cuda nvidia-smi
Using default tag: latest
latest: Pulling from nvidia/cuda
d54efb8db41d: Pull complete 
f8b845f45a87: Pull complete 
e8db7bf7c39f: Pull complete 
9654c40e9079: Pull complete 
6d9ef359eaaa: Pull complete 
cdfa70f89c10: Pull complete 
3208f69d3a8f: Pull complete 
eac0f0483475: Pull complete 
4580f9c5bac3: Pull complete 
6ee6617c19de: Pull complete 
Digest: sha256:2b7443eb37da8c403756fb7d183e0611f97f648ed8c3e346fdf9484433ca32b8
Status: Downloaded newer image for nvidia/cuda:latest
Thu Mar  9 03:44:23 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   54C    P8    18W / 125W |      0MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Once you have confirmed nvidia-smi works inside of nvidia-docker, you should be able to run DeepOSM using your GPU.

Run Scripts

Start Docker, then run:

make dev-gpu

Or if you don't have a capable GPU, run:

make dev

Download NAIP, PBF, and Analyze

Inside Docker, run the following Python scripts. Together, they download all source data, tile it into training/test data and labels, train the neural net, and generate image and text output.

The default data is six NAIPs, which get tiled into 64x64 pixel tiles with four bands of data each (RGB + IR). The training labels derive from PBF files that overlap the NAIPs.

python bin/create_training_data.py
python bin/train_neural_net.py
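
Conceptually, the tiling and labeling step looks something like the numpy sketch below. The names are hypothetical; the real logic lives in src/training_data.py:

import numpy as np

def tile_naip(naip_pixels, road_mask, tile_size=64, label_size=9):
    """Cut a (height, width, 4) RGB+IR array into tiles, labeling each tile
    ON/OFF by whether its center pixels overlap a rasterized OSM road."""
    tiles, labels = [], []
    height, width, _ = naip_pixels.shape
    margin = (tile_size - label_size) // 2
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            tiles.append(naip_pixels[y:y + tile_size, x:x + tile_size, :])
            center = road_mask[y + margin:y + margin + label_size,
                               x + margin:x + margin + label_size]
            labels.append(1 if center.any() else 0)
    return np.array(tiles), np.array(labels)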

For output, DeepOSM produces some console logs, followed by JPEGs of the ways, labels, and predictions overlaid on the source TIFF.
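
Schematically, that rendering step resembles the Pillow sketch below. The function and names are illustrative; the real code lives in src/training_visualization.py:

from PIL import Image, ImageDraw

def render_false_positives(naip_path, false_positive_corners, out_path, tile_size=64):
    """Outline tiles where OSM lists a road but the net predicted none."""
    image = Image.open(naip_path).convert('RGB')
    draw = ImageDraw.Draw(image)
    for x, y in false_positive_corners:  # tile top-left corners, in pixels
        draw.rectangle([x, y, x + tile_size, y + tile_size], outline=(255, 0, 0))
    image.save(out_path, 'JPEG')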

Testing

There is a very limited test suite at the moment, which can be run (from the host system) with:

make test

Jupyter Notebook

Alternately, development/research can be done via Jupyter notebooks:

make notebook

To access the notebook from a browser on your host machine, find the IP address VirtualBox assigned to your default Docker machine by running:

docker-machine ls

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.3

The notebook server is accessible via port 8888, so in this case you'd go to: http://192.168.99.100:8888

Readings

Also see a work journal here.

Papers - Relevant Maybe

Papers - Not All that Relevant

Papers to Review

Recent Recommendations

Citing Mnih and Hinton

I am reviewing these papers from Google Scholar that both cite the key papers and seem relevant to the topic.

Original Idea

This was the general idea at the start. Working with TMS tiles sort of worked (see the first 50 or so commits), so DeepOSM was switched to better data:

Deep OSM Project

Comments
  • Rebase docker on tensorflow (can now use nvidia-docker for GPU version)

    Sorry for dropping a biggish pull request on you unannounced, but I haven't been able to get the GPU code working, and I noticed that a lot of the GPU (and CPU) Dockerfile seemed to be based on the tensorflow docker tools at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker, so I thought it might be worth changing the base FROM image from gdal to tensorflow.

    This has made the Dockerfiles much simpler (and the gpu one is now automatically generated in the make file as it is only four characters different from the cpu version).

    I have also added a very simple set of python unittests that just test module importing at the moment, and added them to the Makefile and the Travis CI configuration. All tests currently pass!

    I can also confirm that the CPU training works everywhere I've tried it, and the GPU training works on AWS EC2 GPU instances.

    I would like to update tensorflow and tflearn, but I've left it at 0.8 and the arbitrary git checkout for now. Updating those can be a problem for a different pull request.

    Thanks!

    opened by dbdean 11
  • Getting Error while training the neural net !!!

    1. Installed the Docker
    2. make dev command executed successfully.
    3. After that, when executing python bin/create_training_data.py, I get errors like:

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred. Please try reproducing the error using the latest s3cmd code from the git master branch found at: https://github.com/s3tools/s3cmd and have a look at the known issues list: https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions If the error persists, please report the following lines (removing any private info as necessary) to: [email protected]

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    Invoked as: /usr/local/bin/s3cmd ls --recursive --skip-existing s3://aws-naip/in/2014/1m/rgbir/ --requester-pays
    Problem: gaierror: [Errno -2] Name or service not known
    S3cmd: 1.6.0
    python: 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]
    environment LANG=None

    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2805, in <module>
      File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2713, in main
      File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 120, in cmd_ls
      File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 153, in subcmd_bucket_list
      File "build/bdist.linux-x86_64/egg/S3/S3.py", line 293, in bucket_list
        for dirs, objects in self.bucket_list_streaming(bucket, prefix, recursive, uri_params):
      File "build/bdist.linux-x86_64/egg/S3/S3.py", line 320, in bucket_list_streaming
        response = self.bucket_list_noparse(bucket, prefix, recursive, uri_params)
      File "build/bdist.linux-x86_64/egg/S3/S3.py", line 339, in bucket_list_noparse
        response = self.send_request(request)
      File "build/bdist.linux-x86_64/egg/S3/S3.py", line 1061, in send_request
        conn = ConnMan.get(self.get_hostname(resource['bucket']))
      File "build/bdist.linux-x86_64/egg/S3/ConnMan.py", line 179, in get
        conn.c.connect()
      File "/usr/lib/python2.7/httplib.py", line 1216, in connect
        self.timeout, self.source_address)
      File "/usr/lib/python2.7/socket.py", line 553, in create_connection
        for res in getaddrinfo(host, port, 0, SOCK_STREAM):
    gaierror: [Errno -2] Name or service not known

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred. Please try reproducing the error using the latest s3cmd code from the git master branch found at: https://github.com/s3tools/s3cmd and have a look at the known issues list: https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions If the error persists, please report the above lines (removing any private info as necessary) to: [email protected]
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    DOWNLOADING 1 PBFs...
    downloads took 3.7s
    EXTRACTED WAYS with locations from pbf file /DeepOSM/data/openstreetmap/delaware-latest.osm.pbf, took 7.1s


    Please help. Using macOS Sierra.

    opened by sahil210695 7
  • split data creation and analysis into separate Docker apps

    Currently, Dockerfile.devel-gpu inherits from GDAL Dockerfile, and then adds in both stuff needed for 1) data creation, and stuff needed for 2) analysis... including a nested stack of Tensorflow Dockerfiles that got copied in.

    Two Dockers - Data Creation and Data Analysis

    It would be cleaner if one Dockerfile created training data and saved it to disk, and one Dockerfile used training data from disk to analyze. The training data docker file could be mostly like the existing non-GPU Dockerfile, and the analysis Docker file would inherit from stock Tensorflow Dockerfile and be short, simple, and easy to maintain/deploy to AWS.

    In production, I guess one Dockerfile saves data to S3, and the other mounts that data bucket for analysis. In development, it creates a directory on disk, no S3 involved.

    API

    The data creation Docker app could also serve an API, to give people training data for their own models, or to support an MNIST like open research competition for maps.

    Infrastructure 
    opened by andrewljohnson 7
  • INSTALL ISSUES

    Hi~ I have installed Docker, did cd /DeepOSM-master and make dev, then ran python bin/create_training_data.py, and I get an error:

      File "bin/create_training_data.py", line 6, in <module>
        from src.training_data import download_and_serialize
    ImportError: No module named src.training_data

    Can you give me a detailed tutorial?

    opened by Hjy20255 5
  • gdal version mismatch?

    Looks like the gdal swig binary is out of sync with newer python bindings? I'm running this inside the non-gpu docker image. I gather the GNM bits are relatively new, so it makes sense that they'd get hit.

    root@f059367028e3:/DeepOSM# python ./bin/create_training_data.py 
    Traceback (most recent call last):
      File "./bin/create_training_data.py", line 6, in <module>
        from src.training_data import download_and_serialize
      File "/DeepOSM/src/training_data.py", line 10, in <module>
        from osgeo import gdal
      File "/usr/local/lib/python2.7/dist-packages/osgeo/gdal.py", line 86, in <module>
        from gdalconst import *
      File "/usr/local/lib/python2.7/dist-packages/osgeo/gdalconst.py", line 148, in <module>
        OF_GNM = _gdalconst.OF_GNM
    AttributeError: 'module' object has no attribute 'OF_GNM'
    

    Also, I've tried going straight to https://hub.docker.com/r/homme/gdal/ and I can't reproduce there.

    I'm wondering if one of the apt-get installs brought along an out-of-date gdal binary?

    Here are a few more files:

    root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha /usr/local/lib/libgdal*
    -rw-r--r-- 1 root root 288M Oct 28 11:24 /usr/local/lib/libgdal.a
    -rwxr-xr-x 1 root root 1.6K Oct 28 11:24 /usr/local/lib/libgdal.la
    lrwxrwxrwx 1 root root   17 Oct 28 11:24 /usr/local/lib/libgdal.so -> libgdal.so.20.1.0
    lrwxrwxrwx 1 root root   17 Oct 28 11:24 /usr/local/lib/libgdal.so.20 -> libgdal.so.20.1.0
    -rwxr-xr-x 1 root root 116M Oct 28 11:24 /usr/local/lib/libgdal.so.20.1.0
    root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha *gdal*
    -rw-r--r-- 1 root staff 128 Oct 28 10:15 gdal.py
    -rw-r--r-- 1 root staff 244 Oct 28 11:25 gdal.pyc
    -rw-r--r-- 1 root staff 143 Oct 28 10:15 gdalconst.py
    -rw-r--r-- 1 root staff 274 Oct 28 11:25 gdalconst.pyc
    -rw-r--r-- 1 root staff 147 Oct 28 10:15 gdalnumeric.py
    -rw-r--r-- 1 root staff 279 Oct 28 11:25 gdalnumeric.pyc
    
    opened by jmontrose 4
  • New Contributor

    Hey Guys, I am a B.Tech. CS graduate and I have been working on image processing for 1.5 years. I would like to contribute to this project. Can anyone assign me some task, no matter how simple or tough?

    opened by batCoder95 4
  • explain how NAIPs are indexed on the S3 bucket

    I don't understand how I can make an index of the NAIP bucket on S3. The parameters are as follows: ['md', '2013', '1m', 'rgbir', '38077']

    I understand all the parameters but the last one... I assume that is some sort of grid or ordinal ID.

    If I knew what the final parameter was, I could write a script to get training data from any state/year. Now it's limited to a certain place in Maryland, around Washington DC.

    help wanted 
    opened by andrewljohnson 4
  • Cannot download NAIP images on s3 using boto

    I've configured s3cfg, and I can download NAIP images via s3cmd fine, but when running bin/create_training_data.py, it ends in a 403 error when downloading NAIP images.

    opened by andrewjavao 3
  • Fail to run train_neural_net.py

    “For output, DeepOSM will produce some console logs, and then JPEGs of the ways, labels, and predictions overlaid on the tiff.” I ran train_neural_net.py successfully, but there are no JPEGs of the ways, labels, and predictions overlaid on the tiff.

    opened by Hjy20255 3
  • thoughts on better neural net

    The current neural net uses something really made for handwritten digit classification (MNIST).

    We could do stuff like use Alexnet instead, or implement the neural nets described in Mnih/Hinton too. Literature also describes using a sequence of pre and post processing neural nets, where you can fill in gaps in road networks.

    Expanding on these vague comments, there is a whole body of literature about how to use CNNs, RNNs, global topology, lidar elevation data, and much more to improve the accuracy of satellite imagery labeling. We should be able to get above 90% at the pixel level using just semi-local RGB data, and push past that with multiple neural nets, more data, and other improvements documented in the 2-3 years since Mnih's thesis.

    See the README for a list of readings.

    enhancement 
    opened by andrewljohnson 3
  • Getting the import error

    python bin/run_analysis.py

    Traceback (most recent call last):
      File "bin/run_analysis.py", line 7, in <module>
        from src.run_analysis import analyze, render_results_as_images
      File "/DeepOSM/src/run_analysis.py", line 6, in <module>
        import label_chunks_cnn_cifar
      File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in <module>
        import tflearn
      File "/usr/local/lib/python2.7/dist-packages/tflearn/__init__.py", line 21, in <module>
        from .layers import normalization
      File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/__init__.py", line 10, in <module>
        from .recurrent import lstm, gru, simple_rnn, bidirectional_rnn,
      File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/recurrent.py", line 8, in <module>
        from tensorflow.contrib.rnn.python.ops.core_rnn import static_rnn as _rnn,
    ImportError: No module named core_rnn

    and this

    File "bin/run_analysis.py", line 7, in from src.run_analysis import analyze, render_results_as_images File "/DeepOSM/src/run_analysis.py", line 6, in import label_chunks_cnn_cifar File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in import tflearn File "/usr/local/lib/python2.7/dist-packages/tflearn/init.py", line 4, in from . import config File "/usr/local/lib/python2.7/dist-packages/tflearn/config.py", line 5, in from .variables import variable File "/usr/local/lib/python2.7/dist-packages/tflearn/variables.py", line 7, in from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope ImportError: cannot import name add_arg_scope

    This is while running the GitHub project https://github.com/zilongzhong/DeepOSM. In that one I am able to create training data successfully, but running run_analysis.py gives the errors above.

    opened by sahil210695 2
  • Training stops with 'NoneType' object has no attribute 'name'

    ..any idea? or a hint, where I might get deeper information about it?

    Kind regards,

    Daniel

    Training samples: 199
    Validation samples: 23

    Training Step: 899 | total loss: 0.23639 | Momentum | epoch: 001 | loss: 0.23639 - acc: 0.9115 | val_loss: 0.29569 - val_acc: 0.8261 -- iter: 199/199
    Training Step: 903 | total loss: 0.36168 | Momentum | epoch: 002 | loss: 0.36168 - acc: 0.8505 | val_loss: 0.23084 - val_acc: 0.9565 -- iter: 199/199
    Training Step: 907 | total loss: 0.34085 | Momentum | epoch: 003 | loss: 0.34085 - acc: 0.8723 | val_loss: 0.11795 - val_acc: 0.9565 -- iter: 199/199
    Training Step: 911 | total loss: 0.61323 | Momentum | epoch: 004 | loss: 0.61323 - acc: 0.7900 | val_loss: 0.16038 - val_acc: 0.9565 -- iter: 199/199
    Training Step: 915 | total loss: 0.55020 | Momentum | epoch: 005 | loss: 0.55020 - acc: 0.7925 | val_loss: 0.23164 - val_acc: 0.9565 -- iter: 199/199

    WARNING:tensorflow:Error encountered when serializing data_augmentation. Type is unsupported, or the types of the items don't match field type in CollectionDef. 'NoneType' object has no attribute 'name'
    WARNING:tensorflow:Error encountered when serializing summary_tags. Type is unsupported, or the types of the items don't match field type in CollectionDef. 'dict' object has no attribute 'name'
    WARNING:tensorflow:Error encountered when serializing data_preprocessing. Type is unsupported, or the types of the items don't match field type in CollectionDef. 'NoneType' object has no attribute 'name'

    opened by dakoller 1
  • Is web mercator transform correct?

    In geo_util.py, this line gives me the correct lat lon in web mercator.

    x2, y2 = transform(in_proj, out_proj, ulon, ulat)
    

    But this last line (just before the return) seems to transform it back to lat lon in degrees:

    x2, y2 = out_proj(x2, y2, inverse=True)
    

    which I believe is incorrect. Am I missing something here?
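
    For context, in pyproj 1.x calling a Proj instance with inverse=True converts projected meters back to lon/lat degrees, so the two quoted lines amount to a round trip. A minimal sketch, with illustrative projections and values (geo_util.py's actual projections may differ):

    from pyproj import Proj, transform

    in_proj = Proj(init='epsg:4326')   # lon/lat degrees
    out_proj = Proj(init='epsg:3857')  # web mercator meters
    x2, y2 = transform(in_proj, out_proj, -75.5, 39.0)  # degrees -> meters
    lon, lat = out_proj(x2, y2, inverse=True)           # meters -> degrees again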

    opened by WillieMaddox 1
  • Clock in docker is wrong, so aws doesn't allow download

    This appears on both Mac and Linux (CentOS 7). After I do make dev and the working container launches successfully, the clock in the container is far off from the local clock, and the offset seems random; I can't set the clock inside the container. This can cause download failures when running create_training_data.py in the container. I'm in timezone +8.

    opened by andrewjavao 1
  • [Suggest] Can you develop new function that generates website content?

    Since the deeposm.org site no longer exists, can you provide code that generates local website content? Then users could have a presentation of the results (findings, errors, etc.) other than label files.

    opened by andrewjavao 0
  • Pillow version is low, cause error when saving JPEG files

    I met this error when rendering findings to JPEG, using the render_results_for_analysis method in src/training_visualization.py. Traceback:

    Wrong JPEG library version: library is 90, caller expects 80
    Traceback (most recent call last):
      File "upload_data.py", line 31, in <module>
        main()
      File "upload_data.py", line 27, in main
        render_findings(raster_data_paths, model, training_info, model_info['bands'], True)
      File "/DeepOSM/src/s3_client_deeposm.py", line 30, in render_findings
        training_info['tile_size'])
      File "/DeepOSM/src/training_visualization.py", line 34, in render_results_for_analysis
        tile_size)
      File "/DeepOSM/src/training_visualization.py", line 54, in render_predictions
        predictions=predictions_by_naip)
      File "/DeepOSM/src/training_visualization.py", line 111, in render_results_as_image
        im.save(outfile, "JPEG")

    Upgrading the Pillow python library to version 4.0.0 solves this.

    By the way, since the deeposm.org site no longer exists, can you provide code that generates local website content? Is there a way to view the results other than JPEGs with findings on a map?

    opened by andrewjavao 2
Owner
TrailBehind, Inc.