Raster Vision is an open source Python framework for building computer vision models on satellite, aerial, and other large imagery sets

Overview


Raster Vision is an open source Python framework for building computer vision models on satellite, aerial, and other large imagery sets (including oblique drone imagery).

  • It allows users (who don't need to be experts in deep learning!) to quickly and repeatably configure experiments that execute a machine learning pipeline including: analyzing training data, creating training chips, training models, creating predictions, evaluating models, and bundling the model files and configuration for easy deployment.
  • There is built-in support for chip classification, object detection, and semantic segmentation with backends using PyTorch.
  • Experiments can be executed on CPUs and GPUs with built-in support for running in the cloud using AWS Batch.
  • The framework is extensible to new data sources, tasks (e.g. instance segmentation), backends (e.g. Detectron2), and cloud providers.

See the documentation for more details.

Setup

There are several ways to set up Raster Vision:

  • To build Docker images from scratch, after cloning this repo, run docker/build, and run the container using docker/run.
  • Docker images are published to quay.io. The tag for the raster-vision image determines what type of image it is:
    • The pytorch-* tags are for running the PyTorch containers.
    • We publish a new tag per merge into master, tagged with the first 7 characters of the commit hash. To use the latest version, pull the tag with the latest suffix, e.g. raster-vision:pytorch-latest. Git tags are also published, with the GitHub tag name as the Docker tag suffix.
  • Raster Vision can be installed directly using pip install rastervision. However, some of its dependencies will have to be installed manually.

For more detailed instructions, see the Setup docs.

Example

The best way to get a feel for what Raster Vision enables is to look at an example of how to configure and run an experiment. Experiments are configured in Python code using declarative Config objects, which makes configurations easy to read, reuse, and maintain.

# tiny_spacenet.py

from rastervision.core.rv_pipeline import *
from rastervision.core.backend import *
from rastervision.core.data import *
from rastervision.pytorch_backend import *
from rastervision.pytorch_learner import *


def get_config(runner) -> SemanticSegmentationConfig:
    root_uri = '/opt/data/output/'
    base_uri = ('https://s3.amazonaws.com/azavea-research-public-data/'
                'raster-vision/examples/spacenet')

    train_image_uri = f'{base_uri}/RGB-PanSharpen_AOI_2_Vegas_img205.tif'
    train_label_uri = f'{base_uri}/buildings_AOI_2_Vegas_img205.geojson'
    val_image_uri = f'{base_uri}/RGB-PanSharpen_AOI_2_Vegas_img25.tif'
    val_label_uri = f'{base_uri}/buildings_AOI_2_Vegas_img25.geojson'

    channel_order = [0, 1, 2]
    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    def make_scene(scene_id: str, image_uri: str,
                   label_uri: str) -> SceneConfig:
        """
        - The GeoJSON does not have a class_id property for each geom,
          so it is inferred as 0 (ie. building) because the default_class_id
          is set to 0.
        - The labels are in the form of GeoJSON which needs to be rasterized
          to use as label for semantic segmentation, so we use a RasterizedSource.
        - The rasterizer set the background (as opposed to foreground) pixels
          to 1 because background_class_id is set to 1.
        """
        raster_source = RasterioSourceConfig(
            uris=[image_uri], channel_order=channel_order)
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri, default_class_id=0, ignore_crs_field=True)
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=1)))
        return SceneConfig(
            id=scene_id,
            raster_source=raster_source,
            label_source=label_source)

    scene_dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[
            make_scene('scene_205', train_image_uri, train_label_uri)
        ],
        validation_scenes=[
            make_scene('scene_25', val_image_uri, val_label_uri)
        ])

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    chip_sz = 300

    backend = PyTorchSemanticSegmentationConfig(
        data=SemanticSegmentationGeoDataConfig(
            scene_dataset=scene_dataset,
            window_opts=GeoDataWindowConfig(
                method=GeoDataWindowMethod.random,
                size=chip_sz,
                size_lims=(chip_sz, chip_sz + 1),
                max_windows=10)),
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=1, batch_sz=2))

    return SemanticSegmentationConfig(
        root_uri=root_uri,
        dataset=scene_dataset,
        backend=backend,
        train_chip_sz=chip_sz,
        predict_chip_sz=chip_sz)

Raster Vision uses a unittest-like method for executing experiments. For instance, if the above were defined in tiny_spacenet.py, then with the proper setup you could run the experiment using:

> rastervision run local tiny_spacenet.py

See the Quickstart for a more complete description of running this example.

Resources

Contact and Support

You can find more information and talk to developers (let us know what you're working on!) at:

Contributing

We are happy to take contributions! It is best to get in touch with the maintainers about larger features or design changes before starting the work, as it will make the process of accepting changes smoother.

Everyone who contributes code to Raster Vision will be asked to sign the Azavea CLA, which is based off of the Apache CLA.

  1. Download a copy of the Raster Vision Individual Contributor License Agreement or the Raster Vision Corporate Contributor License Agreement.

  2. Print out the CLAs and sign them, or use PDF software that allows placement of a signature image.

  3. Send the CLAs to Azavea by one of:

  • Scanning and emailing the document to [email protected]
  • Faxing a copy to +1-215-925-2600.
  • Mailing a hardcopy to: Azavea, 990 Spring Garden Street, 5th Floor, Philadelphia, PA 19107 USA
Comments
  • Potsdam Example

    Overview

    Workflow configuration for doing DeepLab-based segmentation on the Potsdam dataset.

    Checklist

    • [x] Ran scripts/format_code and committed any changes
    • [x] Documentation updated if needed
    • [x] PR has a name that won't get you publicly shamed for vagueness
    • [x] https://github.com/azavea/raster-vision/pull/366
    • [x] https://github.com/azavea/raster-vision/pull/370
    opened by jamesmcclain 19
  • Add GeoDataset to allow reading directly from source imagery during training without chipping + refactor existing datasets

    Description

    Overview

    This PR adds a new type of Dataset, GeoDataset, that allows the Learner to read rasters and labels directly from a Scene during training. GeoDataset is further subclassed into SlidingWindowGeoDataset and RandomWindowGeoDataset, which represent two different reading methods.

    These datasets can be configured using GeoDataConfig, which is subclassed from DataConfig, and accepts a rastervision.core.data.DatasetConfig and either a single GeoDataWindowConfig that applies to all scenes or a Dict[scene.id --> GeoDataWindowConfig].
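
    For illustration, a minimal sketch of the two accepted forms (reusing SemanticSegmentationGeoDataConfig from the examples on this page; the scene ids, sizes, and scene_dataset here are made-up placeholders):

    from rastervision.pytorch_learner import (
        GeoDataWindowConfig, GeoDataWindowMethod,
        SemanticSegmentationGeoDataConfig)

    scene_dataset = ...  # a rastervision.core.data.DatasetConfig, assumed built already

    # Form 1: a single window config applied to every scene.
    data = SemanticSegmentationGeoDataConfig(
        scene_dataset=scene_dataset,
        window_opts=GeoDataWindowConfig(
            method=GeoDataWindowMethod.sliding, size=256, stride=256))

    # Form 2: a per-scene mapping from scene id to window config.
    data = SemanticSegmentationGeoDataConfig(
        scene_dataset=scene_dataset,
        window_opts={
            'train_scene_1': GeoDataWindowConfig(
                method=GeoDataWindowMethod.random,
                size=256, size_lims=(256, 257), max_windows=100),
            'train_scene_2': GeoDataWindowConfig(
                method=GeoDataWindowMethod.sliding, size=256, stride=128),
        })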

    The Dataset classes

    pytorch_learner/dataset/*.py

    The Dataset classes are now structured as shown in the sketch below. (Only classification and semantic segmentation are shown, but it is the same for object detection and regression.) All datasets are now based on AlbumentationsDataset.
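
    A rough sketch of that hierarchy (illustrative stubs only, not the real class bodies):

    class AlbumentationsDataset: ...                # base: applies albumentations transforms
    class ImageDataset(AlbumentationsDataset): ...  # reads pre-generated chips from disk
    class GeoDataset(AlbumentationsDataset): ...    # reads windows directly from a Scene
    class SlidingWindowGeoDataset(GeoDataset): ...  # windows laid out on a regular grid
    class RandomWindowGeoDataset(GeoDataset): ...   # windows sampled at random positions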

    DataConfig

    pytorch_learner/learner_config.py pytorch_learner/classification_learner_config.py pytorch_learner/regression_learner_config.py pytorch_learner/object_detection_learner_config.py pytorch_learner/semantic_segmentation_learner_config.py

    To draw a distinction from GeoData- classes, the old classes (that work with chips) have been renamed to ImageData- where applicable.

    Dataset creation

    pytorch_learner/learner_config.py pytorch_learner/learner.py

    Dataset creation code has been factored out of the Learner subclasses and into the Image- and Geo- DataConfigs and Learner.

    Backend config

    pytorch_backend/pytorch_*_config.py

    To avoid cluttering PyTorchLearnerBackendConfig, it now accepts a DataConfig directly instead of accepting individual fields of DataConfig like it did before. The examples and integration tests have been updated to make use of this.

    Other changes

    • ~Adds persist param to RasterioSource that forces it to keep the file(s) open.~
    • Modifies ActivateMixin to make it possible to keep a source permanently activated.
    • Adds lazy param to ChipClassificationLabelSource that makes it skip label reading/inference during initialization.
    • Refactors Box, Scene, RasterSource, SemanticSegmentationLabelSource, ChipClassificationLabelSource, ChipClassificationLabels to make them work better with GeoDataset.

    Checklist

    • [x] Updated docs/changelog.rst
    • [x] Added needs-backport label if PR is bug fix that applies to previous minor release
    • [x] Ran scripts/format_code and committed any changes
    • [x] Documentation updated if needed
    • [x] PR has a name that won't get you publicly shamed for vagueness

    Testing Instructions

    • How to test this PR
      • See updated integration tests and examples.
      • ~See new examples (isprs_potsdam_nochip.py and spacenet_rio_nochip.py).~
      • See updated examples (isprs_potsdam.py, spacenet_rio.py, and cowc_potsdam.py).

    Closes #1040

    opened by AdeelH 18
  •  4 channel test- Training dataset has fewer elements than batch size

    Hi! I am trying to train on 4 channels (R, G, B, elevation). I am using the master branch in a Docker image with local data.

    After many tries I get the same error when the run reaches the train command: 'Training dataset has fewer elements than batch size.' I tried setting the batch size to 1 and increasing the number of epochs, and I also tried to both train and validate on image 2 instead of image 3, but I get the same error every time.

    Can't figure out if it's something in my code or the data that I have to change?

    Message:

    File "/opt/src/rastervision_pytorch_learner/rastervision/pytorch_learner/learner.py", line 541, in setup_data 'Training dataset has fewer elements than batch size.')

    My data:

    https://drive.google.com/drive/folders/1ed0NpcjWOdkiSEuliszkDmytuLqVrdO5?usp=sharing

    1 Image 1

    2 Image 2

    3 Image 3

    import os
    from os.path import join, basename
    
    from rastervision.core.rv_pipeline import *
    from rastervision.core.backend import *
    from rastervision.core.data import *
    from rastervision.core.analyzer import *
    from rastervision.pytorch_backend import *
    from rastervision.pytorch_learner import *
    import albumentations as A
    from rastervision.pytorch_backend.examples.utils import (get_scene_info,
                                                             save_image_crop)
    from rastervision.pytorch_backend.examples.semantic_segmentation.utils import (
        example_multiband_transform, example_rgb_transform, imagenet_stats,
        Unnormalize)
    
    def get_config(runner,
                   multiband: bool = True,
                   external_model: bool = False,
                   augment: bool = False,
                   nochip: bool = False,
                   test: bool = False):
        root_uri = '/opt/data/output/'
        train_image_uris = ['/opt/data/data_input/images/1.tif','/opt/data/data_input/images/2.tif']
        train_label_uris = ['/opt/data/data_input/labels/1.geojson','/opt/data/data_input/labels/2.geojson']
        train_scene_ids = ['1','2']
        train_scene_list = list(zip(train_scene_ids, train_image_uris, train_label_uris))
    
        val_image_uri = '/opt/data/data_input/images/3.tif'
        val_label_uri = '/opt/data/data_input/labels/3.geojson'
        val_scene_id = '3'
      
    
        train_scenes_input = []
    
        if multiband:
            # use all 4 channels
            channel_order = [0, 1, 2, 3]
            channel_display_groups = {'RGB': (0, 1, 2), 'elev': (3, )}
            aug_transform = example_multiband_transform
        else:
            # use elev, red, & green channels only
            channel_order = [3, 0, 1]
            channel_display_groups = None
            aug_transform = example_rgb_transform
    
        if augment:
            mu, std = imagenet_stats['mean'], imagenet_stats['std']
            mu, std = mu[channel_order], std[channel_order]
    
            base_transform = A.Normalize(mean=mu.tolist(), std=std.tolist())
            plot_transform = Unnormalize(mean=mu, std=std)
    
            aug_transform = A.to_dict(aug_transform)
            base_transform = A.to_dict(base_transform)
            plot_transform = A.to_dict(plot_transform)
        else:
            aug_transform = None
            base_transform = None
            plot_transform = None
    
        chip_sz = 300
        img_sz = chip_sz
        if nochip:
            chip_options = SemanticSegmentationChipOptions()
        else:
            chip_options = SemanticSegmentationChipOptions(
                window_method=SemanticSegmentationWindowMethod.sliding,
                stride=chip_sz)
    
        class_config = ClassConfig(
            names=['building', 'background'], colors=['red', 'black'])
    
        def make_scene(scene_id, image_uri, label_uri):
         
            raster_source = RasterioSourceConfig(
                uris=[image_uri],
                channel_order=channel_order,
                transformers=[StatsTransformerConfig()])
            vector_source = GeoJSONVectorSourceConfig(
                uri=label_uri, default_class_id=0, ignore_crs_field=True)
            label_source = SemanticSegmentationLabelSourceConfig(
                raster_source=RasterizedSourceConfig(
                    vector_source=vector_source,
                    rasterizer_config=RasterizerConfig(background_class_id=1)))
            return SceneConfig(
                id=scene_id,
                raster_source=raster_source,
                label_source=label_source)
    
    
        for scene in train_scene_list:
            train_scenes_input.append(make_scene(*scene))
            
        dataset = DatasetConfig(
            class_config=class_config,
            train_scenes=train_scenes_input,
            validation_scenes=[
                make_scene(val_scene_id, val_image_uri, val_label_uri)
            ])
        
        
    
        # Use the PyTorch backend for the SemanticSegmentation pipeline.
        chip_sz = 300
        backend = PyTorchSemanticSegmentationConfig(
            data=SemanticSegmentationImageDataConfig(),
            model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
            solver=SolverConfig(lr=1e-4, num_epochs=10, batch_sz=1, one_cycle=True))
        chip_options = SemanticSegmentationChipOptions(
            window_method=SemanticSegmentationWindowMethod.random_sample,
            chips_per_scene=10)
    
        return SemanticSegmentationConfig(
            root_uri=root_uri,
            dataset=dataset,
            backend=backend,
            train_chip_sz=chip_sz,
            predict_chip_sz=chip_sz)
    
    bug 
    opened by Tobias1234 14
  • SemSeg class issue

    Note that my semantic segmentation ground truths are arrays that are 0 for background and 255 for buildings. Images are RGB. Most likely the ground truth should be changed before passing it to raster-vision, but I'm not sure what the expected format is. The following is the semantic segmentation module I've set up.

    import os
    from os.path import join, basename
    
    from rastervision.core.rv_pipeline import *
    from rastervision.core.backend import *
    from rastervision.core.data import *
    from rastervision.core.analyzer import *
    from rastervision.pytorch_backend import *
    from rastervision.pytorch_learner import *
    import albumentations as A
    from rastervision.pytorch_backend.examples.utils import (get_scene_info,
                                                             save_image_crop)
    from rastervision.pytorch_backend.examples.semantic_segmentation.utils import (
        example_multiband_transform, example_rgb_transform, imagenet_stats,
        Unnormalize)
    
    # INRIA dataset in total contains 360 images of size 5000 x 5000 across 10 cities
    TRAIN_CITIES = ["austin", "chicago", "kitsap", "tyrol-w", "vienna"]
    VAL_CITIES = ["vienna", "kitsap"]
    # Debug
    TRAIN_CITIES = ["austin"]
    VAL_CITIES = ["vienna"]
    
    TEST_CITIES = ["bellingham", "bloomington", "innsbruck", "sfo", "tyrol-e"]
    NUM_IMAGES_PER_CITY = 3 #36
    
    CLASS_NAMES = ["Building"]
    CLASS_COLORS = ["red"]
    
    raw_uri = "/opt/data/inria/AerialImageDataset"
    root_uri = "/opt/src/code/output"
    nochip = False
    augment = False
    test = False
    
    def get_config(runner,
                   raw_uri: str,
                   root_uri: str,
                   augment: bool = False,
                   nochip: bool = False,
                   test: bool = False):
    
        channel_order = [0, 1, 2]
        channel_display_groups = None
        aug_transform = example_rgb_transform
    
        if augment:
            mu, std = imagenet_stats['mean'], imagenet_stats['std']
            mu, std = mu[channel_order], std[channel_order]
    
            base_transform = A.Normalize(mean=mu.tolist(), std=std.tolist())
            plot_transform = Unnormalize(mean=mu, std=std)
    
            aug_transform = A.to_dict(aug_transform)
            base_transform = A.to_dict(base_transform)
            plot_transform = A.to_dict(plot_transform)
        else:
            aug_transform = None
            base_transform = None
            plot_transform = None
    
    
        chip_sz = 256
        img_sz = chip_sz
        
        if nochip:
            chip_options = SemanticSegmentationChipOptions()
        else:
            chip_options = SemanticSegmentationChipOptions(
                window_method=SemanticSegmentationWindowMethod.sliding,
                stride=chip_sz)
    
        class_config = ClassConfig(names=CLASS_NAMES, colors=CLASS_COLORS)
        class_config.ensure_null_class()
    
        
        def make_scene(id) -> SceneConfig:
            raster_uri = f'{raw_uri}/train/images/{id}.tif'
            label_uri = f'{raw_uri}/train/gt/{id}.tif'
    
    
            raster_source = RasterioSourceConfig(
                uris=[raster_uri], channel_order=[0,1,2])
    
            label_source = SemanticSegmentationLabelSourceConfig(
                raster_source=RasterioSourceConfig(uris=[label_uri]))
    
            # URI will be injected by scene config.
            # Using rgb=True because we want prediction TIFFs to be in
            # RGB format.
            label_store = SemanticSegmentationLabelStoreConfig(
                rgb=True, vector_output=[PolygonVectorOutputConfig(class_id=0)])
    
            scene = SceneConfig(
                id=id,
                raster_source=raster_source,
                label_source=label_source,
                label_store=label_store)
    
            return scene
    
        scene_dataset = DatasetConfig(
            class_config=class_config,
            train_scenes=[make_scene(city + str(n)) for city in TRAIN_CITIES for n in range(1, NUM_IMAGES_PER_CITY + 1)],
            validation_scenes=[make_scene(city + str(n)) for city in VAL_CITIES for n in range(1, NUM_IMAGES_PER_CITY + 1)])
    
    
        if nochip:
            window_opts = {}
            # set window configs for training scenes
            for s in scene_dataset.train_scenes:
                window_opts[s.id] = GeoDataWindowConfig(
                    # method=GeoDataWindowMethod.sliding,
                    method=GeoDataWindowMethod.random,
                    size=img_sz,
                    # size_lims=(200, 300),
                    h_lims=(200, 300),
                    w_lims=(200, 300),
                    max_windows=2209,
                )
            # set window configs for validation scenes
            for s in scene_dataset.validation_scenes:
                window_opts[s.id] = GeoDataWindowConfig(
                    method=GeoDataWindowMethod.sliding,
                    size=img_sz,
                    stride=img_sz // 2)
    
            data = SemanticSegmentationGeoDataConfig(
                scene_dataset=scene_dataset,
                window_opts=window_opts,
                img_sz=img_sz,
                img_channels=len(channel_order),
                num_workers=4,
                channel_display_groups=channel_display_groups)
        else:
            data = SemanticSegmentationImageDataConfig(
                img_sz=img_sz,
                img_channels=len(channel_order),
                num_workers=4,
                channel_display_groups=channel_display_groups,
                base_transform=base_transform,
                aug_transform=aug_transform,
                plot_options=PlotOptions(transform=plot_transform))
    
        model = SemanticSegmentationModelConfig(backbone=Backbone.resnet50)
    
        backend = PyTorchSemanticSegmentationConfig(
            data=data,
            model=model,
            solver=SolverConfig(
                lr=1e-4,
                num_epochs=10,
                test_num_epochs=2,
                batch_sz=8,
                test_batch_sz=2,
                one_cycle=True),
            log_tensorboard=True,
            run_tensorboard=False,
            test_mode=test)
    
        pipeline = SemanticSegmentationConfig(
            root_uri=root_uri,
            dataset=scene_dataset,
            backend=backend,
            channel_display_groups=channel_display_groups,
            train_chip_sz=chip_sz,
            predict_chip_sz=chip_sz,
            chip_options=chip_options)
    
        return pipeline
    
    

    Error encountered:

    ....
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [831,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [960,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [961,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [962,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [963,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [189,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [190,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [191,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [698,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [699,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [700,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [701,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [702,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [1,0,0], thread: [703,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [530,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [531,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [273,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [274,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [275,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [276,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [277,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [278,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [279,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [280,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:103: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [4,0,0], thread: [281,0,0] Assertion `t >= 0 && t < n_classes` failed.
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 248, in <module>
        main()
      File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
        return self.main(*args, **kwargs)
      File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 697, in main
        rv = self.invoke(ctx)
      File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
        return callback(*args, **kwargs)
      File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 240, in run_command
        runner=runner)
      File "/opt/src/rastervision_pipeline/rastervision/pipeline/cli.py", line 217, in _run_command
        command_fn()
      File "/opt/src/rastervision_core/rastervision/core/rv_pipeline/rv_pipeline.py", line 115, in train
        backend.train(source_bundle_uri=self.config.source_bundle_uri)
      File "/opt/src/rastervision_pytorch_backend/rastervision/pytorch_backend/pytorch_learner_backend.py", line 75, in train
        learner.main()
      File "/opt/src/rastervision_pytorch_learner/rastervision/pytorch_learner/learner.py", line 146, in main
        self.train()
      File "/opt/src/rastervision_pytorch_learner/rastervision/pytorch_learner/learner.py", line 1149, in train
        train_metrics = self.train_epoch()
      File "/opt/src/rastervision_pytorch_learner/rastervision/pytorch_learner/learner.py", line 1083, in train_epoch
        loss.backward()
      File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
    /opt/src/code/output/Makefile:12: recipe for target '2' failed
    make: *** [2] Error 1
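
    A hedged reading of the failure above: the `t >= 0 && t < n_classes` assertions fire when label pixels fall outside the valid class-id range, and a 0/255 mask read directly as class ids would do exactly that (only ids 0 and 1 exist after ensure_null_class()). A standalone preprocessing sketch, using plain rasterio/numpy rather than the Raster Vision API, with remap_mask as a made-up helper name:

    import numpy as np
    import rasterio

    def remap_mask(src_uri: str, dst_uri: str) -> None:
        """Rewrite a 0/255 building mask as 0/1 class ids (hypothetical helper)."""
        with rasterio.open(src_uri) as src:
            mask = src.read(1)
            profile = src.profile
        # 255 (building) -> class id 0 ('Building'); 0 -> class id 1 (the null class).
        remapped = np.where(mask == 255, 0, 1).astype('uint8')
        profile.update(count=1, dtype='uint8')
        with rasterio.open(dst_uri, 'w', **profile) as dst:
            dst.write(remapped, 1)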
    
    opened by ashnair1 13
  • Use one model.pth file for multiple datasets.

    I have many datasets for training, but while training a "No RAM space" error is displayed, so I want to divide the whole dataset into smaller datasets.

    So is it possible to use the same "model.pth" for multiple datasets, and how?
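
    A hedged sketch of one way this could work, assuming your version exposes the init_weights field on the model config (semantic segmentation shown; the other tasks have analogous configs): train on one subset, then point the next run's model at the previous run's weights. The path below is made up.

    from rastervision.pytorch_learner import (Backbone,
                                              SemanticSegmentationModelConfig)

    # Hypothetical: initialize this run's model from the previous run's weights.
    model = SemanticSegmentationModelConfig(
        backbone=Backbone.resnet50,
        init_weights='/opt/data/output/run1/train/model.pth')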

    opened by sagar1899 12
  • Chip classification training and prediction on multiple GPU's

    Overview

    Making use of multi-GPU systems during training and prediction for chip classification. The parallelism happens along the batch dimension, so your batch size needs to be equal to or larger than the number of GPUs you want used. No user input is required; the maximum number of available GPUs is used automatically.
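
    For context, a minimal sketch of batch-dimension parallelism in plain PyTorch, assuming torch.nn.DataParallel is the mechanism (the model below is just a stand-in):

    import torch
    import torch.nn as nn

    # Tiny stand-in model; any nn.Module is handled the same way.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(8, 2))

    if torch.cuda.device_count() > 1:
        # DataParallel splits each input batch across all visible GPUs,
        # which is why the batch size must be >= the number of GPUs.
        model = nn.DataParallel(model)
    model = model.to('cuda' if torch.cuda.is_available() else 'cpu')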

    Checklist

    • [x] Updated docs/changelog.rst
    • [x] Added needs-backport label if PR is bug fix that applies to previous minor release
    • [x] Ran scripts/format_code and committed any changes
    • [x] Documentation updated if needed
    • [x] PR has a name that won't get you publicly shamed for vagueness

    Notes

    Does not yet work with multiple experiments, only with a single one.

    Testing Instructions

    • Tested it on a multi-GPU system; the batch size needs to be larger than 1 before the 2nd GPU is used
    • ~~Training on multiple experiments results in a memory error. Not sure how this can be resolved yet.~~

    Closes #XXX

    opened by lmbak 12
  • Unable to use with_uris method to handle multiple uris

    The with_uris(uris) method is used to set the URIs of the GeoTIFFs containing the raster data. After following the documentation, I am still unable to include multiple TIFF URIs. Has anyone been able to do this?

    Including a snippet of my code for review:

    image_base_uri = os.listdir('images')
    train_image_uri = [x for x in image_base_uri]
    
    train_raster_source = rv.RasterSourceConfig.builder(rv.GEOTIFF_SOURCE) \
                                                       .with_uris(train_image_uri) \
                                                       .with_stats_transformer() \
                                                       .build()
    

    And the error I get:

     File "/opt/src/rastervision/runner/experiment_runner.py", line 153, in run
        unique_commands, rerun_commands, skip_file_check=skip_file_check)
      File "/opt/src/rastervision/runner/command_dag.py", line 61, in __init__
        '\t{}\n'.format(',\b\t'.join(missing_files)))
    rastervision.core.config.ConfigError: Files do not exist and are not supplied by commands:
    

    Basically, I need sample code that uses multiple TIFF files from a local directory for training, not files hosted online. The Raster Vision documentation shows an experiment that trains with just one sample.
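
    One thing worth checking (a guess, prompted by the "Files do not exist" error): os.listdir returns bare filenames rather than full paths, so the builder may be handed URIs that do not point at real files. A sketch that builds absolute paths instead ('images' being the directory from the snippet above):

    import os

    image_dir = os.path.abspath('images')
    train_image_uris = [
        os.path.join(image_dir, name)
        for name in sorted(os.listdir(image_dir))
        if name.endswith('.tif')
    ]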

    opened by oluwayetty 12
  • Segmentation: Chips and Training

    Overview

    This pull request adds the ability to generate Deeplab-compatible TFRecords and the ability to train a Deeplab model using the same. Preliminary to https://github.com/azavea/raster-vision/pull/321 .

    Checklist

    • [x] Ran scripts/format_code and committed any changes
    • [x] Documentation updated if needed
      • Will do as part of #321
    • [x] PR has a name that won't get you publicly shamed for vagueness
    • [x] DocStrings
    • [x] Remote operation
    • [x] Ensure Tensorboard
    • [x] Hex to class mapping, use colors in debug chips
    • [x] Add Potsdam Georeferencing Script
    • [x] Respond to comments

    Testing Instructions

    Step 1

    Prepare the test data by using the contrib/cowc/transfer_georeference.py script. Typing

    transfer_georeference.py  top_potsdam_2_10_RGBIR.tif top_potsdam_2_10_label_noBoundary.tif top_potsdam_2_10_label_georeferenced.tif
    

    will apply the georeferencing information from top_potsdam_2_10_RGBIR.tif to the ungeoreferenced label file top_potsdam_2_10_label_noBoundary.tif to produce a new file top_potsdam_2_10_label_georeferenced.tif. The same must be done for 2_11.

    Step 2

    Use one of the two workflow configuration files: samples/workflow-configs/segmentation/deeplab-test.json or samples/workflow-configs/segmentation/deeplab-remote-test.json.

    opened by jamesmcclain 12
  • Why model_bundle is not predicting custom data?

    ❓ Questions and Help


    Hey, I tried training semantic segmentation in RV using this example

    export RAW_URI="data/spacenet-dataset/"
    export ROOT_URI="data/local-output/"
    rastervision run local rastervision_pytorch_backend/rastervision/pytorch_backend/examples/semantic_segmentation/spacenet_vegas -a raw_uri $RAW_URI -a root_uri $ROOT_URI

    I downloaded the dataset using aws s3 sync. The data download failed in the middle, but I started training with the data that was available. My dataset directory looks like this: /data/spacenet-dataset/spacenet/SN2_buildings/train/AOI_2_Vegas. The model trained for 4 epochs.

    And then, when I used the model_bundle.zip to make predictions on my own data, it didn't predict anything at all.

    ❓ Why is this happening? Should I change the number of epochs or other parameters?

    Can the model that was trained using the Vegas dataset predict on my data? If not, please point me to documentation on custom training of a semantic segmentation model and on labelling and exporting TIFF data. Thanks in advance.

    Environment: Ubuntu 20.04 LTS

    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - Devices:
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
    0, NVIDIA GeForce RTX 2070 SUPER, 510.60.02, 8192 MiB, 1781 MiB, 6192 MiB

    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - PyTorch version: 1.9.1+cu102
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - CUDA available: True
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - CUDA version: 10.2
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - CUDNN version: 7605
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - Number of CUDA devices: 1
    2022-04-29 08:59:46:rastervision.pytorch_learner.learner: INFO - Active CUDA Device: GPU 0

    question 
    opened by Vijay-P1999 11
  • Support for new windowing methods

    In addition to the standard chipping for SSD models, new windowing methods are useful for Faster R-CNN (or other models that maintain aspect ratios).

    I can't say I'm a fan of my changes to src/rastervision/utils/files.py to be lazy with rasters in S3 (particularly the AWS NAIP S3 bucket), but it is convenient, instead of having to make sure that the requester-pays flag is set...

    Also had to add scripts/compile as it isn't in the repo.

    opened by dustymugs 11
  • Semantic Segmentation Experiment: Predicted labels are correct in `test_preds.png` but the actual predictions are wrong

    🐛 Bug

    To Reproduce

    Steps to reproduce the behavior:

    1. Use docker image rastervision-pytorch-v.0.13.1
    2. Use the model_bundle.zip and rionegro_rgb_8_2.tif for prediction. Files are available here.
    3. This model was trained to predict cars and motorcycles. You should be getting correct predictions, but that won't happen: you will get incorrect results similar to the labels.tif file that is here. Also, there should be vector outputs in the output folder that you provide, but that doesn't happen either.

    The model had been trained using AWS Batch. Here's the get_config() code:

    import os
    
    from rastervision.core.rv_pipeline import *
    from rastervision.core.backend import *
    from rastervision.core.data import *
    from rastervision.pytorch_backend import *
    from rastervision.pytorch_learner import *
    
    IDS = ['El_Retiro_1_3', 'El_Retiro_1_4', 'El_Retiro_2_1', ...]
    
    def build_scene(uri, id, channel_order=None):
        label_file = id + '.geojson'
        label_uri = uri + label_file
        print("Label uri: ", label_uri)
    
        image_file = id + '.tif'
        image_uri = uri + image_file
        print("Image uri: ", image_uri)
    
        raster_source = RasterioSourceConfig(
            uris=[image_uri],
            channel_order=channel_order,
            transformers=[StatsTransformerConfig()])
        
        vector_source = GeoJSONVectorSourceConfig(
            uri=label_uri,
            default_class_id=0,
            ignore_crs_field=True)
    
        label_source = SemanticSegmentationLabelSourceConfig(
            raster_source=RasterizedSourceConfig(
                vector_source=vector_source,
                rasterizer_config=RasterizerConfig(background_class_id=0)))
        
        label_store = SemanticSegmentationLabelStoreConfig(
            vector_output=[PolygonVectorOutputConfig(class_id=1),
                            PolygonVectorOutputConfig(class_id=2),
                            PolygonVectorOutputConfig(class_id=3)])
    
        return SceneConfig(
            id=id,
            raster_source=raster_source,
            label_source=label_source,
            label_store=label_store)
    
    def get_config(runner, raw_uri, root_uri, test=True):
        scene_ids = IDS
    
        split_ratio = 0.8
        num_train_ids = round(len(scene_ids) * split_ratio)
        train_ids = scene_ids[0:num_train_ids]
        val_ids = scene_ids[num_train_ids:]
    
        num_epochs = 200
    
        if test:
            train_ids = train_ids[:5:]
            val_ids = val_ids[:3:]
    
            num_epochs = 2
        
        class_config = ClassConfig(
            names=['background', 'motorcycle', 'car', 'ghost'],
            colors=['#ffff00', '#0000ff', '#00ffff', '#00ff00'],
            null_class='background'
        )
    
        channel_order = [0, 1, 2]
        train_scenes = [
            build_scene(raw_uri, id, channel_order) for id in train_ids
        ]
        val_scenes = [
            build_scene(raw_uri, id, channel_order) for id in val_ids
        ]
    
        dataset = DatasetConfig(
            class_config=class_config,
            train_scenes=train_scenes,
            validation_scenes=val_scenes)
        
        chip_sz = 325
        img_sz = chip_sz
        chip_options = SemanticSegmentationChipOptions(
            window_method=SemanticSegmentationWindowMethod.sliding,
            stride=img_sz)
    
        data = SemanticSegmentationImageDataConfig(
                img_sz=img_sz, num_workers=4)
    
        backend = PyTorchSemanticSegmentationConfig(
            data=data,
            model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
            solver=SolverConfig(
                lr=1e-4,
                num_epochs=num_epochs,
                test_num_epochs=2,
                batch_sz=8,
                one_cycle=True))
    
        return SemanticSegmentationConfig(
            root_uri=root_uri,
            dataset=dataset,
            backend=backend,
            train_chip_sz=img_sz,
            predict_chip_sz=img_sz,
            chip_options=chip_options)
    

    Expected behavior

    Please have a look at eval.json and test_preds.jpeg (files available here). According to them, the model has been trained properly, as can be seen from the predicted labels in test_preds.jpeg. Moreover, it should also provide vector/GeoJSON output for each class, since I have already configured the get_config() code that way. As you can see, I've already added:

        label_store = SemanticSegmentationLabelStoreConfig(
            vector_output=[PolygonVectorOutputConfig(class_id=1),
                            PolygonVectorOutputConfig(class_id=2),
                            PolygonVectorOutputConfig(class_id=3)])
    
        return SceneConfig(
            id=id,
            raster_source=raster_source,
            label_source=label_source,
            label_store=label_store)
    

    Environment

    Docker image: /pytorch-v.0.13.1

    bug 
    opened by theoway 10
  • Bump numpy from 1.23.3 to 1.24.1 in /rastervision_core

    Bumps numpy from 1.23.3 to 1.24.1.

    Release notes

    Sourced from numpy's releases.

    v1.24.1

    NumPy 1.24.1 Release Notes

    NumPy 1.24.1 is a maintenance release that fixes bugs and regressions discovered after the 1.24.0 release. The Python versions supported by this release are 3.8-3.11.

    Contributors

    A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

    • Andrew Nelson
    • Ben Greiner +
    • Charles Harris
    • Clément Robert
    • Matteo Raso
    • Matti Picus
    • Melissa Weber Mendonça
    • Miles Cranmer
    • Ralf Gommers
    • Rohit Goswami
    • Sayed Adel
    • Sebastian Berg

    Pull requests merged

    A total of 18 pull requests were merged for this release.

    • #22820: BLD: add workaround in setup.py for newer setuptools
    • #22830: BLD: CIRRUS_TAG redux
    • #22831: DOC: fix a couple typos in 1.23 notes
    • #22832: BUG: Fix refcounting errors found using pytest-leaks
    • #22834: BUG, SIMD: Fix invalid value encountered in several ufuncs
    • #22837: TST: ignore more np.distutils.log imports
    • #22839: BUG: Do not use getdata() in np.ma.masked_invalid
    • #22847: BUG: Ensure correct behavior for rows ending in delimiter in...
    • #22848: BUG, SIMD: Fix the bitmask of the boolean comparison
    • #22857: BLD: Help raspian arm + clang 13 about __builtin_mul_overflow
    • #22858: API: Ensure a full mask is returned for masked_invalid
    • #22866: BUG: Polynomials now copy properly (#22669)
    • #22867: BUG, SIMD: Fix memory overlap in ufunc comparison loops
    • #22868: BUG: Fortify string casts against floating point warnings
    • #22875: TST: Ignore nan-warnings in randomized out tests
    • #22883: MAINT: restore npymath implementations needed for freebsd
    • #22884: BUG: Fix integer overflow in in1d for mixed integer dtypes #22877
    • #22887: BUG: Use whole file for encoding checks with charset_normalizer.

    Checksums

    ... (truncated)

    Commits
    • a28f4f2 Merge pull request #22888 from charris/prepare-1.24.1-release
    • f8fea39 REL: Prepare for the NumPY 1.24.1 release.
    • 6f491e0 Merge pull request #22887 from charris/backport-22872
    • 48f5fe4 BUG: Use whole file for encoding checks with charset_normalizer [f2py] (#22...
    • 0f3484a Merge pull request #22883 from charris/backport-22882
    • 002c60d Merge pull request #22884 from charris/backport-22878
    • 38ef9ce BUG: Fix integer overflow in in1d for mixed integer dtypes #22877 (#22878)
    • bb00c68 MAINT: restore npymath implementations needed for freebsd
    • 64e09c3 Merge pull request #22875 from charris/backport-22869
    • dc7bac6 TST: Ignore nan-warnings in randomized out tests
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
  • Bump gdal from 3.5.2 to 3.6.1

    Bumps gdal from 3.5.2 to 3.6.1.

    Release notes

    Sourced from gdal's releases.

    GDAL 3.6.1

    Bug fix release. See release notes: https://github.com/OSGeo/gdal/blob/v3.6.1/NEWS.md

    Warning: GDAL 3.6.1 officially retracts GDAL 3.6.0 which could cause corruption of the spatial index of GeoPackage files it created (in tables with 100 000 features or more): cf qgis/QGIS#51188 and OSGeo/gdal#6911. GDAL 3.6.1 fixes that issue. Setting OGR_GPKG_ALLOW_THREADED_RTREE=NO environment variable (at generation time) also works around the issue with GDAL 3.6.0. Users who have generated corrupted GeoPackage files with 3.6.0 can regenerate them with 3.6.1 with, for example, "ogr2ogr out_ok.gpkg in_corrupted.gpkg" (assuming a GeoPackage file with vector content only)

    GDAL 3.6.0

    Warning: GDAL 3.6.0 has officially been retracted and is superseded per GDAL 3.6.1. GDAL 3.6.0 could cause corruption of the spatial index of GeoPackage files it created (in tables with 100 000 features or more): cf qgis/QGIS#51188 and OSGeo/gdal#6911. GDAL 3.6.1 fixes that issue. Setting OGR_GPKG_ALLOW_THREADED_RTREE=NO environment variable (at generation time) also works around the issue with GDAL 3.6.0. Users who have generated corrupted GeoPackage files with 3.6.0 can regenerate them with 3.6.1 with, for example, "ogr2ogr out_ok.gpkg in_corrupted.gpkg" (assuming a GeoPackage file with vector content only)

    • CMake is the only build system available in-tree. autoconf and nmake build systems have been removed
    • OpenFileGDB: write and update support (v10.x format only), without requiring any external dependency, with same (and actually larger) functional scope as write side of the FileGDB driver
    • RFC 86: Column-oriented read API for vector layers. Implemented in core, Arrow, Parquet, GPKG and FlatGeoBuf drivers
    • Add read/write raster JPEGXL driver for standalone JPEG-XL files. Requires libjxl
    • Add KTX2 and BASISU read/write raster drivers for texture formats. Require (forked) basisu library
    • Vector layer API: table relationship discovery & creation, Upsert() operation
    • GeoTIFF: add multi-threaded read capabilities (requires NUM_THREADS open option or GDAL_NUM_THREADS configuration option to be set)
    • Multiple performance improvements in GPKG driver
    • ogr_layer_algebra.py: promoted to official script (#1581)
    • Code linting and security fixes
    • Bump of shared lib major version
    • Full release notes at https://github.com/OSGeo/gdal/blob/v3.6.0/NEWS.md

    GDAL 3.5.3

    Bug fix release. See release notes: https://github.com/OSGeo/gdal/blob/v3.5.3/NEWS.md

    Changelog

    Sourced from gdal's changelog.

    GDAL/OGR 3.6.1 Release Notes

    GDAL 3.6.1 is a bugfix release. It officially retracts GDAL 3.6.0 which could cause corruption of the spatial index of GeoPackage files it created (in tables with 100 000 features or more): cf qgis/QGIS#51188 and OSGeo/gdal#6911. GDAL 3.6.1 fixes that issue. Setting OGR_GPKG_ALLOW_THREADED_RTREE=NO environment variable (at generation time) also works around the issue with GDAL 3.6.0. Users who have generated corrupted GeoPackage files with 3.6.0 can regenerate them with 3.6.1 with, for example, "ogr2ogr out_ok.gpkg in_corrupted.gpkg" (assuming a GeoPackage file with vector content only)

    Build

    • Fix build with -DOGR_ENABLE_DRIVER_GML=OFF (#6647)
    • Add build support for libhdf5 1.13.2 and 1.13.3 (#6657)
    • remove RECOMMENDED flag to BRUNSLI and QB3. Add it for CURL (cf spack/spack#33856)
    • configure.cmake: fix wrong detection of pread64 for iOS
    • FindSQLite3.cmake: add logic to invalidate SQLite3_HAS_ variables if the library changes
    • detect if sqlite3 is missing mutex support
    • Fix build when sqlite3_progress_handler() is missing
    • do not use Armadillo if it lacks LAPACK support (such as on Alpine)
    • make it a FATAL_ERROR if the user used -DGDAL_USE_ARMADILLO=ON and it can't be used
    • Fix static HDF4 libraries not found on Windows
    • Internal libjpeg: rename extra symbol for iOS compatibility (#6725)
    • gdaldataset: fix false-positive gcc 12.2.1 -O2 warning about truncation of buffer
    • Add minimal support for reading 12-bit JPEG images with libjpeg-turbo 2.2dev and internal libjpeg12
    • Fix detection of blosc version number
    • Add missing includes to fix build with upcoming gcc 13

    GDAL 3.6.1

    Port

    • CPLGetExecPath(): add MacOSX and FreeBSD implementations; prevent potential one-byte overflow on Linux&Windows
    • /vsiaz/: make AppendBlob operation compatible of Azurite (#6759)
    • /vsiaz/: accept Azure connection string with only BlobEndpoint and SharedAccessSignature (#6870)
    • S3: fix issue with EC2 IDMSv2 request failing inside Docker container with default networking

    Algorithms

    ... (truncated)

    Commits
    • 6500333 Prepare for GDAL 3.6.1
    • 8c2c9d9 Merge pull request #6917 from OSGeo/backport-6915-to-release/3.6
    • 7538be4 Add missing <cstdint> headers for uint*_t types
    • c997674 Merge pull request #6916 from OSGeo/backport-6911-to-release/3.6
    • 25ecdaa Merge pull request #6914 from OSGeo/backport-6907-to-release/3.6
    • f25aee2 Merge pull request #6913 from OSGeo/backport-6906-to-release/3.6
    • c4f8495 GPKG: Add heuristics to try to detect corrupted RTree generated by GDAL 3.6.0...
    • aa069fc GPKG: add debug traces and testing for issue of https://github.com/qgis/QGIS/...
    • 32d9462 GPKG: fix issue with StartTransaction() causing features to be omitted when c...
    • f2183a0 gdalwarp: speed-up warping with cutline when the source dataset or processing...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump awscli from 1.25.90 to 1.27.32 in /rastervision_aws_s3

    Bump awscli from 1.25.90 to 1.27.32 in /rastervision_aws_s3

    Bumps awscli from 1.25.90 to 1.27.32.

    Changelog

    Sourced from awscli's changelog.

    1.27.32

    • api-change:appflow: This release updates the ListConnectorEntities API action so that it returns paginated responses that customers can retrieve with next tokens.
    • api-change:cloudfront: Updated documentation for CloudFront
    • api-change:datasync: AWS DataSync now supports the use of tags with task executions. With this new feature, you can apply tags each time you execute a task, giving you greater control and management over your task executions.
    • api-change:efs: Update efs command to latest version
    • api-change:guardduty: This release provides the valid characters for the Description and Name field.
    • api-change:iotfleetwise: Updated error handling for empty resource names in "UpdateSignalCatalog" and "GetModelManifest" operations.
    • api-change:sagemaker: AWS SageMaker - Features: This release adds support for a random seed, an integer value used to initialize a pseudo-random number generator. Setting a random seed will allow the hyperparameter tuning search strategies to produce more consistent configurations for the same tuning job.

    1.27.31

    • api-change:backup-gateway: This release adds support for VMware vSphere tags, enabling customers to protect VMware virtual machines using tag-based policies for AWS tags mapped from vSphere tags. This release also adds support for a customer-accessible gateway-hypervisor interaction log and an upload bandwidth rate limit schedule.
    • api-change:connect: Added support for "English - New Zealand" and "English - South African" to be used with Amazon Connect Custom Vocabulary APIs.
    • api-change:ecs: This release adds support for container port ranges in ECS, a new capability that allows customers to provide container port ranges to simplify use cases where multiple ports are in use in a container. This release updates TaskDefinition mutation APIs and the Task description APIs.
    • api-change:eks: Add support for Windows managed nodes groups.
    • api-change:glue: This release adds support for AWS Glue Crawler with native DeltaLake tables, allowing Crawlers to classify Delta Lake format tables and catalog them for query engines to query against.
    • api-change:kinesis: Added StreamARN parameter for Kinesis Data Streams APIs. Added a new opaque pagination token for ListStreams. SDKs will auto-generate Account Endpoint when accessing Kinesis Data Streams.
    • api-change:location: This release adds support for a new style, "VectorOpenDataStandardLight" which can be used with the new data source, "Open Data Maps (Preview)".
    • api-change:m2: Adds an optional create-only KmsKeyId property to Environment and Application resources.
    • api-change:sagemaker: SageMaker Inference Recommender now allows customers to load test their models on various instance types using a private VPC.
    • api-change:securityhub: Added new resource details objects to ASFF, including resources for AwsEc2LaunchTemplate, AwsSageMakerNotebookInstance, AwsWafv2WebAcl and AwsWafv2RuleGroup.
    • api-change:translate: Raised the input byte size limit of the Text field in the TranslateText API to 10000 bytes.

    1.27.30

    • api-change:ce: This release supports percentage-based thresholds on Cost Anomaly Detection alert subscriptions.
    • api-change:cloudwatch: Update cloudwatch command to latest version
    • api-change:networkmanager: Appliance Mode support for AWS Cloud WAN.
    • api-change:redshift-data: This release adds a new --client-token field to the ExecuteStatement and BatchExecuteStatement operations. Customers can now run queries with the additional client token parameter to ensure idempotency (see the sketch after this list).
    • api-change:sagemaker-metrics: Update SageMaker Metrics documentation.
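
    A minimal sketch of the idempotency token in use via boto3; the database, workgroup, and SQL are placeholders:

    import uuid
    import boto3

    client = boto3.client('redshift-data')
    token = str(uuid.uuid4())  # reuse the same token when retrying the call
    resp = client.execute_statement(
        Database='dev',
        WorkgroupName='my-workgroup',
        Sql='SELECT 1;',
        ClientToken=token)
    print(resp['Id'])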

    1.27.29

    • api-change:cloudtrail: Merging mainline branch for service model into mainline release branch. There are no new APIs.
    • api-change:rds: This deployment adds ClientPasswordAuthType field to the Auth structure of the DBProxy.

    1.27.28

    • api-change:customer-profiles: This release allows custom strings in PartyType and Gender through 2 new attributes in the CreateProfile and UpdateProfile APIs: PartyTypeString and GenderString.
    • api-change:ec2: This release updates DescribeFpgaImages to show supported instance types of AFIs in its response.

    ... (truncated)

    Commits
    • 418e693 Merge branch 'release-1.27.32'
    • 903903f Bumping version to 1.27.32
    • 92908be Update changelog based on model updates
    • 5b19a8a Merge branch 'release-1.27.31'
    • 3b5a597 Merge branch 'release-1.27.31' into develop
    • 31c4ad4 Bumping version to 1.27.31
    • e8afc60 Merge commit '952d073e8ea55e4a68c6772e443807d86e5a0590' into stage-release-de...
    • 485d36b Update changelog based on model updates
    • 8076a39 Merge branch 'release-1.27.30'
    • cd7cf2e Merge branch 'release-1.27.30' into develop
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Bump boto3 from 1.24.89 to 1.26.32 in /rastervision_aws_s3

    Bump boto3 from 1.24.89 to 1.26.32 in /rastervision_aws_s3

    Bumps boto3 from 1.24.89 to 1.26.32.

    Changelog

    Sourced from boto3's changelog.

    1.26.32

    • enhancement:s3: s3.transfer methods accept path-like objects as input (see the sketch after this list)
    • api-change:appflow: [botocore] This release updates the ListConnectorEntities API action so that it returns paginated responses that customers can retrieve with next tokens.
    • api-change:cloudfront: [botocore] Updated documentation for CloudFront
    • api-change:datasync: [botocore] AWS DataSync now supports the use of tags with task executions. With this new feature, you can apply tags each time you execute a task, giving you greater control and management over your task executions.
    • api-change:efs: [botocore] Update efs client to latest version
    • api-change:guardduty: [botocore] This release provides the valid characters for the Description and Name field.
    • api-change:iotfleetwise: [botocore] Updated error handling for empty resource names in "UpdateSignalCatalog" and "GetModelManifest" operations.
    • api-change:sagemaker: [botocore] AWS SageMaker - Features: This release adds support for a random seed, an integer value used to initialize a pseudo-random number generator. Setting a random seed will allow the hyperparameter tuning search strategies to produce more consistent configurations for the same tuning job.
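
    A minimal sketch of the path-like enhancement to boto3's S3 transfer methods; the bucket and keys are placeholders:

    from pathlib import Path
    import boto3

    s3 = boto3.client('s3')
    # Filename arguments may now be pathlib.Path objects, not just strings.
    s3.upload_file(Path('data/scene.tif'), 'my-bucket', 'scenes/scene.tif')
    s3.download_file('my-bucket', 'scenes/scene.tif', Path('data/copy.tif'))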

    1.26.31

    • api-change:backup-gateway: [botocore] This release adds support for VMware vSphere tags, enabling customers to protect VMware virtual machines using tag-based policies for AWS tags mapped from vSphere tags. This release also adds support for a customer-accessible gateway-hypervisor interaction log and an upload bandwidth rate limit schedule.
    • api-change:connect: [botocore] Added support for "English - New Zealand" and "English - South African" to be used with Amazon Connect Custom Vocabulary APIs.
    • api-change:ecs: [botocore] This release adds support for container port ranges in ECS, a new capability that allows customers to provide container port ranges to simplify use cases where multiple ports are in use in a container. This release updates TaskDefinition mutation APIs and the Task description APIs.
    • api-change:eks: [botocore] Add support for Windows managed nodes groups.
    • api-change:glue: [botocore] This release adds support for AWS Glue Crawler with native DeltaLake tables, allowing Crawlers to classify Delta Lake format tables and catalog them for query engines to query against.
    • api-change:kinesis: [botocore] Added StreamARN parameter for Kinesis Data Streams APIs. Added a new opaque pagination token for ListStreams. SDKs will auto-generate Account Endpoint when accessing Kinesis Data Streams.
    • api-change:location: [botocore] This release adds support for a new style, "VectorOpenDataStandardLight" which can be used with the new data source, "Open Data Maps (Preview)".
    • api-change:m2: [botocore] Adds an optional create-only KmsKeyId property to Environment and Application resources.
    • api-change:sagemaker: [botocore] SageMaker Inference Recommender now allows customers to load test their models on various instance types using a private VPC.
    • api-change:securityhub: [botocore] Added new resource details objects to ASFF, including resources for AwsEc2LaunchTemplate, AwsSageMakerNotebookInstance, AwsWafv2WebAcl and AwsWafv2RuleGroup.
    • api-change:translate: [botocore] Raised the input byte size limit of the Text field in the TranslateText API to 10000 bytes.

    1.26.30

    • api-change:ce: [botocore] This release supports percentage-based thresholds on Cost Anomaly Detection alert subscriptions.
    • api-change:cloudwatch: [botocore] Update cloudwatch client to latest version
    • api-change:networkmanager: [botocore] Appliance Mode support for AWS Cloud WAN.
    • api-change:redshift-data: [botocore] This release adds a new --client-token field to the ExecuteStatement and BatchExecuteStatement operations. Customers can now run queries with the additional client token parameter to ensure idempotency.
    • api-change:sagemaker-metrics: [botocore] Update SageMaker Metrics documentation.

    1.26.29

    • api-change:cloudtrail: [botocore] Merging mainline branch for service model into mainline release branch. There are no new APIs.
    • api-change:rds: [botocore] This deployment adds ClientPasswordAuthType field to the Auth structure of the DBProxy.

    1.26.28

    • bugfix:Endpoint provider: [botocore] Updates ARN parsing resourceId delimiters

    ... (truncated)

    Commits
    • 3041f18 Merge branch 'release-1.26.32'
    • 6af157c Bumping version to 1.26.32
    • a445626 Add changelog entries from botocore
    • 3442539 S3 upload_file, download file to support path-lib objects (#2259)
    • dc25471 Merge branch 'release-1.26.31'
    • 3f5a1b3 Merge branch 'release-1.26.31' into develop
    • aa0f99a Bumping version to 1.26.31
    • 5051df9 Add changelog entries from botocore
    • c5d8e4a Merge branch 'release-1.26.30'
    • ed6b741 Merge branch 'release-1.26.30' into develop
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Bump boto3 from 1.24.89 to 1.26.32 in /rastervision_aws_batch

    Bump boto3 from 1.24.89 to 1.26.32 in /rastervision_aws_batch

    Bumps boto3 from 1.24.89 to 1.26.32.

    The changelog and commits are identical to the boto3 1.24.89 to 1.26.32 bump above.
    dependencies 
    opened by dependabot[bot] 0
Releases (v0.20.1)
  • v0.20.1 (Dec 29, 2022)

    Changelog

    https://docs.rastervision.io/en/0.20/changelog.html#raster-vision-0-20-1

    Pypi

    pip install rastervision==0.20.1
    

    https://pypi.org/project/rastervision/0.20.1/

    Notes

    • The pip installation is only guaranteed to work with Python 3.9. Anaconda is highly recommended.

    Docker image

    docker pull quay.io/azavea/raster-vision:pytorch-0.20
    
  • v0.20 (Dec 16, 2022)

    Changelog

    https://docs.rastervision.io/en/0.20/changelog.html#raster-vision-0-20-0

    Pypi

    pip install rastervision==0.20
    

    https://pypi.org/project/rastervision/0.20/

    Notes

    • The pip installation is only guaranteed to work with Python 3.9. Anaconda is highly recommended.
    • If you encounter problems with the gdal installation while installing rastervision_gdal_vsi, you can either
      • try installing gdal via conda install -c conda-forge gdal==3.5.2, or
      • install all Raster Vision plugins except rastervision_gdal_vsi:
        pip install \
            rastervision_pipeline==0.20 \
            rastervision_aws_s3==0.20 \
            rastervision_aws_batch==0.20 \
            rastervision_core==0.20 \
            rastervision_pytorch_learner==0.20 \
            rastervision_pytorch_backend==0.20
        

    Docker image

    docker pull quay.io/azavea/raster-vision:pytorch-0.20
    
  • v0.13.1 (Mar 26, 2021)

    Pypi

    pip install rastervision==0.13.1
    

    https://pypi.org/project/rastervision/0.13.1/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.13.1
    
  • v0.13 (Mar 8, 2021)

    Pypi

    pip install rastervision==0.13
    

    https://pypi.org/project/rastervision/0.13/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.13
    
  • v0.12.1 (Jan 28, 2021)

    Pypi

    pip install rastervision==0.12.1
    

    https://pypi.org/project/rastervision/0.12.1/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.12.1
    
  • v0.12.0 (Jul 3, 2020)

    Pypi

    pip install rastervision==0.12.0
    

    https://pypi.org/project/rastervision/0.12.0/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.12
    
  • v0.11.0 (Jun 19, 2020)

    Pypi

    pip install rastervision==0.11.0
    

    https://pypi.org/project/rastervision/0.11.0/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.11
    docker pull quay.io/azavea/raster-vision:tf-cpu-0.11
    docker pull quay.io/azavea/raster-vision:tf-gpu-0.11
    

    Examples

    https://github.com/azavea/raster-vision-examples/releases/tag/v0.11.0

  • v0.10.0 (Oct 2, 2019)

    Pypi

    pip install rastervision==0.10.0
    

    https://pypi.org/project/rastervision/0.10.0/

    Quay.io

    docker pull quay.io/azavea/raster-vision:pytorch-0.10
    docker pull quay.io/azavea/raster-vision:tf-cpu-0.10
    docker pull quay.io/azavea/raster-vision:tf-gpu-0.10
    

    Examples

    https://github.com/azavea/raster-vision-examples/releases/tag/v0.10.0

  • v0.9.0 (Jun 12, 2019)

    Pypi

    pip install rastervision==0.9.0
    

    https://pypi.org/project/rastervision/0.9.0/

    Quay.io

    CPU

    docker pull quay.io/azavea/raster-vision:cpu-0.9.0
    

    GPU

    docker pull quay.io/azavea/raster-vision:gpu-0.9.0
    

    Examples

    https://github.com/azavea/raster-vision-examples/releases/tag/v0.9.0

  • v0.8.1 (Oct 24, 2018)

    Pypi

    pip install rastervision==0.8.1
    

    https://pypi.org/project/rastervision/0.8.1/

    Quay.io

    CPU

    docker pull quay.io/azavea/raster-vision:cpu-0.8.1
    

    GPU

    docker pull quay.io/azavea/raster-vision:gpu-0.8.1
    

    QGIS Plugin and Examples Repository were not updated for this release.

  • v0.8.0 (Oct 20, 2018)

    The first official release of Raster Vision.

    Pypi

    pip install rastervision==0.8.0
    

    https://pypi.org/project/rastervision/0.8.0/

    Quay.io

    CPU

    docker pull quay.io/azavea/raster-vision:cpu-0.8.0
    

    GPU

    docker pull quay.io/azavea/raster-vision:gpu-0.8.0
    

    QGIS Plugin

    https://github.com/azavea/raster-vision-qgis/releases/tag/v0.8.0

    Examples Repository

    https://github.com/azavea/raster-vision-examples/releases/tag/v0.8.0

  • old-object-detection (Mar 29, 2018)

Owner
Azavea
B Corporation that applies geospatial analytics, software, and research for positive civic, social, and environmental impact.
Code and model benchmarks for "SEVIR : A Storm Event Imagery Dataset for Deep Learning Applications in Radar and Satellite Meteorology"

USAF - MIT Artificial Intelligence Accelerator 46 Dec 15, 2022
Train a deep learning net with OpenStreetMap features and satellite imagery.

TrailBehind, Inc. 1.3k Nov 24, 2022
To propose and implement a multi-class classification approach to disaster assessment from the given data set of post-earthquake satellite imagery.

Kunal Wadhwa 2 Jan 5, 2022
Amazon Forest Computer Vision: Satellite Image tagging code using PyTorch / Keras with lots of PyTorch tricks

Mamy Ratsimbazafy 360 Dec 10, 2022
An algorithm that handles large-scale aerial photo co-registration, based on SURF, RANSAC and PyTorch autograd.

Luna Yue Huang 41 Oct 29, 2022
Implementation of "RaScaNet: Learning Tiny Models by Raster-Scanning Image" from CVPR 2021.

SAIT (Samsung Advanced Institute of Technology) 5 Dec 26, 2022
FactSeg: Foreground Activation Driven Small Object Semantic Segmentation in Large-Scale Remote Sensing Imagery (TGRS)

Kingdrone 43 Jan 5, 2023
[CVPR2021] UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles

null 129 Jan 4, 2023
Building Ellee — A GPT-3 and Computer Vision Powered Talking Robotic Teddy Bear With Human Level Conversation Intelligence

Using an object detection and facial recognition system built on MobileNetSSDV2 and Dlib and running on an NVIDIA Jetson Nano, a GPT-3 model, Google Speech Recognition, Amazon Polly and servo motors, I built Ellee - a robotic teddy bear who can move her head and converse naturally.

null 24 Oct 26, 2022
A large dataset of 100k Google Satellite and matching Map images, resembling pix2pix's Google Maps dataset.

null 34 Dec 28, 2022
Open source Python module for computer vision

PCV is a pure Python library for computer vision based on the book "Programming Computer Vision with Python" by Jan Erik Solem.

Jan Erik Solem 1.9k Jan 6, 2023
A simple, high level, easy-to-use open source Computer Vision library for Python.

Nurettin Sinanoğlu 2 Mar 4, 2022
MediaPipe is an open-source framework from Google for building multimodal applied ML pipelines

MediaPipe is an open-source framework from Google for building multimodal (eg. video, audio, any time series data), cross-platform (i.e. Android, iOS, web, edge devices) applied ML pipelines. It is performance-optimized with end-to-end on-device inference in mind.

Bhavishya Pandit 3 Sep 30, 2022
FFCV: Fast Forward Computer Vision (and other ML workloads!)

Fast Forward Computer Vision: train models at a fraction of the cost with accelerated data loading.

FFCV 2.3k Jan 3, 2023
Open Source Differentiable Computer Vision Library for PyTorch

Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems.

kornia 7.6k Jan 4, 2023
Deep Learning Datasets Maker is a QGIS plugin to make datasets creation easier for raster and vector data.

deepbands 25 Dec 15, 2022
ManiSkill-Learn is a framework for training agents on SAPIEN Open-Source Manipulation Skill Challenge (ManiSkill Challenge), a large-scale learning-from-demonstrations benchmark for object manipulation.

Hao Su's Lab, UCSD 48 Dec 30, 2022