ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation

Overview

This repository contains the source code of our paper, ESPNet (accepted for publication in ECCV'18).

Sample results

Check our project page for more qualitative results (videos).

Click on the sample image below to view the segmentation results on YouTube.

Structure of this repository

This repository is organized as:

  • train - This directory contains the source code for training the ESPNet-C and ESPNet models.
  • test - This directory contains the source code for evaluating our models on RGB images.
  • pretrained - This directory contains the models pre-trained on the Cityscapes dataset.
    • encoder - This directory contains the pretrained ESPNet-C models.
    • decoder - This directory contains the pretrained ESPNet models.

Performance on the Cityscapes dataset

Our ESPNet model achieves a class-wise mIOU of 60.336 and a category-wise mIOU of 82.178 on the Cityscapes test set and runs at:

  • 112 fps on an NVIDIA TitanX (30 fps faster than ENet)
  • 9 fps on a Jetson TX2
  • With the same number of parameters as ENet, our model is 2% more accurate

Performance on the CamVid dataset

Our model achieves an mIOU of 55.64 on the CamVid test set. We used the dataset splits (train/val/test) provided here and trained the models at a resolution of 480x360. For a comparison with other models, see the SegNet paper.

Note: We did not use the 3.5K dataset for training that was used in the SegNet paper.

Model    mIOU    Class avg.
ENet     51.3    68.3
SegNet   55.6    65.2
ESPNet   55.64   68.30

Pre-requisites

To run this code, you need the following libraries:

  • OpenCV - We tested our code with version > 3.0.
  • PyTorch - We tested our code with v0.3.0.
  • Python - We tested our code with Python 3. If you are using Python 2, please feel free to make the necessary changes to the code.

We recommend using Anaconda. We have tested our code on Ubuntu 16.04.
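
As a quick sanity check, the installed versions of the main dependencies can be printed with a short snippet like the one below; this is only a convenience sketch and is not part of the repository.

import sys

import cv2
import torch

# Print the versions of the tested dependencies; compare against the versions listed above.
print('Python :', sys.version.split()[0])
print('OpenCV :', cv2.__version__)
print('PyTorch:', torch.__version__)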

Citation

If ESPNet is useful for your research, please cite our paper:

@inproceedings{mehta2018espnet,
  title={ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation},
  author={Mehta, Sachin and Rastegari, Mohammad and Caspi, Anat and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={ECCV},
  year={2018}
}

FAQs

Assertion error with class labels (t >= 0 && t < n_classes).

If you are getting an assertion error related to class labels, please check the range of class labels present in your label images. You can do this as follows:

import cv2
import numpy as np

# Read the label image as a single-channel image and list the unique label values it contains.
labelImg = cv2.imread('<label_filename.png>', 0)
unique_val_arr = np.unique(labelImg)
print(unique_val_arr)

The values inside unique_val_arr should lie between 0 and the total number of classes in the dataset. If this is not the case, pre-process your label images. For example, if a label image contains 255 as a value, you can ignore those pixels by mapping them to an undefined or background class as:

labelImg[labelImg == 255] = <undefined class id>
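
For convenience, the same remapping can be applied to a whole directory of label images. The sketch below is a minimal example; the directory layout and the id used for the undefined/background class are assumptions that should be adapted to your dataset.

import glob

import cv2
import numpy as np

label_dir = './labels'   # assumed directory containing the *.png label images
undefined_class_id = 19  # assumed id of the undefined/background class

for label_path in glob.glob(label_dir + '/*.png'):
    labelImg = cv2.imread(label_path, 0)            # single-channel label map
    labelImg[labelImg == 255] = undefined_class_id  # remap the 255 "ignore" value
    cv2.imwrite(label_path, labelImg)               # overwrite with the cleaned labels
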
Comments
  • how about the performance?

    @sacmehta, thanks very much for your work. I wonder about the performance (mIOU) of the ESPNet-C and ESPNet models on the Cityscapes validation set. I ran your commands, and the mIOU is only 41%, which is much lower than the performance reported in the paper. Thanks very much!

    opened by wldeephi 39
  • Cityscape Dataset Error

    Hello, I have tried to run your code but I have several problems:

    1. Which Cityscapes dataset did you download? I downloaded leftImg8bit_trainvaltest.zip (11GB) and its ground truth gtFine_trainvaltest.zip (241MB) from this link.
    2. I got an error and realized that in your train.txt and val.txt the GT paths must be changed from labelTrainIds to labelIds.
    3. After I changed that, I got an error saying "Some problem with labels. Please check." I found that the dataset has more classes. In your paper, did you use only 20 classes? Could you share a link to the Cityscapes dataset you downloaded for your paper? (A label-conversion sketch follows this comment.)
    Labels can take value between 0 and number of classes.
    Some problem with labels. Please check.
    Label Image ID: ./city/gtFine/train/zurich/zurich_000104_000019_gtFine_labelIds.png
    Self class 20
    Max Val 26
    Min_Val 0
    Labels can take value between 0 and number of classes.
    Some problem with labels. Please check.
    Label Image ID: ./city/gtFine/train/zurich/zurich_000098_000019_gtFine_labelIds.png
    Self class 20
    Max Val 33
    Min_Val 1
    
    opened by herleeyandi 19
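
    The out-of-range values reported above come from using the raw gtFine *_labelIds.png files, which contain the full set of Cityscapes label ids (0-33), whereas a 20-class setup expects train ids (0-19 plus an ignore value). A minimal conversion sketch is shown below; it assumes the standard id-to-trainId mapping from the official cityscapesScripts label table, and the output file naming is an assumption.

    import glob

    import cv2
    import numpy as np

    # Standard Cityscapes id -> trainId mapping; ids not listed here are treated as "ignore".
    ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7,
                     21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14,
                     28: 15, 31: 16, 32: 17, 33: 18}
    IGNORE_ID = 255  # remap this afterwards to your undefined/background class (see the FAQ above)

    for label_path in glob.glob('./city/gtFine/train/*/*_gtFine_labelIds.png'):
        labelImg = cv2.imread(label_path, 0)
        trainIds = np.full_like(labelImg, IGNORE_ID)
        for label_id, train_id in ID_TO_TRAINID.items():
            trainIds[labelImg == label_id] = train_id
        cv2.imwrite(label_path.replace('labelIds', 'labelTrainIds'), trainIds)
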
  • The inference speed on Jetson TX2

    Hello @sacmehta, I ran ESPNet on a Jetson TX2 with JetPack SDK 4.1.1 and PyTorch 0.4.0. I find that when the input image is 360x640, the inference time is about 0.112 s, which means the FPS is less than 10 (I am sure this excludes image loading and writing time). In your paper, the inference speed is more than 16 FPS for 360x640 images. Can you give me more details about this? Besides, I used the ERFNet code to measure the inference time of ESPNet: https://github.com/Eromera/erfnet_pytorch/blob/master/eval/eval_forwardTime.py

    opened by MrLinNing 13
  • ESPNet training

    Hello, thank you for your impressive work! I know the training proceeds in two steps. When I set the variable 'decoder' to 'True', I get the following error:

    RuntimeError: input and target batch or spatial sizes don't match: target [8 x 96 x 192], input [8 x 20 x 768 x 1536] at /opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:23

    Please help me resolve this. Thank you very much!

    opened by pkuqgg 12
  • Runtime error pytorch 0.3.1,python3

    /opt/conda/conda-bld/pytorch_1518243271935/work/torch/lib/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [9,0,0], thread: [704,0,0] Assertion t >= 0 && t < n_classes failed.
    (the same assertion failure is repeated for many other threads)

    Traceback (most recent call last):
      File "main.py", line 409, in <module>
        trainValidateSegmentation(parser.parse_args())
      File "main.py", line 335, in trainValidateSegmentation
        train(args, trainLoader_scale1, model, criteria, optimizer, epoch)
      File "main.py", line 105, in train
        loss.backward()
      File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
        variables, grad_variables, retain_graph)
    RuntimeError: CUDNN_STATUS_INTERNAL_ERROR

    This happens when I train the model with python main.py --scaleIn 8.

    opened by lxtGH 11
  •  how to convert the Cityscapes dataset to 19 categories.

    Hi, I am new to the segmentation field, and I have a problem with how to convert the Cityscapes dataset to 19 categories. Looking forward to your reply. Thank you!

    opened by TianMingChen 9
  • How to test data on Jetson TX2?

    Hi, in your paper you mention that you measured memory efficiency and sensitivity to GPU frequency on a Jetson TX2. I'm quite new to the TX2, and I wonder how you did that. By the way, for power consumption, did you connect an external power meter to the TX2, or did you use some built-in commands?

    opened by JingliangGao 9
  • Some problem with labels. Please check.

    Labels can take value between 0 and number of classes. Some problem with labels. Please check. unique_values=[ 0 1 2 4 5 7 8 10 11 13 14 255]

    Is it important?

    opened by engineer1109 8
  • Evaluate PASCAL VOC 2012 test set

    Hi! How do you evaluate the VOC 2012 test set? How did you submit the segmentation results for the VOC 2012 test set to the official PASCAL VOC website? Looking forward to your reply. Thanks!

    opened by InstantWindy 7
  • RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'

    I run the code with python3 main.py --scaleIn 8 --p 2 --q 8 --onGPU True and get:

    Total network parameters: 344841
    /home/hh/.local/lib/python3.5/site-packages/torch/nn/modules/loss.py:206: UserWarning: NLLLoss2d has been deprecated. Please use NLLLoss instead as a drop-in replacement and see http://pytorch.org/docs/master/nn.html#torch.nn.NLLLoss for more details.
      warnings.warn("NLLLoss2d has been deprecated. "
    Data statistics
    [103.45756 101.83934 101.728714] [69.51785 67.936035 64.71613 ] [4.2652993 1.5139731]
    Learning rate: 0.0005
    Traceback (most recent call last):
      File "main.py", line 408, in <module>
        trainValidateSegmentation(parser.parse_args())
      File "main.py", line 334, in trainValidateSegmentation
        train(args, trainLoader_scale1, model, criteria, optimizer, epoch)
      File "main.py", line 97, in train
        output = model(input_var)
      File "/home/hh/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/hh/programs/ESPNet_hh/train/Model.py", line 286, in forward
        output0 = self.level1(input)
      File "/home/hh/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/hh/programs/ESPNet_hh/train/Model.py", line 33, in forward
        output = self.conv(input)
      File "/home/hh/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/hh/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 301, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'

    I followed your code, and there is only one class to segment in my dataset. What is causing this problem, and how can I fix it?

    opened by berylyellow 7
  • Inference speed measurement

    Hello @sacmehta, thank you for your impressive work. However, I don't know how to measure the inference speed of the network properly. I created an input of size [1, 3, 512, 1024] and tried to measure only the model execution time, excluding the first iteration. When I used torch.cuda.synchronize() or cudnn.benchmark=True, as in ERFNet's eval_forwardTime.py, the speed was not even close to 112 FPS. Can you share the code that you used to measure the inference speed? (A rough timing sketch follows this comment.)

    P.S.: I use Python 3.7, PyTorch 0.4.1, CUDA 9.0, cuDNN 7.1, and a Titan X GPU.

    opened by Jason93K 6
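
    Not the authors' measurement code, but a common way to time only the forward pass in PyTorch is to synchronize the GPU around the timed loop and discard a few warm-up iterations, roughly as sketched below. The stand-in module, input size, and iteration counts are placeholders.

    import time

    import torch

    # Stand-in module for illustration; replace it with the actual ESPNet / ESPNet-C model.
    model = torch.nn.Conv2d(3, 20, kernel_size=3, padding=1).cuda().eval()
    x = torch.randn(1, 3, 512, 1024).cuda()

    with torch.no_grad():
        for _ in range(10):        # warm-up iterations, not timed
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(100):       # timed iterations
            model(x)
        torch.cuda.synchronize()   # wait for all GPU work before stopping the clock
        elapsed = (time.time() - start) / 100
    print('average forward time: %.4f s (%.1f fps)' % (elapsed, 1.0 / elapsed))
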
  • Inverse class probability weights

    ** Also: if class ids range from 0 to 19 (20 classes), it would be hist = np.histogram(label_img, bins=self.classes, range=[0, self.classes - 1])[0] **

    I think there are issues with the calculation of the class weights.

    In train/loadData.py, I don't think "hist = np.histogram(label_img, self.classes)" has the desired result. In the absence of range, np.histogram will use the min and max values of the array...

    Should it not be "hist = np.delete(np.histogram(label_img, bins=self.classes + 1, range=[0, self.classes])[0], [0])"? (i.e. for 20 classes, the range is [0, 20] with 21 bins, and bin "0" gets discarded)

    I'd be glad to understand why I'm wrong otherwise :)

    Nicolas.

    opened by cyclonico 0
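
    For context, the ENet-style inverse class-probability weighting discussed above can be written with an explicit histogram range so that out-of-range label values (e.g. 255) are simply dropped from the counts. The sketch below only illustrates that idea; it is not the repository's exact loadData.py code, and the normalization constant of 1.10 is an assumption.

    import numpy as np

    def compute_class_weights(label_imgs, num_classes, norm_val=1.10):
        """ENet-style inverse class-probability weights: w_c = 1 / ln(norm_val + p_c)."""
        hist = np.zeros(num_classes, dtype=np.float64)
        for labelImg in label_imgs:
            # An explicit range keeps label values larger than num_classes (e.g. 255) out of the counts.
            hist += np.histogram(labelImg, bins=num_classes, range=(0, num_classes))[0]
        probs = hist / hist.sum()
        return 1.0 / np.log(norm_val + probs)
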
  • Comparison of ENet and ERFNet Model

    Hello, good work! Can you please explain the ENet implementation in PyTorch? In my experience with PyTorch, the inference speed of ENet is even slower than ERFNet's, and I know that shouldn't be the case.

    opened by Abdul-Nasir11 0
  • add ignored_Id

    Thanks for @sacmehta's excellent work. I have made a few changes to make this project easier to use (a minimal ignore_index example follows this comment):

    main.py:

    1. Add an ignored_Id argument to indicate which id (in the label files) should be ignored.

    2. The default value of ignored_Id is 255 (the value used for ignored regions in the Cityscapes trainId labels).

    3. Use torch.nn.CrossEntropyLoss in place of the user-defined cross-entropy loss, and pass ignored_Id to its ignore_index parameter.

    loadData.py:

    4. When checking the maximum value in a label file, compare it with ignored_Id first, then print a message noting that the ignored_Id in the label file has been ignored.

    These changes have been verified on Ubuntu 16.04 with CUDA 9.0, cuDNN 7.0, and PyTorch 1.0.0.post2. Thanks for your time; I look forward to your reply.

    opened by xufeifeiWHU 0
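
    The key mechanism behind the change above is standard PyTorch: torch.nn.CrossEntropyLoss accepts an ignore_index argument, so pixels carrying the ignored id contribute nothing to the loss or its gradients. A minimal, self-contained illustration follows; the tensor shapes and the ignored id of 255 are assumptions.

    import torch
    import torch.nn as nn

    ignored_id = 255                             # e.g. the Cityscapes trainId for ignored regions
    criterion = nn.CrossEntropyLoss(ignore_index=ignored_id)

    logits = torch.randn(2, 20, 64, 128)         # (batch, classes, H, W) network output
    labels = torch.randint(0, 20, (2, 64, 128))  # ground-truth train ids
    labels[:, :10, :] = ignored_id               # these pixels are skipped by the loss
    print(criterion(logits, labels).item())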