Hi,
thank you for your impressive work!
Could you please explain the required dimensionality of the outputs and targets for use with `cudnn.SpatialCrossEntropyCriterion()`, as in your Yandex repo? The SegNet model input is `[batchSize, numChannels, H, W]` and the output is `[batchSize, numClasses, H, W]`, but `cudnn.SpatialCrossEntropyCriterion()` requires the target to be of size `[batchSize, H, W]`. So it looks like the model outputs `numClasses` separate masks, one per class, while the criterion expects a single mask containing all the class labels.
I can't get past this issue while reproducing the model in a toy example.
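
For reference, here is a minimal sketch of the shapes I am working with. The dummy one-layer "model" and all tensor sizes are just placeholders for the toy example, not your actual SegNet:

```lua
require 'cunn'
require 'cudnn'

-- Assumed toy sizes, not the real SegNet configuration
local batchSize, numChannels, numClasses, H, W = 2, 3, 4, 8, 8

-- Stand-in for the segmentation network: produces [batchSize, numClasses, H, W]
local model = cudnn.SpatialConvolution(numChannels, numClasses, 3, 3, 1, 1, 1, 1):cuda()
local criterion = cudnn.SpatialCrossEntropyCriterion():cuda()

local input  = torch.randn(batchSize, numChannels, H, W):cuda()
local output = model:forward(input)  -- [batchSize, numClasses, H, W]

-- Target: one label map per sample, class indices in 1..numClasses
local target = torch.Tensor(batchSize, H, W):random(1, numClasses):cuda()

local loss = criterion:forward(output, target)
print(loss)
```

Is this the intended way to pair the per-class output maps with a single label map as the target, or am I misunderstanding the expected layout?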