Overview: Depth is the hallmark of DNNs. But more depth means more sequential computation and higher latency. This raises the question: is it possible to build high-performing "non-deep" neural networks? We show that it is. We show, for the first time, that a network with a depth of just 12 can achieve top-1 accuracy over 80% on ImageNet, 96% on CIFAR10, and 81% on CIFAR100. We also show that a network with a low-depth (12) backbone can achieve an AP of 48% on MS-COCO.
If you find our work useful, please consider citing it:
@article{goyal2021nondeep,
  title={Non-deep Networks},
  author={Goyal, Ankit and Bochkovskiy, Alexey and Deng, Jia and Koltun, Vladlen},
  journal={arXiv:2110.07641},
  year={2021}
}
I am very interested in your research. When will the code for the model be released? I saw that on October 23rd you said it would be released in 4 weeks.
I am very interested in your work and would like to study it further. I hope you can find time in your busy schedule to release the code as soon as possible. Thank you!
Hello. Thank you for your great study. I wonder what 'Shuffle' means in the fusion block of Fig. A1.
Is it a pixel shuffle layer?
Please let me know.
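For context, 'shuffle' in a fusion block usually denotes a channel shuffle (as in ShuffleNet) rather than a pixel shuffle. Below is a minimal sketch of a channel shuffle, assuming the fusion block concatenates two streams and interleaves their channels before a grouped convolution; the group count and placement are assumptions, not the official ParNet code:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (ShuffleNet-style channel shuffle).

    NOTE: illustrative sketch only; not the authors' implementation.
    """
    n, c, h, w = x.shape
    # Reshape to (n, groups, c // groups, h, w), swap the two group axes,
    # then flatten back so channels from different groups are interleaved.
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# Example: shuffling the concatenation of two 32-channel streams.
a, b = torch.randn(1, 32, 14, 14), torch.randn(1, 32, 14, 14)
fused = channel_shuffle(torch.cat([a, b], dim=1), groups=2)
```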
Hi. Figure 2b shows a single 1x1 conv in one branch of SSE. How do you match the channels of the 1x1 conv's output with the channels of the input carried by the shortcut? If I set the output channels of the 1x1 conv equal to the input channels, the outputs of the RepVGG block and the SSE branch no longer match.
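For reference, here is a minimal sketch of one plausible reading of the RepVGG-SSE block, assuming in_channels == out_channels so that the gated skip connection is well defined; when the channel count changes, the skip path would additionally need its own projection. Module names and structure here are assumptions, not the paper's official code:

```python
import torch
import torch.nn as nn

class RepVGGSSE(nn.Module):
    """Illustrative RepVGG-style block with a Skip-Squeeze-Excitation branch.

    Assumes in_channels == out_channels so the gated skip can be summed with
    the conv branches. Sketch only; not the official ParNet implementation.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.conv1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        # SSE branch: pool to 1x1, project with a 1x1 conv, gate with sigmoid.
        self.sse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid())
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gated skip (SSE) plus the two RepVGG conv branches.
        return self.act(self.conv3(x) + self.conv1(x) + x * self.sse(x))
```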
Hello, my friend, thank you for your great work! I have tested the code at https://github.com/Pritam-N/ParNet by Pritam-N and replaced the ResNet in my model with your ParNet, but the actual runtime is much slower than the paper reports. My block sizes are [64, 128, 256, 512, 2048], and forward() takes more than 5 s on average while ResNet takes 0.02 s on my device. I timed every line in forward() and found the encoder is the main cost. I then added time.perf_counter() calls inside the encoder and found that self.stream2_fusion and self.stream3_fusion take the most time. Do you know why?
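One possible explanation (an assumption, since the device is not specified): CUDA kernels launch asynchronously, so per-line time.perf_counter() readings attribute cost to whichever op first forces a synchronization, often a fusion step, rather than to the ops that actually do the work. A minimal sketch of timing that synchronizes before reading the clock:

```python
import time
import torch

def timed_forward(model: torch.nn.Module, x: torch.Tensor) -> float:
    """Return wall-clock seconds for one forward pass, with GPU sync.

    Without torch.cuda.synchronize(), asynchronous kernel launches make
    per-line time.perf_counter() readings misattribute the compute cost.
    """
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        model(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return time.perf_counter() - start
```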
What is your model architecture for CIFAR-100? I just changed the first two downsampling modules of the ImageNet ParNet from the paper, but the accuracy is lower. Also, how do you set LR, MILESTONES, and NUM_EPOCH to reach high accuracy?
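For reference only, since the paper's exact CIFAR recipe is not given in this thread: the LR/MILESTONES/NUM_EPOCH names suggest a step-decay schedule. A minimal sketch of how such a schedule is typically wired up, with placeholder values that are assumptions rather than the authors' settings:

```python
import torch

# Placeholder hyperparameters -- assumptions for illustration,
# NOT the paper's CIFAR-100 recipe.
LR, MILESTONES, NUM_EPOCH = 0.1, [80, 120], 160

model = torch.nn.Linear(10, 10)  # stand-in for the actual ParNet model
optimizer = torch.optim.SGD(model.parameters(), lr=LR,
                            momentum=0.9, weight_decay=5e-4)
# Decay the learning rate by 10x at each milestone epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=MILESTONES, gamma=0.1)

for epoch in range(NUM_EPOCH):
    # ... one training epoch over CIFAR-100 here ...
    scheduler.step()
```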
Releases
v.0.1.0 (Dec 24, 2021)
Preliminary version containing code for the ImageNet dataset.