This is interesting work. I have some questions about your paper.
At inference time, my understanding is that you first build the edge-to-PSNR table and then use it during inference. But how do you decide which subnet width each patch gets? The other methods in Figure 7 are clear to me, but I can't tell how ARM-FSRCNN chooses its width, which confuses me.
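To make my understanding concrete, here is a rough sketch of what I imagine the per-patch width selection looks like (all names here, `edge_score`, `psnr_table`, `flops_per_width`, `tradeoff`, are my own guesses, not from your code). Please correct me if ARM-FSRCNN actually selects the width differently:

```python
import numpy as np

def edge_score(patch: np.ndarray) -> float:
    """Mean gradient magnitude of a patch (my guess at the 'edge' measure)."""
    gy, gx = np.gradient(patch.astype(np.float32))
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def choose_width(patch, edge_bins, psnr_table, flops_per_width, tradeoff=0.0):
    """Pick a subnet width for one patch from a precomputed edge-to-PSNR table.

    edge_bins:       1-D array of edge-score bin edges
    psnr_table:      dict {width: array of expected PSNR, one entry per edge bin}
    flops_per_width: dict {width: FLOPs of the subnet at that width}
    tradeoff:        weight on the FLOPs penalty (0 = always pick the best-PSNR width)
    """
    b = int(np.digitize(edge_score(patch), edge_bins)) - 1
    b = max(0, min(b, len(edge_bins) - 2))          # clamp to a valid bin index
    scores = {w: psnr_table[w][b] - tradeoff * flops_per_width[w]
              for w in psnr_table}
    return max(scores, key=scores.get)              # width with the best score
```

Is it something like this, where a tradeoff between the looked-up PSNR and each subnet's FLOPs decides the width, or is it something else entirely (e.g. a hard threshold on the edge score)?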
In Tables 1 and 2, do ARM-L/ARM-M/ARM-S mean a fixed width regardless of the input patch?
If ARM-L/ARM-M/ARM-S just fix the width, maybe you should also compare against ClassSR using your width-selection policy instead of a fixed width?
By the way, in Table 1 the entry at row "Module FSRCNN", column "FLOPs" is 0%, which may be an error. :)
In Fig. 4(a), different models will give different PSNR lookup results. How did you obtain the Spearman correlation coefficient (0.85) shown in Fig. 4(a)? Which model did you use to generate the PSNR values? (A more complex model would presumably give more precise PSNR values.)
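To make the question concrete, I assume the 0.85 in Fig. 4(a) was computed roughly like this (the function name and the toy data below are mine, only to show which quantities I mean), but I don't know which SR model produced the PSNR values:

```python
import numpy as np
from scipy.stats import spearmanr

def edge_psnr_spearman(edge_scores, psnr_values):
    """Rank correlation between per-patch edge scores and the PSNR each patch
    reaches when super-resolved by one fixed model (my reading of Fig. 4(a))."""
    rho, p = spearmanr(edge_scores, psnr_values)
    return rho, p

# Toy data only, to show the call: stronger edges tend to give lower PSNR.
rng = np.random.default_rng(0)
edges = rng.uniform(0.0, 1.0, size=1000)
psnrs = 40.0 - 10.0 * edges + rng.normal(0.0, 1.0, size=1000)
print(edge_psnr_spearman(edges, psnrs))
```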