Hi,
Thanks for uploading this great work. It is really exciting to hear that your self-supervised learning approach outperforms MAML on mini-ImageNet!
I'm trying to replicate the results from your paper. Following the instructions, however, I got slightly degraded accuracy.
For 1-shot,
$ python prototransfer/eval.py --dataset miniimagenet --eval_ways 5 --eval_support_shots 1 --eval_query_shots 15 --sup_finetune --ft_freeze_backbone --load_path prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar
Used device: cuda
Supervised data loader for miniimagenet:test.
Loaded checkpoint 'prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar' (epoch 1489)
Evaluating prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar...
Test loss 4.5765 and accuracy 44.41 +- 0.77
For 5-shot,
$ python prototransfer/eval.py --dataset miniimagenet --eval_ways 5 --eval_support_shots 5 --eval_query_shots 15 --sup_finetune --ft_freeze_backbone --load_path prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar
Used device: cuda
Supervised data loader for miniimagenet:test.
Loaded checkpoint 'prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar' (epoch 1489)
Evaluating prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs.pth.tar...
Test loss 1.5548 and accuracy 61.38 +- 0.74
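For comparison purposes, I assume the `+-` figure is a 95% confidence interval over test episodes; a minimal sketch of how such an interval is typically computed (hypothetical `episode_accs` input, not the repo's actual code):

```python
import numpy as np

def mean_and_ci95(episode_accs):
    """Mean accuracy and 95% confidence half-width over test episodes."""
    accs = np.asarray(episode_accs, dtype=float)
    mean = accs.mean()
    # 1.96 * standard error of the mean, normal approximation
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return mean, ci95
```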
Are these supposed to be similar to the last row of Table 3 (Testing ProtoTune w/ FT)?
Also, training stopped at epoch 1689, which seems a bit early.
$ python prototransfer/train.py --dataset miniimagenet --train_support_shots 1 --train_query_shots 3 --no_aug_support
Used device: cuda
Save path is: prototransfer/checkpoints/proto_miniimagenet_conv4_euclidean_1supp_3query_50bs
Early stopping with patience 200 epochs
Setting:
Namespace(backbone='conv4', batch_size=50, datapath='../few_data/', dataset='miniimagenet', distance='euclidean', epochs=10000, iterations=100, learn_temperature=False, load_best=False, load_last=False, load_path='', lr=0.001, lr_decay_rate=0.5, lr_decay_step=25000, merge_train_val=False, n_classes=None, n_images=None, no_aug_query=False, no_aug_support=True, num_data_workers_cpu=0, num_data_workers_cuda=8, patience=200, save=True, save_path='', self_sup_loss='proto', train_query_shots=3, train_support_shots=1)
Training ...
Epoch 1690, loss 0.0877, accuracy 0.9701: 17%|██████████▋ | 1689/10000 [2:56:48<14:30:17, 6.28s/epochs]
Early stopping at epoch 1689, because there was no improvement for 200 epochs
-------------- Best validation loss 0.0756 with accuracy 0.9760
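For context, my understanding of patience-based early stopping is that it halts once the validation loss has not improved for `patience` consecutive epochs, which would explain the stop at 1689 with patience 200. A minimal sketch of that logic (my own illustration, not the repo's implementation):

```python
def early_stop_epoch(val_losses, patience=200):
    """Return the 0-indexed epoch at which patience-based early stopping
    would trigger, or None if it never does."""
    best = float("inf")
    since_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
        # Stop once `patience` epochs pass without a new best loss
        if since_improvement >= patience:
            return epoch
    return None

# e.g. early_stop_epoch([3, 2, 1, 1, 1, 1], patience=3) -> 5
```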
Any suggestions?
Best regards,
Hwidong Na