Thanks for your great work! When I run the code with the hyperparameters provided in your paper, e.g.:
```python
# config.py
dataset = "cora"
num_owners = 3
delta = 20
num_samples = [5, 5]
batch_size = 64
latent_dim = 128
steps = 10
epochs_local = 1
lr = 0.001
weight_decay = 1e-4
hidden = 32
dropout = 0.5
gen_epochs = 20
num_pred = 5
hidden_portion = 0.278
epoch_classifier = 50
classifier_layer_sizes = [64, 32]
```
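In case it helps, this is roughly how I am varying the settings on my end while trying to close the gap. Everything here is my own sketch: the candidate values are guesses (not from the paper), and `run_fedsage` is a placeholder for the repo's actual training entry point, whose name I do not know.

```python
# Hypothetical sweep over a few of the hyperparameters above.
from itertools import product

# Subset of the config values I am keeping fixed.
base = {
    "dataset": "cora",
    "num_owners": 3,
    "epochs_local": 1,
    "gen_epochs": 20,
    "epoch_classifier": 50,
}

# Candidate values to try (my guesses, not from the paper).
grid = {
    "epochs_local": [1, 2, 5],
    "gen_epochs": [10, 20, 50],
}

def configs(base, grid):
    """Yield one config dict per combination in the grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(base)
        cfg.update(zip(keys, values))
        yield cfg

for cfg in configs(base, grid):
    # run_fedsage(**cfg)  # placeholder: call the repo's trainer here
    pass
```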
I get the following results:

```
FedSage+ end!
1/9 [==>...........................] - ETA: 3s - loss: 0.7960 - acc: 0.8281
3/9 [=========>....................] - ETA: 0s - loss: 0.7997 - acc: 0.8316
7/9 [======================>.......] - ETA: 0s - loss: 0.8128 - acc: 0.8331
9/9 [==============================] - 1s 18ms/step - loss: 0.8080 - acc: 0.8358
1/9 [==>...........................] - ETA: 3s - loss: 0.1631 - acc: 0.9688
4/9 [============>.................] - ETA: 0s - loss: 0.2067 - acc: 0.9469
7/9 [======================>.......] - ETA: 0s - loss: 0.2316 - acc: 0.9389
9/9 [==============================] - 1s 18ms/step - loss: 0.2372 - acc: 0.9366
Global model
Global Test Set Metrics:
loss: 0.2478
acc: 0.9317
```
I read this as a test accuracy of 0.8358 for FedSage+ and 0.9317 for GlobSage (please correct me if I have misunderstood), which leaves a gap to the results reported in your paper. Could you share a more detailed hyperparameter setting to reproduce the reported results (e.g., `epochs_local`, `classifier_layer_sizes`, `gen_epochs`)?