(Apologies for creating multiple issues in a row -- it seemed cleaner to keep them separate.)
I downloaded the data from DeepOBS_Baselines and attempted to run `example_analyze_pytorch.py`. Unfortunately, DeepOBS looks for keys in the JSON files that don't exist:
```
$ python example_analyze_pytorch.py
/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
  default_metric), RuntimeWarning)
/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
  .format(optimizer_path, testproblem_name), RuntimeWarning)
{'Performance': 127.96759578159877, 'Speed': 'N.A.', 'Hyperparameters': {'lr': 0.01, 'momentum': 0.99, 'nesterov': False}, 'Training Parameters': {}}
/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
  default_metric), RuntimeWarning)
/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
  .format(optimizer_path, testproblem_name), RuntimeWarning)
/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:150: RuntimeWarning: Cannot fallback to metric valid_losses for optimizer MomentumOptimizer on testproblem quadratic_deep. Will now fallback to metric test_losses
  testproblem_name), RuntimeWarning)
/users/user/miniconda3/lib/python3.7/site-packages/numpy/core/_methods.py:193: RuntimeWarning: invalid value encountered in subtract
  x = asanyarray(arr - arrmean)
/users/user/miniconda3/lib/python3.7/site-packages/numpy/lib/function_base.py:3949: RuntimeWarning: invalid value encountered in multiply
  x2 = take(ap, indices_above, axis=axis) * weights_above
Traceback (most recent call last):
  File "example_analyze_pytorch.py", line 17, in <module>
    analyzer.plot_optimizer_performance(result_path, reference_path=base + '/deepobs/baselines/quadratic_deep/MomentumOptimizer')
  File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 514, in plot_optimizer_performance
    which=which)
  File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 462, in _plot_optimizer_performance
    optimizer_path, mode, metric)
  File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 206, in create_setting_analyzer_ranking
    setting_analyzers = _get_all_setting_analyzer(optimizer_path)
  File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 184, in _get_all_setting_analyzer
    setting_analyzers.append(SettingAnalyzer(sett_path))
  File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 260, in __init__
    self.aggregate = aggregate_runs(path)
  File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 101, in aggregate_runs
    aggregate['optimizer_hyperparams'] = json_data['optimizer_hyperparams']
KeyError: 'optimizer_hyperparams'
```
One of the JSON files in question looks like this (data points snipped for brevity):
```json
{
  "train_losses": [353.9337594168527, 347.5994306291853, 331.35902622767856, 307.2468915666853, ... 97.28871154785156, 91.45470428466797, 96.45774841308594, 86.27237701416016],
  "optimizer": "MomentumOptimizer",
  "testproblem": "quadratic_deep",
  "weight_decay": null,
  "batch_size": 128,
  "num_epochs": 100,
  "learning_rate": 1e-05,
  "lr_sched_epochs": null,
  "lr_sched_factors": null,
  "random_seed": 42,
  "train_log_interval": 1,
  "hyperparams": {"momentum": 0.99, "use_nesterov": false}
}
```
The mismatch seems obvious: the file uses the key `hyperparams`, whereas the analyzer expects `optimizer_hyperparams`. This occurs only in some of the JSON files.
Edit: Having fixed this, there is a further `KeyError` on `training_params`. Perhaps these files were generated with a different version of the package.
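As a stopgap until the baselines are regenerated, the affected files can be normalised before running the analyzer. The sketch below is a hypothetical workaround, not part of DeepOBS: the helper name `normalize_baseline_jsons` is my own, and it assumes the only differences are the two missing keys observed above (`hyperparams` needing to become `optimizer_hyperparams`, and `training_params` being absent).

```python
# Hypothetical workaround: rewrite old-format baseline JSON files in place so
# the DeepOBS analyzer finds the keys it expects. Assumes the key rename and
# the missing 'training_params' dict are the only format differences.
import glob
import json
import os

def normalize_baseline_jsons(baselines_root):
    """Rename 'hyperparams' -> 'optimizer_hyperparams' and add an empty
    'training_params' dict to every result JSON under baselines_root."""
    pattern = os.path.join(baselines_root, "**", "*.json")
    for path in glob.glob(pattern, recursive=True):
        with open(path) as f:
            data = json.load(f)
        changed = False
        if "optimizer_hyperparams" not in data and "hyperparams" in data:
            # Old files store the optimizer settings under 'hyperparams'.
            data["optimizer_hyperparams"] = data.pop("hyperparams")
            changed = True
        if "training_params" not in data:
            # The analyzer also looks this key up after the first fix.
            data["training_params"] = {}
            changed = True
        if changed:
            with open(path, "w") as f:
                json.dump(data, f)

# Path taken from the warnings above; adjust to wherever the baselines live.
normalize_baseline_jsons("/scratch/local/ssd/user/data/deepobs")
```

With the files normalised this way, `example_analyze_pytorch.py` gets past the `KeyError` for me, though a proper fix in `aggregate_runs` (accepting both key names) would obviously be preferable.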