Describe your changes
The `SKLearnEvaluationLogger` decorator wraps the telemetry `log_api` functionality and generates logs for sklearn-evaluation as follows:
```python
@SKLearnEvaluationLogger.log(feature='plot')
def confusion_matrix(
        y_true,
        y_pred,
        target_names=None,
        normalize=False,
        cmap=None,
        ax=None,
        **kwargs):
    pass
```
This will generate the following log:
```json
{
  "metadata": {
    "action": "confusion_matrix",
    "feature": "plot",
    "args": {
      "target_names": "None",
      "normalize": "False",
      "cmap": "None",
      "ax": "None"
    }
  }
}
```
**Note:** since `y_true` and `y_pred` are positional arguments without default values, they are not logged.
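For illustration only, here is a minimal sketch of how a decorator could capture arguments the way the log above suggests. It is not the actual `SKLearnEvaluationLogger` implementation; the class name, the `print` call standing in for `log_api`, and the metadata layout are assumptions based on the examples in this description:

```python
import functools
import inspect


class LoggerSketch:
    """Illustrative stand-in, not the real SKLearnEvaluationLogger."""

    @classmethod
    def log(cls, feature):
        def decorator(func):
            sig = inspect.signature(func)

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                bound = sig.bind(*args, **kwargs)
                bound.apply_defaults()
                # only parameters that declare a default value are recorded,
                # which is why y_true / y_pred never show up in the log
                logged_args = {
                    name: str(value)
                    for name, value in bound.arguments.items()
                    if sig.parameters[name].default is not inspect.Parameter.empty
                }
                metadata = {'action': func.__name__,
                            'feature': feature,
                            'args': logged_args}
                print({'metadata': metadata})  # stand-in for the real telemetry call
                return func(*args, **kwargs)

            return wrapper

        return decorator


@LoggerSketch.log(feature='plot')
def confusion_matrix(y_true, y_pred, target_names=None, normalize=False,
                     cmap=None, ax=None, **kwargs):
    pass


confusion_matrix([1, 0, 1], [0, 0, 1])
# prints a dict matching the first log above: positional args are absent,
# keyword arguments appear with their (stringified) default values
```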
We can also use pre-defined flags when calling a function:
```python
return plot.confusion_matrix(self.y_true, self.y_pred, self.target_names, ax=_gen_ax())
```
which will generate the following log:
"metadata": {
"action": "confusion_matrix"
"feature": "plot",
"args": {
"target_names": "['setosa', 'versicolor', 'virginica']",
"normalize": "False",
"cmap": "None",
"ax": "AxesSubplot(0.125,0.11;0.775x0.77)"
}
},
Queries
Run queries and filter sklearn-evaluation events by the event name: `sklearn-evaluation` (a rough query sketch follows the list below).

- Break these events down by feature (`'plot'`, `'report'`, `'SQLiteTracker'`, `'NotebookCollection'`)
- Break events down by action (e.g. `'confusion_matrix'`, `'roc'`, etc.) and/or by flags (`'is_report'`)
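As a rough illustration of these breakdowns (not part of this PR: it assumes the telemetry events end up in a pandas DataFrame with `event`, `feature`, and `action` columns, which is a made-up schema for this sketch):

```python
import pandas as pd

# hypothetical events table; the real telemetry backend and schema are not defined here
events = pd.DataFrame([
    {'event': 'sklearn-evaluation', 'feature': 'plot', 'action': 'confusion_matrix'},
    {'event': 'sklearn-evaluation', 'feature': 'plot', 'action': 'roc'},
    {'event': 'sklearn-evaluation', 'feature': 'report', 'action': 'confusion_matrix'},
    {'event': 'sklearn-evaluation-error', 'feature': 'plot', 'action': 'roc'},
])

# filter sklearn-evaluation events by event name
sk_events = events[events['event'] == 'sklearn-evaluation']

# break them down by feature, then by feature and action
print(sk_events.groupby('feature').size())
print(sk_events.groupby(['feature', 'action']).size())
```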
Errors
Failing runs will be logged under the event name: `sklearn-evaluation-error`
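A minimal sketch of how a failing run could be reported under that event name (illustrative only; the real error handling lives inside `SKLearnEvaluationLogger`, and the `print` call is a stand-in for the telemetry call):

```python
import functools


def log_errors(func):
    """Illustrative wrapper: report a failure under a separate event name, then re-raise."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            print({'event': 'sklearn-evaluation-error',  # stand-in for the real telemetry call
                   'action': func.__name__,
                   'error': str(exc)})
            raise
    return wrapper


@log_errors
def failing_plot():
    raise ValueError('bad input')


try:
    failing_plot()
except ValueError:
    pass  # the error event above was emitted before the exception was re-raised
```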
Checklist before requesting a review
- [X] I have performed a self-review of my code
- [X] I have added thorough tests (when necessary).
- [ ] I have added the right documentation (when needed). Product update? If yes, write one line about this update.