Reporting
During task execution, some events must be reported to Supervisely, while others may optionally be reported.
For reporting, the Supervisely logger (based on python-json-logger) is used, so do not replace the logger's formatters or handlers.
The following events should be reported:
- when an inference task is finished — use the report_inference_finished function;
- when an import task is finished — use the report_import_finished function;
- when a model checkpoint is stored — just use the TrainCheckpoints class, it will do the work.
The following things may be reported:
- progress of some task or subtask — see Progress;
- metrics calculated during training or validation (currently for training tasks only) — see Metrics reporting.
report_inference_finished
def report_inference_finished():
Reports that the inference task has finished successfully. It must be called after inference, when all input data has been processed.
report_import_finished
def report_import_finished():
Reports that the import task has finished successfully. It must be called after import, when all input data has been processed.
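Both functions take no arguments, so usage amounts to a single call at the very end of the task. The sketch below is a stdlib-only stand-in (the real functions live in the Supervisely SDK and emit structured JSON log events; the logger name and event payload here are assumptions) showing where the call belongs:

```python
import json
import logging

# Stand-in for the Supervisely JSON logger (the real SDK configures one
# based on python-json-logger; its formatters/handlers must not be replaced).
logger = logging.getLogger("sly-task-demo")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

def report_import_finished():
    # Hypothetical payload; the real SDK defines its own event format.
    event = {"event_type": "IMPORT_FINISHED", "message": "import finished"}
    logger.info(json.dumps(event))
    return event

# Usage: process ALL input data first, then report exactly once.
for item in ["img_001.png", "img_002.png"]:
    pass  # import each input item here
report_import_finished()
```

report_inference_finished is used the same way, at the end of the inference loop instead of the import loop.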
Metrics reporting
Pass the values you want to collect as metrics, e.g. loss, IoU, accuracy, lunar month. Please note that currently only the following metrics will be displayed on charts: loss, accuracy, dice.
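To illustrate: the metrics dictionary may contain any keys, but only the three listed above end up on charts. The extra "iou" key below is an illustrative assumption; it would be collected but not plotted:

```python
# A metrics dictionary as passed to the reporting functions.
metrics = {"loss": 0.42, "accuracy": 0.91, "dice": 0.88, "iou": 0.79}

# Only these keys are currently rendered as charts; other keys are
# still collected but not displayed.
charted = {k: v for k, v in metrics.items() if k in ("loss", "accuracy", "dice")}
print(charted)
```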
report_metrics_training
def report_metrics_training(epoch, metrics):
Reports metrics from NN training.
epoch — current epoch (float).
metrics — a dictionary like {metric_name: metric_value_float}.
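A typical call site is inside the training loop; since epoch is a float, metrics can be reported at sub-epoch granularity. The stand-in below only mirrors the documented signature (the real function ships with the Supervisely SDK; the logger name and event payload are assumptions):

```python
import json
import logging

logger = logging.getLogger("sly-train-demo")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

def report_metrics_training(epoch, metrics):
    # Mirrors the documented signature: epoch is a float,
    # metrics is {metric_name: metric_value_float}.
    event = {"type": "train", "epoch": epoch, "metrics": metrics}
    logger.info(json.dumps(event))
    return event

# Usage: report every N batches, passing a fractional epoch value.
batches_per_epoch = 100
for batch_idx in (25, 50, 75, 100):
    epoch = batch_idx / batches_per_epoch
    report_metrics_training(epoch, {"loss": 1.0 / batch_idx, "accuracy": 0.5})
```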
report_metrics_validation
def report_metrics_validation(epoch, metrics):
Reports metrics from NN validation (as part of a training task).
epoch — current epoch (float).
metrics — a dictionary like {metric_name: metric_value_float}.
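Validation metrics are reported the same way, usually at the same epoch value as the surrounding training reports so the train and validation curves align on the charts. A minimal sketch, again with a stand-in function and an assumed payload:

```python
import json
import logging

logger = logging.getLogger("sly-val-demo")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

def report_metrics_validation(epoch, metrics):
    # Same signature as report_metrics_training, but tagged as validation.
    event = {"type": "val", "epoch": epoch, "metrics": metrics}
    logger.info(json.dumps(event))
    return event

# Usage: after each validation pass, report at the current (float) epoch.
report_metrics_validation(2.0, {"loss": 0.31, "dice": 0.84})
```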