Evaluation metrics¶
Metrics are calculated for each round of training.
When the session is complete, you can see a set of metrics for all rounds of training, as well as metrics for the final model.
Retrieve Metrics for a Session¶
Use the SessionMetrics class of the API to store and retrieve metrics for a session. You can retrieve the model performance metrics as a dictionary (Dict), or plot them. See the API Class Reference for details.
Typical usage example:
# connect is provided by the integrate.ai SDK
from integrate_ai_sdk.api import connect

client = connect("token")
already_trained_session_id = "<sessionID>"
session = client.fl_session(already_trained_session_id)
# retrieve the metrics for the session as a dictionary
metrics = session.metrics.as_dict()
1. Authenticate and connect to the integrate.ai client.
2. Provide the session ID that you want to retrieve metrics for as the `already_trained_session_id`.
3. Call the SessionMetrics class (see the sketch below for inspecting the result).
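For illustration, the following sketch prints every metric in the returned dictionary. This is a minimal example continuing the snippet above; the exact keys present depend on the model type and SDK version.

```python
# Minimal sketch: inspect all metrics reported for the session.
# The keys in the dictionary depend on the model type and SDK version.
metrics = session.metrics.as_dict()

for name, value in metrics.items():
    print(f"{name}: {value}")
```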
Available Metrics¶
The Federated Loss value for the latest round of model training is reported as the `global_model_federated_loss` (float) attribute of a SessionMetrics instance.
This is a model-level metric reported for each round of training. It is a weighted average of the loss across clients, weighted by the number of samples contributed by each silo.
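As an illustration of the weighting, the sketch below computes a federated loss from per-client losses and sample counts. The helper function and the numbers are hypothetical; only the weighting scheme itself comes from the description above.

```python
# Hypothetical illustration of the federated loss weighting:
# a weighted average of per-client losses, weighted by samples per silo.
def weighted_federated_loss(client_losses, client_sample_counts):
    total = sum(client_sample_counts)
    return sum(loss * n for loss, n in zip(client_losses, client_sample_counts)) / total

# Two silos: 1,000 samples with loss 0.52 and 3,000 samples with loss 0.48
print(weighted_federated_loss([0.52, 0.48], [1000, 3000]))  # 0.49
```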
See the metrics by machine learning task in the following table:
| Classification and Logistic | Regression and Normal, Tweedie (power = 0) | Poisson, Gamma, Tweedie (power > 0), Inverse Gaussian |
|---|---|---|