Strategy Library¶
Reference guide for available training strategies.
FedAvg¶
Federated Averaging (FedAvg) strategy, based on https://arxiv.org/abs/1602.05629
Parameters¶
fraction_fit (float, optional): Fraction of clients used during training. Defaults to 0.1.
fraction_eval (float, optional): Fraction of clients used during validation. Defaults to 0.1.
min_fit_clients (int, optional): Minimum number of clients used during training. Defaults to 1.
min_eval_clients (int, optional): Minimum number of clients used during validation. Defaults to 1.
min_available_clients (int, optional): Minimum number of total clients in the system. Defaults to 1.
eval_fn (Callable[[Weights], Optional[Tuple[float, float]]], optional): Function used for validation. Defaults to None.
on_fit_config_fn (Callable[[int], Dict[str, Scalar]], optional): Function used to configure training. Defaults to None.
on_evaluate_config_fn (Callable[[int], Dict[str, Scalar]], optional): Function used to configure validation. Defaults to None.
accept_failures (bool, optional): Whether or not to accept rounds containing failures. Defaults to True.
initial_parameters (Weights, optional): Initial global model parameters.
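The following is a minimal sketch of configuring this strategy and starting a server with it. It assumes the Flower package (flwr) with the strategy exposed as fl.server.strategy.FedAvg; the fit_config callback and its contents are illustrative, not part of the library.

```python
import flwr as fl

def fit_config(rnd: int):
    # Illustrative per-round training configuration sent to each client.
    return {"epochs": 1, "batch_size": 32, "round": rnd}

strategy = fl.server.strategy.FedAvg(
    fraction_fit=0.1,             # sample 10% of connected clients for training
    fraction_eval=0.1,            # sample 10% of connected clients for validation
    min_fit_clients=2,            # never train on fewer than 2 clients
    min_eval_clients=2,           # never validate on fewer than 2 clients
    min_available_clients=10,     # wait until at least 10 clients are connected
    on_fit_config_fn=fit_config,  # configure each training round
)

fl.server.start_server(strategy=strategy)
```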
FedAvgM¶
Federated Averaging with Momentum (FedAvgM) strategy, based on https://arxiv.org/pdf/1909.06335.pdf
Parameters¶
Uses the same parameters as FedAvg, as well as the following:
initial_parameters (Weights, optional): Initial global model parameters.
server_learning_rate (float): Server-side learning rate used in server-side optimization. Defaults to 1.0, which (together with zero momentum) recovers vanilla FedAvg.
server_momentum (float): Server-side momentum factor used for FedAvgM. Defaults to 0.0.
nesterov (bool): Enables Nesterov momentum. Defaults to False.
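As a sketch under the same assumptions as the FedAvg example above, FedAvgM is constructed like FedAvg plus the server-side optimizer arguments:

```python
import flwr as fl

strategy = fl.server.strategy.FedAvgM(
    fraction_fit=0.1,
    min_available_clients=10,
    server_learning_rate=1.0,  # step size applied to the aggregated update
    server_momentum=0.9,       # non-zero momentum is what distinguishes FedAvgM from FedAvg
)
```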
FedAdam¶
Adaptive Federated Optimization using Adam (FedAdam) strategy, based on https://arxiv.org/abs/2003.00295
Parameters¶
Uses the same parameters as FedAvg, as well as the following:
initial_parameters (Weights, optional): Initial global model parameters.
eta (float, optional): Server-side learning rate. Defaults to 1e-1.
beta_1 (float, optional): Momentum parameter. Defaults to 0.9.
beta_2 (float, optional): Second moment parameter. Defaults to 0.99.
tau (float, optional): Controls the degree of adaptability for the algorithm. Defaults to 1e-3.
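A minimal sketch, assuming the same flwr layout as above: the adaptive strategies keep optimizer state on the server, so initial global parameters are passed in. The zero-valued weights below are placeholders; in practice they come from your model (e.g. a Keras model's get_weights()).

```python
import numpy as np
import flwr as fl

# Placeholder initial weights (the Weights type is a list of NumPy ndarrays);
# replace with the weights of your actual model.
initial_weights = [np.zeros((10, 5)), np.zeros(5)]

strategy = fl.server.strategy.FedAdam(
    initial_parameters=initial_weights,
    eta=1e-1,     # server-side learning rate
    beta_1=0.9,   # first-moment (momentum) decay
    beta_2=0.99,  # second-moment decay
    tau=1e-3,     # adaptability / numerical-stability constant
)
```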
FedAdagrad¶
Adaptive Federated Optimization using Adagrad (FedAdagrad) strategy, based on https://arxiv.org/abs/2003.00295
Parameters¶
Uses the same parameters as FedAvg, as well as the following:
initial_parameters (Weights, optional): Initial global model parameters.
eta (float, optional): Server-side learning rate. Defaults to 1e-1.
beta_1 (float, optional): Momentum parameter. Defaults to 0.0. Note that standard Adagrad does not use momentum, so beta_1 is usually kept at 0.0.
tau (float, optional): Controls the degree of adaptability for the algorithm. Defaults to 1e-3. A smaller tau yields a higher degree of adaptability in the server-side learning rate.
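A hedged sketch along the same lines: FedAdagrad is typically used without momentum (beta_1 left at its 0.0 default), and tau is the main knob for how adaptive the server-side learning rate is.

```python
import numpy as np
import flwr as fl

initial_weights = [np.zeros((10, 5)), np.zeros(5)]  # placeholder model weights

strategy = fl.server.strategy.FedAdagrad(
    initial_parameters=initial_weights,
    eta=1e-1,   # server-side learning rate
    tau=1e-3,   # smaller tau -> more adaptive server-side learning rate
)
```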
FedYogi¶
Federated learning strategy using Yogi on the server side, based on https://arxiv.org/abs/2003.00295v5
Parameters¶
initial_parameters (Weights, optional): Initial global model parameters.
eta (float, optional): Server-side learning rate. Defaults to 1e-1.
beta_1 (float, optional): Momentum parameter. Defaults to 0.9.
beta_2 (float, optional): Second moment parameter. Defaults to 0.99.
tau (float, optional): Controls the degree of adaptability for the algorithm. Defaults to 1e-3.
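A final sketch under the same assumptions: FedYogi takes the same adaptive-optimizer arguments as FedAdam but applies the Yogi second-moment update on the server.

```python
import numpy as np
import flwr as fl

initial_weights = [np.zeros((10, 5)), np.zeros(5)]  # placeholder model weights

strategy = fl.server.strategy.FedYogi(
    initial_parameters=initial_weights,
    eta=1e-1,     # server-side learning rate
    beta_1=0.9,   # first-moment decay
    beta_2=0.99,  # second-moment decay
    tau=1e-3,     # adaptability / numerical-stability constant
)
```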