Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating

NeurIPS 2019

Dec 14, 2019
Federated Averaging (FedAvg) serves as the fundamental framework in Federated Learning (FL) settings. However, we argue that 1) the multiple steps of local updating result in gradient biases, and 2) there is an inconsistency between the target distribution and the optimization objectives under the FedAvg training paradigm. To tackle these problems, we first propose an unbiased gradient aggregation algorithm with a keep-trace gradient descent and gradient evaluation strategy. Then we introduce a meta updating procedure with a controllable meta training set to provide a clear and consistent optimization objective. Experimental results demonstrate that the proposed methods outperform the compared baselines with various network architectures in both the IID and non-IID FL settings.

Speaker: Xin Yao
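For context, a minimal sketch of the standard FedAvg round the abstract critiques: each client runs several local SGD steps, and the server averages the resulting client weights, weighted by local dataset size. With more than one local step, the averaged update is no longer an unbiased gradient estimate, which is the bias the talk targets. All function names and the toy quadratic loss here are illustrative, not from the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    # Several local SGD steps on one client (illustrative
    # quadratic loss 0.5 * ||w - data||^2).
    w = weights.copy()
    for _ in range(steps):
        grad = w - data
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data, client_sizes):
    # One FedAvg round: clients train locally, then the server
    # averages client weights, weighted by local dataset size.
    updates = [local_update(global_w, d) for d in client_data]
    total = sum(client_sizes)
    return sum(n / total * w for n, w in zip(client_sizes, updates))

# Toy run: two equally sized clients with data at 1.0 and 3.0;
# the global model converges toward their mean, 2.0.
clients = [np.array([1.0]), np.array([3.0])]
w = np.zeros(1)
for _ in range(20):
    w = fedavg_round(w, clients, [1, 1])
```

The proposed unbiased gradient aggregation replaces this weight-averaging step: clients keep the trace of their local descent so the server can recover and aggregate unbiased gradients instead of biased multi-step updates.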
