Communication Efficiency

  • Communication-Efficient Learning of Deep Networks from Decentralized Data [Paper] [Github] [Google] [Must Read]; introduces FedAvg, sketched after this list
  • Robust and Communication-Efficient Federated Learning from Non-IID Data [Paper]
  • FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization [Paper]
  • FedBoost: Communication-Efficient Algorithms for Federated Learning [Paper] [ICML 2020]
  • FetchSGD: Communication-Efficient Federated Learning with Sketching [Paper] [ICML 2020]
  • Throughput-Optimal Topology Design for Cross-Silo Federated Learning [Paper] [NIPS 2020]
  • Two-Stream Federated Learning: Reduce the Communication Costs [Paper] [2018 IEEE VCIP]
  • PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization [Paper] [NIPS 2019], Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi.
  • The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication [Paper] Sebastian U Stich and Sai Praneeth Karimireddy, 2019.
  • A Communication Efficient Collaborative Learning Framework for Distributed Features [Paper] [NIPS 2019 Workshop]
  • Active Federated Learning [Paper] [NIPS 2019 Workshop]
  • Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction [Paper] [NIPS 2019 Workshop]
  • FedSCR: Structure-Based Communication Reduction for Federated Learning [Paper] [TPDS 2020]
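
Most of the entries above build on the Federated Averaging (FedAvg) scheme from the first, must-read paper: each round, a subset of clients runs several local SGD epochs and the server averages the returned models, so far fewer communication rounds are needed. The snippet below is a minimal, illustrative sketch of that round structure on a toy NumPy linear model; the function names and toy data are ours, not taken from any of the papers.

```python
# Minimal FedAvg sketch (toy setup): each round, a fraction of clients trains
# locally for a few epochs and the server averages their weights in proportion
# to local dataset size. Communication is one model download and one upload
# per selected client per round, regardless of how many local epochs are run.
import numpy as np

def local_update(w, X, y, epochs=5, lr=0.1):
    """Plain SGD on a linear least-squares model; stands in for client training."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients, client_fraction=0.5):
    """One communication round of Federated Averaging."""
    m = max(1, int(client_fraction * len(clients)))
    selected = np.random.choice(len(clients), m, replace=False)
    updates, sizes = [], []
    for k in selected:
        X, y = clients[k]
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    # Weighted average of client models, weights proportional to data size.
    return sum(w * n for w, n in zip(updates, sizes)) / sizes.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    clients = []
    for _ in range(10):
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))
    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print("estimated weights:", w)
```

Raising the number of local epochs trades extra client computation for fewer uploads, which is the communication/computation trade-off that most of the papers in this section study or refine.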

Compression

  • Robust and Communication-Efficient Federated Learning from Non-IID Data [Paper], 2019
  • Expanding the Reach of Federated Learning by Reducing Client Resource Requirements [Paper] Sebastian Caldas, Jakub Konecny, H Brendan McMahan, and Ameet Talwalkar, 2018
  • Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training [Paper] [ICLR 2018] Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J Dally
  • Federated Learning: Strategies for Improving Communication Efficiency [Paper] [NIPS 2016 Workshop] [Google]
  • Natural Compression for Distributed Deep Learning [Paper] Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, and Peter Richtarik, 2019.
  • Gradient Descent with Compressed Iterates [Paper] [NIPS 2019 Workshop]
  • FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization [Paper], 2019
  • ATOMO: Communication-efficient Learning via Atomic Sparsification [Paper] [NIPS 2018], H. Wang, S. Sievert, S. Liu, Z. Charles, D. Papailiopoulos, and S. Wright.
  • vqSGD: Vector Quantized Stochastic Gradient Descent [Paper] Venkata Gandikota, Raj Kumar Maity, and Arya Mazumdar, 2019.
  • QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding [Paper] [NIPS 2017], Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic; see the quantization sketch after this list
  • cpSGD: Communication-efficient and differentially-private distributed SGD [Paper]
  • Federated Optimization: Distributed Machine Learning for On-Device Intelligence [Paper] [Google]
  • Distributed Mean Estimation with Limited Communication [Paper] [ICML 2017], Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, and H Brendan McMahan.
  • Randomized Distributed Mean Estimation: Accuracy vs Communication [Paper] Frontiers in Applied Mathematics and Statistics, Jakub Konecny and Peter Richtarik, 2016
  • Error Feedback Fixes SignSGD and other Gradient Compression Schemes [Paper] [ICML 2019], Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, and Martin Jaggi.
  • ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning [Paper] [ICML 2017], H. Zhang, J. Li, K. Kara, D. Alistarh, J. Liu, and C. Zhang.
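
Many of the compression entries above (QSGD, vqSGD, Natural Compression, Distributed Mean Estimation) quantize each gradient coordinate to a small number of levels before upload. Below is a simplified, illustrative sketch of QSGD-style unbiased stochastic quantization; it omits the paper's variable-length (Elias) coding stage, and the function names are ours.

```python
# Simplified sketch of QSGD-style stochastic quantization with s levels.
# Each coordinate of |g|/||g|| is rounded randomly to a nearby level so that
# the dequantized vector is an unbiased estimate of the original gradient.
import numpy as np

def quantize(g, s=4, rng=None):
    """Return (norm, signs, integer levels): the values a client would transmit."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(g)
    if norm == 0:
        return norm, np.sign(g), np.zeros_like(g, dtype=np.int64)
    scaled = np.abs(g) / norm * s              # each coordinate now lies in [0, s]
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (keeps the estimate unbiased).
    levels = lower + (rng.random(g.shape) < scaled - lower)
    return norm, np.sign(g), levels.astype(np.int64)

def dequantize(norm, signs, levels, s=4):
    """Server-side reconstruction of the (unbiased) gradient estimate."""
    return signs * levels / s * norm

if __name__ == "__main__":
    g = np.random.default_rng(1).normal(size=1000)
    est = dequantize(*quantize(g, s=4), s=4)
    print("relative error:", np.linalg.norm(est - g) / np.linalg.norm(g))
```

Because rounding is randomized in proportion to the fractional part, the reconstructed vector is an unbiased estimate of the true gradient, which is what lets standard SGD convergence arguments go through at reduced communication precision.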

Importance-Based Updating

  • eSGD: Communication Efficient Distributed Deep Learning on the Edge [Paper] [USENIX 2018 Workshop (HotEdge 18)]
  • CMFL: Mitigating Communication Overhead for Federated Learning [Paper] [ICDCS 2019]
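
Both entries decide which updates are worth communicating rather than shrinking every update. As a loose illustration of the CMFL idea, a client could measure the relevance of its local update by sign agreement with the previous global update and skip the upload when agreement is low; the threshold value and helper below are hypothetical, not taken from the paper.

```python
# Loose sketch of an importance-based upload filter in the spirit of CMFL:
# a client only uploads its local update when it is sufficiently "relevant",
# here measured as the fraction of parameters whose sign agrees with the
# previous global update. The 0.8 threshold is illustrative.
import numpy as np

def is_relevant(local_update, last_global_update, threshold=0.8):
    """Return True if the local update should be uploaded this round."""
    agreement = np.mean(np.sign(local_update) == np.sign(last_global_update))
    return agreement >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    global_upd = rng.normal(size=100)
    aligned = global_upd + 0.1 * rng.normal(size=100)  # mostly matching signs
    noisy = rng.normal(size=100)                       # unrelated update
    print(is_relevant(aligned, global_upd), is_relevant(noisy, global_upd))
```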