# Statistical Heterogeneity

- Federated Learning with Non-IID Data [Paper]
- Federated Optimization in Heterogeneous Networks [Paper]
- Performance Optimization for Federated Person Re-identification via Benchmark Analysis [Paper] [ACMMM 2020] [Github]
- Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data [Paper]
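
Several of the papers above benchmark against synthetically skewed client splits. A common recipe (used, for instance, in the federated visual classification paper listed under Others) draws per-client class proportions from a Dirichlet prior, where smaller α means stronger label skew. Below is a minimal numpy sketch; the function name, default α, and seed are illustrative, not taken from any one paper.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet label skew.

    Smaller alpha -> more heterogeneous (non-IID) client datasets.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        # Per-client share of this class, drawn from Dir(alpha, ..., alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

# Example: 10 clients over a toy 10-class label vector.
toy_labels = np.random.randint(0, 10, size=1000)
parts = dirichlet_partition(toy_labels, num_clients=10, alpha=0.1)
```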

## Distributed Optimization

- Asynchronous Federated Optimization [Paper]
- Agnostic Federated Learning [Paper] [ICML 2019]
- High Dimensional Restrictive Federated Model Selection with Multi-objective Bayesian Optimization over Shifted Distributions [Paper]
- FedSplit: An Algorithmic Framework for Fast Federated Optimization [Paper] [Berkeley] [NIPS 2020]
- Federated Accelerated Stochastic Gradient Descent [Paper] [Github] [Stanford] [NIPS 2020]
- Distributionally Robust Federated Averaging [Paper]
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization [Paper] [CMU] [NIPS 2020]
- Federated Bayesian Optimization via Thompson Sampling [Paper] [NUS] [MIT] [NIPS 2020]
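
To make the first entry above concrete: in asynchronous federated optimization, the server does not wait for a synchronization round; it blends each arriving client model into the global model with a mixing weight that shrinks as the client's base model grows stale. A minimal sketch over flattened numpy parameter vectors; the polynomial staleness decay is just one of the schedules the paper considers, and all names here are illustrative.

```python
import numpy as np

def async_server_update(w_server, w_client, client_round, server_round,
                        alpha=0.6):
    """FedAsync-style server step: mix in one (possibly stale) client model.

    client_round: the global round the client's model was based on.
    The effective mixing weight decays polynomially with staleness.
    """
    staleness = max(server_round - client_round, 0)
    alpha_t = alpha * (1.0 + staleness) ** -0.5  # illustrative decay schedule
    return (1.0 - alpha_t) * w_server + alpha_t * w_client
```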

## Model Aggregation

- Federated Learning with Matched Averaging [Paper] [ICLR 2020]
- Minibatch vs Local SGD for Heterogeneous Distributed Learning [Paper] [Toyota] [NIPS 2020]
- An Efficient Framework for Clustered Federated Learning [Paper] [Berkeley] [NIPS 2020]
- Robust Federated Learning: The Case of Affine Distribution Shifts [Paper] [MIT] [NIPS 2020]
- Bayesian Nonparametric Federated Learning of Neural Networks [Paper] [ICML 2019]
- Asynchronous Federated Learning for Geospatial Applications [Paper] [ECML PKDD Workshop 2018]
- Adaptive Federated Learning in Resource Constrained Edge Computing Systems [Paper] [IEEE Journal on Selected Areas in Communications, 2019]
- Towards Faster and Better Federated Learning: A Feature Fusion Approach [Paper] [ICIP 2019]
- Split Learning: Distributed and Collaborative Learning [Paper]
- Multi-objective Evolutionary Federated Learning [Paper]
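
The baseline that these aggregation papers modify is FedAvg's sample-size-weighted parameter average. A minimal sketch, assuming each client model arrives as a dict of numpy arrays keyed by layer name (the data layout here is an assumption, not a fixed API):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg baseline: average client models weighted by local dataset size.

    client_weights: list of dicts, each mapping layer name -> np.ndarray
    client_sizes:   list of local sample counts (the aggregation weights)
    """
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }
```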

## Knowledge Distillation

- Distributed Distillation for On-Device Learning [Paper] [Stanford]
- Ensemble Distillation for Robust Model Fusion in Federated Learning [Paper]
- Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge [Paper] [USC]
- One-Shot Federated Learning [Paper]
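
Ensemble-distillation methods such as the robust model fusion paper above fuse clients not by averaging parameters but by distilling the ensemble's averaged logits into the server model on auxiliary unlabeled data. A hedged PyTorch sketch of that server-side step; the model and loader objects, temperature, and optimizer settings are assumptions rather than any paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def distill_server_model(server_model, client_models, public_loader,
                         temperature=2.0, lr=1e-3, epochs=1):
    """Fit the server model to the clients' averaged soft labels."""
    opt = torch.optim.Adam(server_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in public_loader:  # labels are unused: data is unlabeled
            with torch.no_grad():
                # Teacher signal: mean of the client logits on this batch.
                teacher = torch.stack([m(x) for m in client_models]).mean(0)
                soft_targets = F.softmax(teacher / temperature, dim=1)
            student_log_probs = F.log_softmax(
                server_model(x) / temperature, dim=1)
            loss = F.kl_div(student_log_probs, soft_targets,
                            reduction="batchmean") * temperature ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return server_model
```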

## Personalization & Meta Learning

- Personalized Federated Learning with Moreau Envelopes [Paper] [NIPS 2020]
- Lower Bounds and Optimal Algorithms for Personalized Federated Learning [Paper] [KAUST] [NIPS 2020]
- Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach [Paper] [MIT] [NIPS 2020]
- Improving Federated Learning Personalization via Model Agnostic Meta Learning [Paper] [NIPS 2019 Workshop]
- Federated Meta-Learning with Fast Convergence and Efficient Communication [Paper]
- Federated Meta-Learning for Recommendation [Paper]
- Adaptive Gradient-Based Meta-Learning Methods [Paper]
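
Moreau-envelope personalization (the first paper above) gives each client a personalized model θᵢ that stays close to the global model w by minimizing fᵢ(θ) + (λ/2)·‖θ − w‖². A toy numpy sketch of that inner problem solved by plain gradient descent; the caller-supplied gradient oracle and all hyperparameters are illustrative.

```python
import numpy as np

def personalize(w_global, grad_local_loss, lam=15.0, lr=0.01, steps=50):
    """Approximately solve  min_theta  f_i(theta) + (lam/2)||theta - w||^2.

    grad_local_loss(theta): gradient of the client's own loss f_i at theta
    (assumed to be supplied by the caller).
    """
    theta = w_global.copy()
    for _ in range(steps):
        # Local-loss gradient plus the proximal pull toward the global model.
        g = grad_local_loss(theta) + lam * (theta - w_global)
        theta = theta - lr * g
    return theta

# Toy usage with a quadratic local loss f_i(theta) = 0.5 * ||theta - b||^2.
b = np.array([1.0, -2.0])
theta_i = personalize(np.zeros(2), lambda t: t - b)
```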

## Multi-task Learning

- MOCHA: Federated Multi-Task Learning [Paper] [NIPS 2017] [Slides]
- Variational Federated Multi-Task Learning [Paper]
- Federated Kernelized Multi-Task Learning [Paper]
- Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints [Paper] [NIPS 2019 Workshop]
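
Clustered federated learning (the last entry above) recursively splits clients whose update directions conflict, as measured by the cosine similarity of their model deltas. A simplified numpy sketch of a single bipartition step: seed two clusters with the most dissimilar pair of clients, then assign everyone else by similarity. The seeding heuristic is a simplification of the paper's scheme, not its exact procedure.

```python
import numpy as np

def bipartition_clients(updates):
    """Split clients into two groups by cosine similarity of their updates.

    updates: (num_clients, dim) array of flattened model deltas.
    """
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T  # pairwise cosine similarities
    # Seed the clusters with the most mutually dissimilar pair of clients.
    i, j = np.unravel_index(np.argmin(sim), sim.shape)
    cluster_a = [k for k in range(len(sim)) if sim[k, i] >= sim[k, j]]
    cluster_b = [k for k in range(len(sim)) if sim[k, i] < sim[k, j]]
    return cluster_a, cluster_b
```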

## Others

- Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms [Paper] [NIPS 2020]
- Distributed Fine-tuning of Language Models on Private Data [Paper] [ICLR 2018]
- Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating [Paper] [NIPS 2019 Workshop]
- The Non-IID Data Quagmire of Decentralized Machine Learning [Paper]
- Robust and Communication-Efficient Federated Learning from Non-IID Data [Paper] [IEEE Transactions on Neural Networks and Learning Systems]
- FedMD: Heterogenous Federated Learning via Model Distillation [Paper] [NIPS 2019 Workshop]
- First Analysis of Local GD on Heterogeneous Data [Paper]
- SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning [Paper] [ICML 2020]
- On the Convergence of FedAvg on Non-IID Data [Paper] [OpenReview]
- Local SGD Converges Fast and Communicates Little [Paper]
- Federated Adversarial Domain Adaptation [Paper] [ICLR 2020]
- LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on Medical Data [Paper]
- On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods [Paper] [Rejected in ICML 2020]
- Overcoming Forgetting in Federated Learning on Non-IID Data [Paper] [NIPS 2019 Workshop]
- FedMAX: Activation Entropy Maximization Targeting Effective Non-IID Federated Learning [Video] [NIPS 2019 Workshop]
- Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification [Paper] [NIPS 2019 Workshop]
- Fair Resource Allocation in Federated Learning [Paper]
- Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data [Paper]
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [Paper] [NIPS 2019 Workshop]
- A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication [Paper] [NIPS 2018]
- Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent [Paper] [NIPS 2017]
- Communication Efficient Decentralized Training with Multiple Local Updates [Paper]
- MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling [Paper]
- SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum [Paper]
- Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning [Paper] [AAAI 2019]
- On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization [Paper] [ICML 2019]
- Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data [Paper] [NIPS 2019 Workshop]
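
As one worked example from this list: SCAFFOLD counteracts client drift under non-IID data with control variates, a server-side c and a per-client cᵢ, so each local step uses the corrected gradient g − cᵢ + c; the client control variate is then refreshed from the net movement over the round (the paper's "option II"). A numpy sketch over flattened parameters, with the stochastic gradient oracle and hyperparameters assumed.

```python
import numpy as np

def scaffold_client_round(w_global, c_global, c_client, grad_fn,
                          lr=0.1, local_steps=10):
    """One SCAFFOLD client round of drift-corrected local SGD.

    grad_fn(w): the client's (stochastic) local gradient at w (assumed
    to be supplied by the caller).
    Returns the updated local model and new client control variate.
    """
    w = w_global.copy()
    for _ in range(local_steps):
        # Drift correction: subtract the client control, add the server's.
        w = w - lr * (grad_fn(w) - c_client + c_global)
    # "Option II" control-variate refresh from the round's net movement.
    c_client_new = c_client - c_global + (w_global - w) / (lr * local_steps)
    return w, c_client_new
```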