Trustworthiness

  • SecureBoost: A Lossless Federated Learning Framework [Paper]
  • Fair Resource Allocation in Federated Learning [Paper]
  • How To Backdoor Federated Learning [Paper] [Github]
  • Inverting Gradients - How easy is it to break privacy in federated learning? [Paper] [NIPS2020]

Fairness

  • Fair Resource Allocation in Federated Learning [Paper]

Adversarial Attacks

  • How To Backdoor Federated Learning [Paper] [Github]
  • Can You Really Backdoor Federated Learning? [Paper]
  • Model Poisoning Attacks in Federated Learning [Paper] [NIPS 2018 Workshop]
  • Inverting Gradients - How easy is it to break privacy in federated learning? [Paper] [NIPS2020]
  • Attack of the Tails: Yes, You Really Can Backdoor Federated Learning [Paper] [NIPS2020]
  • Membership Inference Attacks Against Machine Learning Models [Paper] [Github] [Cornell]

Data Privacy and Confidentiality

  • Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks [Paper] [NIPS2020]
  • Distributed Newton Can Communicate Less and Resist Byzantine Workers [Paper] [Berkeley] [NIPS2020]
  • Byzantine Resilient Distributed Multi-Task Learning [Paper] [NIPS2020]
  • A Scalable Approach for Privacy-Preserving Collaborative Machine Learning [Paper] [USC] [NIPS2020]
  • Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning [Paper] [NIPS 2019 Workshop]
  • Quantification of the Leakage in Federated Learning [Paper]
  • Federated learning: distributed machine learning with data locality and privacy [Blog]
  • A Hybrid Approach to Privacy-Preserving Federated Learning [Paper]
  • Analyzing Federated Learning through an Adversarial Lens [Paper]
  • Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attack [Paper]
  • Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning [Paper]
  • Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning [Paper]
  • Protection Against Reconstruction and Its Applications in Private Federated Learning [Paper]
  • Boosting Privately: Privacy-Preserving Federated Extreme Boosting for Mobile Crowdsensing [Paper]
  • Privacy-Preserving Collaborative Deep Learning with Unreliable Participants [Paper]
  • Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning [Paper] [ICLR 2021]
  • Dancing in the Dark: Private Multi-Party Machine Learning in an Untrusted Setting [Paper]
  • Privacy-Preserving Deep Learning via Weight Transmission [Paper]
  • Learning Private Neural Language Modeling with Attentive Aggregation [Paper] [Code] [IJCNN 2019]
  • Exploiting Unintended Feature Leakage in Collaborative Learning [Paper] [Github]
    • An attack method closely related to membership inference

Courses

  • Applied Cryptography [Udacity]
    • Cryptography basics

Differential Privacy

  • A Brief Introduction to Differential Privacy [Blog]
  • Deep Learning with Differential Privacy [Paper]
    • Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.
    • Introduces DP-SGD; a toy version of its core step follows this list
  • Differentially-Private Federated Linear Bandits [Paper] [Slides] [MIT] [NIPS2020]
  • Learning Differentially Private Recurrent Language Models [Paper]
  • Federated Learning with Bayesian Differential Privacy [Paper] [NIPS 2019 Workshop]
  • Private Federated Learning with Domain Adaptation [Paper] [NIPS 2019 Workshop]
  • cpSGD: Communication-efficient and differentially-private distributed SGD [Paper]
  • Differentially Private Data Generative Models [Paper]
  • Differentially Private Federated Learning: A Client Level Perspective [Paper] [Github] [NIPS 2017 Workshop]
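
A minimal NumPy sketch of the core DP-SGD step from "Deep Learning with Differential Privacy": clip each example's gradient to a fixed norm, then add Gaussian noise before averaging. The toy least-squares loss, the parameter names (`clip_norm`, `noise_multiplier`), and all values are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step for the toy least-squares loss 0.5*(x.w - y)^2."""
    # Per-example gradients, shape (n, d).
    per_example_grads = (X @ w - y)[:, None] * X
    # Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Add Gaussian noise scaled by the clipping bound, then average.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * grad

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w = dp_sgd_step(np.zeros(5), X, y)
```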

PATE

  • Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data [Paper]
    • Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, and Kunal Talwar.
    • Private Aggregation of Teacher Ensembles (PATE)
  • Scalable Private Learning with PATE [Paper]
    • Extension of PATE
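
A minimal sketch of PATE's noisy-max aggregation, the mechanism both papers above build on: each teacher votes for a label, Laplace noise is added to the vote histogram, and the student trains on the noisy argmax. Teacher training is out of scope here; the vote array and epsilon are illustrative.

```python
import numpy as np

def pate_aggregate(teacher_votes, num_classes, epsilon):
    """Noisy-max over teacher votes for a single query input."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Laplace noise with scale 1/epsilon yields the noisy-max mechanism.
    counts += np.random.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

votes = np.array([2, 2, 2, 1, 2, 0, 2, 2, 1, 2])  # 10 teachers, 3 classes
label = pate_aggregate(votes, num_classes=3, epsilon=0.5)
```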

Secure Multi-party Computation

Secret Sharing

  • Simple Introduction to Shamir's Secret Sharing and Lagrange Interpolation [YouTube]
  • Secret Sharing, Part 1 [Blog]: Shamir's Secret Sharing & Packed Variant
  • Secret Sharing, Part 2 [Blog]: Improving efficiency
  • Secret Sharing, Part 3 [Blog]
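
For readers following the posts above, a toy Python version of Shamir's scheme: the secret is the constant term of a random polynomial over a prime field, shares are point evaluations, and Lagrange interpolation at x = 0 recovers it. The prime and parameters are illustrative; real deployments should use a vetted library.

```python
import random

PRIME = 2**31 - 1  # a Mersenne prime; all arithmetic is in GF(PRIME)

def share(secret, threshold, num_shares):
    """Split secret into num_shares shares; any threshold of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(123456789, threshold=3, num_shares=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```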

SPDZ

  • Basics of Secure Multiparty Computation [YouTube]: based on Shamir's Secret Sharing

  • What is SPDZ?

    • Part 1: MPC Circuit Evaluation Overview [Blog]
    • Part 2: Circuit Evaluation [Blog]
  • The SPDZ Protocol [Blog]: implementation code included
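
The online phase these posts describe boils down to Beaver-triple multiplication on additive shares. Below is a toy two-party version, with a trusted dealer standing in for SPDZ's offline preprocessing (the real protocol generates triples with somewhat homomorphic encryption or oblivious transfer, and adds MACs for malicious security).

```python
import random

PRIME = 2**31 - 1

def share2(v):
    """Split v into two additive shares mod PRIME."""
    r = random.randrange(PRIME)
    return [r, (v - r) % PRIME]

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply shared x and y using a preprocessed triple (a, b, a*b)."""
    # Parties open eps = x - a and delta = y - b (safe: a and b mask x and y).
    eps = (sum(x_sh) - sum(a_sh)) % PRIME
    delta = (sum(y_sh) - sum(b_sh)) % PRIME
    # Each party recombines locally; the public term eps*delta is added once.
    z_sh = [(c_sh[i] + eps * b_sh[i] + delta * a_sh[i]) % PRIME for i in (0, 1)]
    z_sh[0] = (z_sh[0] + eps * delta) % PRIME
    return z_sh

a, b = random.randrange(PRIME), random.randrange(PRIME)  # dealer's triple
z_sh = beaver_mul(share2(7), share2(6),
                  share2(a), share2(b), share2(a * b % PRIME))
assert sum(z_sh) % PRIME == 42  # shares of the product reconstruct to 7 * 6
```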

Advanced (Not Recommended For Beginners)

  • Multiparty Computation from Somewhat Homomorphic Encryption [Paper]
    • SPDZ introduction
  • Practical Covertly Secure MPC for Dishonest Majority – or: Breaking the SPDZ Limits [Paper]
  • MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer [Paper]
    • Removes the crypto provider; the parties generate the multiplication triples on their own via oblivious transfer
  • Overdrive: Making SPDZ Great Again [Paper]
  • SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud

Build Safe AI Series

  • Building Safe A.I. [Blog]

    • A Tutorial for Encrypted Deep Learning
    • Use Homomorphic Encryption (HE); a toy example follows this list
  • Private Deep Learning with MPC [Blog]

    • A Simple Tutorial from Scratch
    • Use Multiparty Computation (MPC)
  • Private Image Analysis with MPC [Blog]

    • Training CNNs on Sensitive Data
    • Use SPDZ as MPC protocol
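
As a companion to the encrypted-deep-learning tutorial above, a toy Paillier example showing the additively homomorphic property such tutorials rely on: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The primes here are far too small for any real security and are chosen purely for illustration (needs Python 3.9+ for math.lcm and pow(x, -1, n)).

```python
import math, random

p, q = 1789, 2017              # toy primes; real Paillier uses ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
g = n + 1                      # standard choice of generator

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: the ciphertext product decrypts to the plaintext sum.
c = (enc(20) * enc(22)) % n2
assert dec(c) == 42
```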

MPC-Related Papers

  • Helen: Maliciously Secure Coopetitive Learning for Linear Models [Paper] [NIPS 2019 Workshop]

Privacy Preserving Machine Learning

  • Privacy-Preserving Deep Learning [Paper]
  • Privacy Partition: A Privacy-Preserving Framework for Deep Neural Networks in Edge Networks [Paper]
  • Practical Secure Aggregation for Privacy-Preserving Machine Learning [Paper] (Google)
    • Secure aggregation: computing a multiparty sum where no party reveals its individual update in the clear, even to the aggregator (a toy masking sketch follows this list)
    • Goal: securely compute sums of vectors in a constant number of rounds, with low communication overhead, robustness to client failures, and only one server with limited trust
    • Assumes basic knowledge of cryptographic primitives such as secret sharing and key agreement
  • Practical Secure Aggregation for Federated Learning on User-Held Data [Paper] (Google)
    • Highly related to Practical Secure Aggregation for Privacy-Preserving Machine Learning
    • Proposes four protocols, each gradually improving on the last, to meet the requirements of a secure aggregation protocol
  • SecureML: A System for Scalable Privacy-Preserving Machine Learning [Paper]
  • DeepSecure: Scalable Provably-Secure Deep Learning [Paper]
  • Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications [Paper]
    • Combines several MPC techniques within one framework
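
To ground the secure aggregation notes above, a toy version of the pairwise-masking idea: every pair of clients shares a random mask that one adds and the other subtracts, so all masks cancel in the server-side sum while individual updates stay hidden. The real protocol derives the masks via key agreement and adds secret sharing to tolerate dropouts; this sketch assumes honest clients, no dropouts, and pre-distributed seeds.

```python
import itertools
import numpy as np

def masked_updates(updates, pair_seeds, dim):
    """updates: {client_id: vector}; pair_seeds: {(i, j): seed} with i < j."""
    masked = {cid: vec.astype(np.int64).copy() for cid, vec in updates.items()}
    for (i, j), seed in pair_seeds.items():
        # Both clients expand the shared seed into the same mask.
        mask = np.random.default_rng(seed).integers(0, 2**32, size=dim)
        masked[i] += mask   # the lower-id client adds the mask
        masked[j] -= mask   # the higher-id client subtracts it
    return masked

clients = {0: np.array([1, 2]), 1: np.array([3, 4]), 2: np.array([5, 6])}
# Stand-in for pairwise key agreement: one shared seed per client pair.
seeds = {pair: idx for idx, pair in enumerate(itertools.combinations(clients, 2))}
masked = masked_updates(clients, seeds, dim=2)
total = sum(masked.values())               # the server only sees masked vectors
assert (total == np.array([9, 12])).all()  # the masks cancel in the aggregate
```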