PROJECT 3

Reconciling Privacy and Fairness in Federated Learning

Abstract: Federated learning (FL) is becoming a popular way to train machine learning models on data held across user devices. At the same time, there is increasing concern, and evidence, that such models may treat minorities or marginalized groups disparately or may amplify existing inequalities. In traditional machine learning, detecting and correcting these “unfair” outcomes typically requires the model creator to access information about individuals’ group membership or other sensitive attributes. However, particularly in the context of FL, individuals may be reluctant to reveal such information to the model creator. In this project, we aim to enable the training and auditing of fairness-aware FL models while providing differential privacy (DP) guarantees for the sensitive attributes. We will build on recent advances in the hybrid and shuffle models of differential privacy to develop, and empirically quantify the power of, algorithms that use sensitive attributes under strong privacy guarantees to debias federated learning.
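To make the idea concrete, the sketch below shows one standard building block such a system could use: each device privatizes its binary sensitive attribute with randomized response (a local-DP mechanism), and the auditor debiases the aggregate to estimate group proportions for fairness auditing. This is a minimal illustration of the general approach, not the project's actual algorithm; all function names and parameters here are hypothetical.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true sensitive bit with probability e^eps / (e^eps + 1).

    This satisfies epsilon-local differential privacy for a binary attribute.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias_mean(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s from noisy reports."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p_truth - 1.0) / (2.0 * p_truth - 1.0)

# Hypothetical example: 30% of 1000 devices belong to the protected group.
random.seed(0)
true_bits = [1] * 300 + [0] * 700
eps = 2.0
reports = [randomized_response(b, eps) for b in true_bits]
estimate = debias_mean(reports, eps)  # close to 0.3, up to sampling noise
```

In the shuffle model, the same noisy reports would additionally pass through an anonymizing shuffler, which amplifies the privacy guarantee and lets devices use less noise for the same end-to-end protection.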

 