PROJECT 3

Federated learning with secure aggregation: Assessing and improving its privacy

Project Leader: Konstantinos Psounis

Website: https://sites.usc.edu/kpsounis/

Abstract: Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users while avoiding moving the data off-device. However, while data never leaves users’ devices, privacy still cannot be guaranteed, since significant computations on users’ training data are shared in the form of trained local models. As a remedy, Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL by guaranteeing that the server can only learn the global aggregated model update, but not the individual model updates. While SA ensures that no additional information is leaked about an individual model update beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer, as information about an individual dataset can still leak through the aggregated model computed at the server. The first task of this proposal is to derive formal, tight upper bounds on how much information about each user’s dataset can leak through the aggregated model update. Intuitively, the larger the number of users participating in SA, the lower the privacy leakage, as other users’ data can be perceived as “noise” with respect to the user under consideration. Second, we will investigate whether FL with SA may offer meaningful differential privacy guarantees. Third, we will propose novel ways to improve privacy under FL with SA while minimizing the loss of accuracy. At a high level, we will investigate adding noise to a user’s gradient based on how likely it is for certain gradient updates to leak user data. In summary, our work aims to be the first to analytically study the level of privacy of FL with SA, and to propose novel approaches which enhance the privacy of the system.
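For concreteness, the core guarantee of SA, that the server learns only the sum of updates, can be illustrated with a minimal sketch. This is not the proposal’s protocol: deployed SA schemes use key agreement and secret sharing to tolerate user dropouts, whereas the helper `pairwise_masks` below (a name chosen for this sketch) simply generates cancelling pairwise masks in Python with NumPy.

```python
import numpy as np

def pairwise_masks(num_users, dim, seed=0):
    """Cancelling pairwise masks: for each pair (i, j), user i adds a
    shared random vector and user j subtracts it, so all masks vanish
    in the sum over users."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_users, dim))
    for i in range(num_users):
        for j in range(i + 1, num_users):
            shared = rng.normal(size=dim)  # derived from a pairwise key in real SA
            masks[i] += shared
            masks[j] -= shared
    return masks

rng = np.random.default_rng(42)
num_users, dim = 5, 4
updates = rng.normal(size=(num_users, dim))         # individual local updates
uploads = updates + pairwise_masks(num_users, dim)  # what each user sends

# Each upload looks random on its own, but the masks cancel in the sum,
# so the server recovers exactly the aggregate and no individual update.
aggregate = uploads.sum(axis=0)
assert np.allclose(aggregate, updates.sum(axis=0))
```

Note that the leakage studied in the first task is exactly what this sketch does not prevent: the aggregate itself still depends on every user’s data.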

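The third task’s idea of noising gradients can be sketched along the lines of the standard Gaussian mechanism from differentially private learning. The function `privatize_update` below is a hypothetical helper for illustration only: it clips an update and adds uniform Gaussian noise scaled to the clipping bound, whereas the proposal would adapt the noise to how likely a given gradient is to leak user data, which this sketch does not attempt.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip an update to L2 norm `clip_norm`, then add Gaussian noise
    scaled to that bound (the standard Gaussian-mechanism recipe)."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = max(np.linalg.norm(update), 1e-12)      # avoid division by zero
    clipped = update * min(1.0, clip_norm / norm)  # bound each user's influence
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
update = rng.normal(size=8)  # a user's local gradient/model update
private = privatize_update(update, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

Intuitively, under SA the server only sees the sum of the noised updates, so the per-user noise accumulates across users; each user can then afford less noise for the same aggregate protection, consistent with the abstract’s observation that privacy leakage decreases as the number of participants grows.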
 
