PROJECT 4

Fast Fair Decentralized Learning

Abstract: Decentralized machine learning, such as federated averaging, offers numerous advantages over traditional machine learning, including low-cost model fitting and greater data privacy. Often, however, these learning methods do not adequately address biases against protected groups, such as African Americans or women, nor do they fully address the model accuracy loss caused by data heterogeneity. While numerous methods have been proposed to address biases in machine learning, several challenges arise when mapping these methods to decentralized learning. For example, many traditional fairness approaches require direct access to large quantities of client data, which is not feasible in this domain. Current federated learning methods, meanwhile, may reduce the over-representation of particular clients during training, but many do not address biases against protected groups. We propose three complementary methods that extend our previous work on fairness in AI to decentralized learning. The first method will fine-tune models on the central server for particular demographics, thus improving model accuracy for heterogeneous and data-poor groups. The second and third methods will apply kernel distance-based techniques to reduce non-linear biases in features. One will apply these techniques to individual client data to reduce temporal heterogeneity, thus improving commercially available decentralized learning methods. The other will selectively sample data from clients, learn to remove biases from features, and feed these feature-debiasing models back to clients. Our previous work shows that these proposed methods are realistic to execute within a year and can substantially improve the fairness of several AI systems. Within 12 months, we will release open-source deliverables based on the developed methods.
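As a rough illustration of the first proposed method, the sketch below pairs standard federated averaging with server-side fine-tuning on small, demographic-specific calibration sets. It is a minimal sketch under assumed interfaces, not the project's implementation: the model objects, data loaders, and group names (`client_models`, `calibration_sets`) are hypothetical placeholders.

```python
# Sketch only: FedAvg aggregation followed by per-demographic fine-tuning
# on the server. Assumes PyTorch models and (features, labels) data loaders.
import copy
import torch


def federated_average(client_models):
    """Average client model parameters with equal weights (plain FedAvg)."""
    global_model = copy.deepcopy(client_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model


def fine_tune_per_group(global_model, calibration_sets, epochs=1, lr=1e-4):
    """Fine-tune a copy of the aggregated model on each demographic's
    small calibration set held at the server (hypothetical data source)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    group_models = {}
    for group, loader in calibration_sets.items():
        model = copy.deepcopy(global_model)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for features, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(features), labels)
                loss.backward()
                optimizer.step()
        group_models[group] = model
    return group_models
```

In this sketch, the per-group models would be returned to clients belonging to (or serving) the corresponding demographic, so that data-poor groups receive a model adapted to their distribution rather than the one-size-fits-all global average.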

 

PROJECT LEADER