Aug 14 – 18, 2023
Europe/Berlin timezone

Trilemma of Federated Learning: Privacy, Accuracy and Fairness

Aug 17, 2023, 2:30 PM
30m
Hörsaal

Electrical/Electronics Engineering & Information Technology [EI5] Technologies and environments for Web 3.0

Speaker

Kangsoo Jung (Inria)

Description

Federated learning (FL) is a framework for training machine learning models in a distributed, collaborative manner: participating clients process their data locally and share only updates of the training process. FL, which optimizes a statistical model’s parameters by minimizing a cost function over a collection of datasets stored locally by a set of clients, was proposed as a stepping stone towards privacy-preserving machine learning. Despite this privacy-aware design, training a model via FL has been shown to expose clients to leakage of private information, to a lack of personalization of the model, and to the possibility of a trained model that is fairer to some groups of clients than to others. To mitigate the privacy risks, differential privacy and its variants have been extensively studied and applied as the standard for formal privacy guarantees. However, in FL the clients often hold very diverse datasets representing heterogeneous communities. Therefore, while a recent focus of FL research is to build frameworks of personalized models that reflect the users’ diversity, it is of utmost importance both to protect the clients’ sensitive and personal information against threats to their privacy and to ensure that the trained model upholds fairness for the users. In this work, we analyze the trade-off between privacy, accuracy, and fairness in differentially private federated learning and propose a solution based on personalization.
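The mechanism the abstract refers to, differentially private federated averaging, can be sketched in a few lines. The following is a minimal, illustrative toy (not the speaker's actual method): function names, the least-squares local loss, and the server-side clip-then-add-Gaussian-noise step are all assumptions chosen to show the shape of one communication round.

```python
import random

def local_update(weights, data, lr=0.1):
    """One step of gradient descent on a client's local (x, y) pairs,
    using a least-squares loss (an illustrative choice)."""
    n = len(data)
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j, xi in enumerate(x):
            grad[j] += 2.0 * err * xi / n
    return [w - lr * g for w, g in zip(weights, grad)]

def clip(update, bound):
    """Scale an update down so its L2 norm is at most `bound`."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, bound / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_fedavg_round(global_w, client_datasets, clip_bound=1.0, noise_std=0.1):
    """One communication round: each client trains locally and sends only
    its clipped model delta; the server averages the deltas and adds
    Gaussian noise before applying them to the global model."""
    deltas = []
    for data in client_datasets:
        local_w = local_update(global_w, data)
        delta = [lw - gw for lw, gw in zip(local_w, global_w)]
        deltas.append(clip(delta, clip_bound))
    noisy_avg = [sum(coord) / len(deltas) + random.gauss(0.0, noise_std / len(deltas))
                 for coord in zip(*deltas)]
    return [gw + u for gw, u in zip(global_w, noisy_avg)]
```

The clipping bound limits any single client's influence on the aggregate (bounding sensitivity), and the noise scale then determines the formal privacy guarantee; the trilemma arises because larger noise degrades accuracy, and clients with atypical data, whose updates are clipped or drowned out most, bear that degradation unevenly.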


Keywords federated learning, privacy, differential privacy, fairness
