Abstract: We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership to a sensitive group), which …
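The Lipschitz claim can be illustrated numerically: if a fairness measure is Lipschitz in the model parameters, a small (e.g. privacy-induced) perturbation of those parameters can change the measure only proportionally. Below is a minimal sketch, not from the paper, using a soft demographic-parity gap (mean sigmoid score per group) for a linear model on synthetic data; all names and the choice of measure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X and a binary sensitive attribute s.
n = 5000
X = rng.normal(size=(n, 3))
s = rng.integers(0, 2, size=n)

def soft_dp_gap(theta, X, s):
    """Soft demographic-parity gap: absolute difference between the
    groups' mean sigmoid scores. Since sigmoid is 1/4-Lipschitz, this
    gap is Lipschitz-continuous in theta, mirroring the paper's claim
    for (smoothed) group fairness measures."""
    scores = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return abs(scores[s == 0].mean() - scores[s == 1].mean())

theta = np.array([1.0, -0.5, 0.25])
# Small perturbation, standing in for DP noise added to the parameters.
noise = rng.normal(scale=0.01, size=theta.shape)

gap_before = soft_dp_gap(theta, X, s)
gap_after = soft_dp_gap(theta + noise, X, s)

print(f"gap before: {gap_before:.4f}")
print(f"gap after:  {gap_after:.4f}")
print(f"change:     {abs(gap_after - gap_before):.6f}")
```

With a perturbation of scale 0.01, the gap moves by an amount on the order of the perturbation, as Lipschitz continuity predicts; hard-threshold (0/1) fairness measures would not enjoy this property pointwise, which is why a smoothed score is used here.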
Tags: Privacy, Differential Privacy, Classification, Models, cs.LG, arXiv
Related Posts
- Regression with Label Differential Privacy. (arXiv:2212.06074v2 [cs.LG] UPDATED)
- Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design. (arXiv:2211.03942v3 [cs.LG] UPDATED)
- FairDP: Certified Fairness with Differential Privacy. (arXiv:2305.16474v2 [cs.LG] UPDATED)
- Considerations on the Theory of Training Models with Differential Privacy. (arXiv:2303.04676v2 [cs.LG] UPDATED)
- (Local) Differential Privacy has NO Disparate Impact on Fairness. (arXiv:2304.12845v2 [cs.LG] UPDATED)