Abstract: Conventional matrix factorization relies on the centralized collection of users' data for recommendation, which may increase the risk of privacy leakage, especially when the recommender is untrusted. Existing differentially private matrix factorization methods either assume the recommender is trusted, or can only provide a uniform level of privacy protection for all users and items under an untrusted recommender. In this…
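To make the setting concrete, here is a minimal sketch of matrix factorization under local differential privacy, where each user clips and perturbs the gradient before it ever reaches the untrusted recommender. This is not the paper's algorithm; the function `ldp_mf`, its parameters, and the choice of the Laplace mechanism are illustrative assumptions.

```python
import numpy as np

def ldp_mf(ratings, n_users, n_items, k=8, lr=0.01, epochs=20,
           clip=1.0, epsilon=1.0, seed=0):
    """Sketch of locally differentially private matrix factorization.

    ratings: list of (user, item, value) triples. User factors stay on
    the user's device; the untrusted server only sees noisy gradients.
    """
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))  # user factors, kept locally
    V = rng.normal(scale=0.1, size=(n_items, k))  # item factors, held by server
    for _ in range(epochs):
        grad_V = np.zeros_like(V)
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            # Local step: the user updates their own factor privately.
            U[u] += lr * err * V[i]
            # Gradient the user would report for item i.
            g = err * U[u]
            # Clip to bound the L1 sensitivity, then add Laplace noise
            # (scale chosen as 2*clip/epsilon; one of several calibrations).
            g = g * min(1.0, clip / (np.abs(g).sum() + 1e-12))
            g += rng.laplace(scale=2.0 * clip / epsilon, size=k)
            grad_V[i] += g
        # The untrusted server aggregates only the noisy gradients.
        V += lr * grad_V
    return U, V
```

A per-user (or per-item) privacy budget `epsilon` would let the noise scale vary across users, which is what distinguishes non-uniform protection from the uniform scheme described above.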
Tags: Matrix Factorization, Differential Privacy, Privacy Protection, Recommendation, Decentralized, cs.LG, arXiv
Related Posts
- Trajectory Data Collection with Local Differential Privacy. (arXiv:2307.09339v1 [cs.DB])
- Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach. (arXiv:2309.08569v1 [cs.LG])
- Active Membership Inference Attack under Local Differential Privacy in Federated Learning. (arXiv:2302.12685v2 [cs.LG] UPDATED)
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples. (arXiv:2305.09241v3 [cs.LG] UPDATED)
- Towards Privacy-Aware Causal Structure Learning in Federated Setting. (arXiv:2211.06919v2 [cs.LG] UPDATED)