Score:0

Security of different federated learning schemes


All, I am working on federated learning, and here is my question:

Suppose two participants run federated learning together. Take logistic regression as an example: one party holds some features $X_1$ and the label $y$, while the other holds some other features $X_2$; their coefficients are denoted $W_1$ and $W_2$, respectively. Some schemes use "full encryption/masking", i.e., they encrypt or secret-share every intermediate result of the training process.
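To make "secret share" concrete, here is a toy sketch of additive secret sharing over a prime field; the modulus, the helper names, and the fixed-point encoding of $W_2X_2$ are illustrative assumptions rather than details of any particular scheme:

```python
import numpy as np

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(secret, rng):
    """Split an integer vector into two additive shares mod P."""
    s1 = rng.integers(0, P, size=secret.shape)
    s2 = (secret - s1) % P
    return s1, s2          # each share alone is uniformly random

def reconstruct(s1, s2):
    return (s1 + s2) % P

rng = np.random.default_rng(0)
wx = np.array([12345, 67890, 424242])  # fixed-point encoding of W_2 X_2
a, b = share(wx, rng)                  # one share per party
assert np.array_equal(reconstruct(a, b), wx)
```

In a fully masked scheme, the parties keep computing on such shares (or on ciphertexts) and never reconstruct an intermediate value in the clear.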

However, other schemes hide only part of the intermediate results. In the most "aggressive" variant, only $W_2X_2$ is encrypted or secret-shared; it is sent to the other party, which decrypts/reconstructs $W_2X_2$ and continues the computation in plaintext. These schemes argue that exposing $W_2X_2$ reveals no further information about $X_2$ and is therefore secure. A sketch of one training round under this variant follows.
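For concreteness, here is a minimal sketch of such a round, using plain logistic regression with gradient descent; the function names, the learning rate, and the way residuals are exchanged are my own illustrative assumptions, and the encryption of $u_2 = W_2X_2$ in transit is elided:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Party B holds X2 and w2; party A holds X1, the label y, and w1.
def party_b_forward(X2, w2):
    return X2 @ w2                    # u2 = W_2 X_2: encrypted in transit,
                                      # then decrypted by party A

def party_a_step(X1, y, w1, u2, eta):
    logits = X1 @ w1 + u2             # combine both parties' partial scores
    resid = sigmoid(logits) - y       # residuals, computed in plaintext
    w1 = w1 - eta * X1.T @ resid / len(y)
    return w1, resid                  # resid is sent back to party B

def party_b_update(X2, w2, resid, eta):
    return w2 - eta * X2.T @ resid / len(resid)

rng = np.random.default_rng(0)
n, eta = 100, 0.5
X1, X2 = rng.normal(size=(n, 3)), rng.normal(size=(n, 2))
y = (rng.random(n) < 0.5).astype(float)
w1, w2 = np.zeros(3), np.zeros(2)
for _ in range(50):                   # full training loop
    u2 = party_b_forward(X2, w2)
    w1, resid = party_a_step(X1, y, w1, u2, eta)
    w2 = party_b_update(X2, w2, resid, eta)
```

Note that party A sees a fresh $u_2 = X_2 w_2^{(t)}$ every round, which is exactly what the question below is about.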

My question is how to evaluate the security of the latter kind of scheme: are there attacks that let a single participant recover the other party's original data, or reconstruct a model that is almost identical to the final federated model?

Score:1

Here's one attack: https://arxiv.org/pdf/2011.09290.pdf
Hope that helps.
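For some intuition about why per-round exposure of $W_2X_2$ can be dangerous, here is a back-of-the-envelope argument and simulation. This is a generic linear-algebra observation under simplifying assumptions (plain gradient descent, a learning rate $\eta$ known to party A, and party A knowing the residuals $r^{(t)}$ it sends back), not the attack from the paper. Because $u^{(t)} = X_2 w_2^{(t)}$ and $w_2^{(t+1)} = w_2^{(t)} - \frac{\eta}{n} X_2^\top r^{(t)}$, party A observes

$$u^{(t+1)} - u^{(t)} = -\frac{\eta}{n}\, X_2 X_2^\top r^{(t)},$$

i.e., each round hands A one known linear probe of the Gram matrix $G = X_2X_2^\top$. After enough rounds, A can solve for $G$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d2, eta = 8, 3, 0.5
X2 = rng.normal(size=(n, d2))         # party B's private features
w2 = np.zeros(d2)

probes, deltas = [], []
for t in range(n):                    # n rounds give n probes of G
    u_before = X2 @ w2                # revealed W_2 X_2 at round t
    resid = rng.normal(size=n)        # stand-in for residuals A knows
    w2 = w2 - eta * X2.T @ resid / n  # party B's honest update
    u_after = X2 @ w2                 # revealed W_2 X_2 at round t + 1
    probes.append(resid)
    deltas.append(u_after - u_before)

# Party A solves  delta_u = -(eta / n) * G @ r  for the Gram matrix G.
R = np.stack(probes, axis=1)          # n x T matrix of known residuals
D = np.stack(deltas, axis=1)          # n x T matrix of observed changes
G_hat = -(n / eta) * D @ np.linalg.pinv(R)

print(np.allclose(G_hat, X2 @ X2.T))  # True: the Gram matrix leaks
```

Knowing $G = X_2X_2^\top$ determines $X_2$ up to an orthogonal transformation (e.g., via an eigendecomposition), which already leaks all pairwise inner products and distances between B's samples; whether that breaks a given scheme depends on its exact threat model.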


