
Introducing differential privacy in two different ways


I would like to investigate whether it is possible to introduce Differential Privacy (DP) into a model both by adding Laplace noise to the training data and by training with DP-SGD updates. Is this a valid way to introduce DP?

In other words, if we applied Laplace noise to the data on its own, the mechanism would satisfy (ε1, 0)-DP per epoch, and if we trained with DP-SGD on its own, it would satisfy (ε2, δ2)-DP per epoch. What would the (ε, δ) values be if we applied both? Is this an application of the composition theorem?
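One common reading of this setup is sequential composition: two mechanisms, each DP on its own, are run against the same dataset, and the basic composition theorem says the combined release satisfies (ε1 + ε2, δ1 + δ2)-DP. The sketch below is illustrative only; the function names and the example ε/δ values are assumptions, not part of the question.

```python
# Minimal sketch of basic (sequential) composition of two DP mechanisms.
# Names (laplace_perturb, basic_composition) and the numbers in the example
# are illustrative assumptions, not an established API.

import numpy as np


def laplace_perturb(data, sensitivity, eps):
    """Add Laplace noise calibrated for (eps, 0)-DP given an L1 sensitivity."""
    scale = sensitivity / eps
    return data + np.random.laplace(loc=0.0, scale=scale, size=data.shape)


def basic_composition(eps1, delta1, eps2, delta2):
    """Basic composition theorem: running an (eps1, delta1)-DP mechanism
    followed by an (eps2, delta2)-DP mechanism on the same data satisfies
    (eps1 + eps2, delta1 + delta2)-DP."""
    return eps1 + eps2, delta1 + delta2


# Example: a (1.0, 0)-DP Laplace perturbation of the data composed with
# one (0.5, 1e-5)-DP DP-SGD epoch.
noisy = laplace_perturb(np.zeros(10), sensitivity=1.0, eps=1.0)
eps, delta = basic_composition(1.0, 0.0, 0.5, 1e-5)
print(eps, delta)
```

One caveat worth noting: if DP-SGD is run only on the already-noised data (never touching the raw data), the training step is post-processing of a private release and costs no extra privacy budget; composition applies when both mechanisms access the original data. Advanced composition or a moment/Rényi accountant would give a tighter bound than the simple sum above.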

