Score:1

Differential Privacy: Gaussian Mechanism when $\epsilon >1$, Laplace Mechanism when $\epsilon = 0$


In Differential Privacy resources, the limiting cases of $\epsilon, \delta$ are not justified well enough.

For example, on Wikipedia, it is said that the Gaussian mechanism only works when $\epsilon < 1$. However, any Gaussian mechanism that satisfies, e.g., $(0.1, \delta)$-differential privacy already satisfies $(1, \delta)$-differential privacy, or $(5^{100}, \delta)$-differential privacy, am I correct?

Similarly, in some resources, the definition of DP is stated for $\epsilon \geq 0$, but then it is claimed that the Laplace mechanism achieves $(\epsilon, 0)$-differential privacy for any $\epsilon$. However, what about $\epsilon = 0$? The Laplace distribution with scale $\propto 1/\epsilon$ is not defined in this case. Do we even have any additive mechanism that satisfies $(0,0)$-differential privacy?

Edit: My understanding is the following. No additive-noise mechanism can achieve DP with $\epsilon = 0, \delta = 0$. This is simply impossible once we add any noise (assuming, of course, that the sensitivity is not $0$, in which case we do not need to add noise at all). Moreover, the Laplace mechanism achieves DP with $\epsilon > 0, \delta = 0$, which means any $\epsilon > 0, \delta \geq 0$ is also achievable. On the other hand, the Gaussian mechanism requires $\epsilon, \delta > 0$, so it does not add anything beyond the Laplace case in terms of feasibility (i.e., what is achievable and what is not). So I think the only remaining ambiguity is: do we have an additive mechanism that achieves DP with $\epsilon = 0$ and some $\delta > 0$?
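To make the $\epsilon \to 0$ issue concrete, here is a minimal sketch (function names and parameters are my own, purely illustrative) of the Laplace mechanism; the noise scale $\Delta f/\epsilon$ diverges as $\epsilon \to 0$, so the construction simply has no instance at $\epsilon = 0$:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon."""
    if epsilon <= 0:
        # The Laplace mechanism is undefined at epsilon = 0: the scale would be infinite.
        raise ValueError("epsilon must be strictly positive for the Laplace mechanism")
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # diverges as epsilon -> 0
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Smaller epsilon -> wider noise; epsilon = 0 has no counterpart.
for eps in [1.0, 0.1, 0.01]:
    print(eps, laplace_mechanism(42.0, sensitivity=1.0, epsilon=eps))
```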

Score:2

In the Gaussian mechanism case, it is important to distinguish the use of $\epsilon$ to parameterise the Gaussian distribution from its use to quantify the level of differential privacy. For any $0<\epsilon<1$ and $0<\delta<1$ we can construct the mechanism which adds noise distributed $$\mathcal N\!\left(0,\ \frac{2\log(1.25/\delta)\,(\Delta f)^2}{\epsilon^2}\right)$$ and then we have a statistical guarantee that this provides $(\epsilon,\delta)$-differential privacy, and indeed $(\epsilon',\delta')$-differential privacy for any $\epsilon'\ge\epsilon$ and $\delta'\ge\delta$. However, if we take, say, $\epsilon=2$ and $\delta=1/2$, although we can still construct a noise distribution $$\mathcal N\!\left(0,\ \frac{2\log(5/2)\,(\Delta f)^2}{4}\right),$$ we cannot use the theorem to say that we have $(2,0.5)$-differential privacy. Wikipedia is trying to express the limitations of what is provable using the Gaussian construction rather than limit the range of meaning of differential privacy.
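As a rough illustration of this calibration (a sketch with illustrative names, simply plugging in the variance $2\log(1.25/\delta)(\Delta f)^2/\epsilon^2$ above and refusing parameters outside the range the theorem covers):

```python
import numpy as np

def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise calibrated by the classical analysis, valid only for 0 < epsilon < 1."""
    if not (0 < epsilon < 1) or not (0 < delta < 1):
        raise ValueError("the classical analysis requires 0 < epsilon < 1 and 0 < delta < 1")
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    return true_value + rng.normal(loc=0.0, scale=sigma)

# Example: a count query with sensitivity 1, released with (0.5, 1e-5)-DP.
print(gaussian_mechanism(100.0, sensitivity=1.0, epsilon=0.5, delta=1e-5))
```

Nothing stops you from evaluating the same formula at $\epsilon = 2$; the point is only that the standard theorem no longer certifies the result as $(2, \delta)$-differentially private.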

Similarly, in the Laplace case the construction is not defined for $\epsilon=0$ and so cannot be employed with this parameter. Again, this is a limitation of the Laplace construction rather than a limit on the range of meaning of differential privacy.

In terms of $(0,0)$-differential privacy, this would mean that our algorithm $\mathcal A$ produces identically distributed outputs for all datasets. This means that $\mathcal A$ is dataset independent and cannot be modelled by adding noise to a dataset dependent algorithm.
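A toy sketch of this point (my own illustration, not part of the original answer): a mechanism that ignores its input is trivially $(0,0)$-differentially private, precisely because its output distribution is identical on every pair of datasets; but it is not additive noise on top of a dataset-dependent statistic.

```python
import numpy as np

def constant_mechanism(dataset, rng=None):
    """Release a value that does not depend on the dataset at all."""
    rng = rng or np.random.default_rng()
    # Same output distribution for every input dataset -> (0,0)-DP, but useless.
    return rng.normal(loc=0.0, scale=1.0)

# Two very different datasets, identically distributed outputs.
print(constant_mechanism([1, 2, 3]), constant_mechanism([1000, -7]))
```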

independentvariable:
Thanks for your great reply! Overall, do we have mechanisms that work with $\epsilon = 0$ and $\delta > 0$? Or does it fully depend on the structure of the dataset?
Daniel S:
We're getting to the limits of my knowledge here. Obviously, any mechanism provides $(0,1)$-differential privacy. For intermediate values of $\delta$ I suspect (but do not know) that this will be very dependent on $\mathcal A$ and its interactions with datasets.
independentvariable:
Exactly, that was my intuition too. Many thanks!