
Partition/Range wise privacy


Consider two data streams $a_1,\cdots, a_n \in [a_{min}, a_{max}]$ and $b_1,\cdots, b_n \in [b_{min}, b_{max}]$, such that the ranges $[a_{min}, a_{max}]$ and $[b_{min}, b_{max}]$ do not overlap.

A differentially private mechanism (Laplace noise with privacy budget $\epsilon$, zero mean, and scale parameter $\beta$) is applied independently to each stream to produce the DP-induced output streams $a^{\prime}_1,\cdots, a^{\prime}_n$ and $b^{\prime}_1,\cdots, b^{\prime}_n$. As I understand it, in differential privacy the partitions themselves do not need to be indistinguishable from each other. However, it seems that with a sufficiently large scale parameter, indistinguishability between partitions can be achieved. For example, if $a_{min} = 0, a_{max} = 200, b_{min} = 201, b_{max} = 400$ and we make the scale parameter significantly larger (say $\beta \geq 1000$), then the high-magnitude noise hides the original range of each input stream. This approach will certainly degrade the utility of the system, but that can be mitigated with application-specific post-processing.
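To make the intuition concrete, here is a minimal sketch of the setup. The stream values, sample size, and the choice $\beta = 1000$ (via a hypothetical sensitivity/epsilon pairing) are illustrative assumptions, not part of the question; the point is only that with $\beta$ far larger than the gap between the two ranges, the noisy outputs overlap heavily:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

# Hypothetical example streams in the two non-overlapping ranges.
a = rng.uniform(0, 200, size=1000)    # a_i in [a_min, a_max] = [0, 200]
b = rng.uniform(201, 400, size=1000)  # b_i in [b_min, b_max] = [201, 400]

def laplace_mechanism(stream, sensitivity, epsilon, rng):
    """Per-element Laplace mechanism with scale beta = sensitivity / epsilon."""
    beta = sensitivity / epsilon
    return stream + rng.laplace(loc=0.0, scale=beta, size=stream.shape)

# Illustrative parameters chosen so that beta = 200 / 0.2 = 1000, i.e. the
# noise scale dwarfs the 201-unit gap separating the two ranges.
a_noisy = laplace_mechanism(a, sensitivity=200.0, epsilon=0.2, rng=rng)
b_noisy = laplace_mechanism(b, sensitivity=200.0, epsilon=0.2, rng=rng)

# With beta = 1000 the empirical noisy ranges overlap heavily, so a single
# released value no longer reveals which partition it came from.
print("a' range:", a_noisy.min(), a_noisy.max())
print("b' range:", b_noisy.min(), b_noisy.max())
```

A single noisy sample is then close to uninformative about its source partition, at the cost of the utility loss mentioned above.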

Is it feasible to achieve such indistinguishability with this approach, or does there exist another privacy-preserving mechanism that can achieve this goal?
