Suppose we define privacy as a game where a machine $M$ has a hidden coin $b$, and on input a pair $m_0, m_1$ it always replies with an encryption of $m_0$ if $b=0$ and of $m_1$ if $b=1$. The adversary can send as many pairs $m_0, m_1$ as they want, and their goal is to guess $b$. If no adversary can do better than guessing at random (i.e., succeed with probability $1/2$), then privacy holds. (This is a simplified version of Benaloh's ballot privacy definition, if anyone is curious.)
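To make the game concrete, here is a minimal Python sketch of how I understand it. All names are mine: `machine_encrypt` stands in for the (unspecified) encryption step, and `choose_pair`/`guess` model the adversary's strategy.

```python
import secrets

def privacy_game(machine_encrypt, choose_pair, guess, num_queries=100):
    """Return True iff the adversary's final guess equals the hidden coin b."""
    b = secrets.randbelow(2)                      # the machine's secret coin
    transcript = []
    for _ in range(num_queries):
        m0, m1 = choose_pair(transcript)          # adversary submits the next pair
        transcript.append(machine_encrypt(m0 if b == 0 else m1))
    return guess(transcript) == b                 # privacy: Pr[True] should stay ~1/2
```

A blind adversary (say, `guess = lambda t: 0`) wins with probability exactly $1/2$; the definition requires that no adversary does noticeably better.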
Our toy machine $M$ works as follows:
- samples a random permutation $P$.
- applies $P$ to the base mapping ["Yes" = $A$; "No" = $B$; "Maybe" = $C$].
- replies to each query with the new code corresponding to the answer.
Example: if $P = [2,3,1]$, then the new mapping is "Yes" = $C$, "No" = $A$, "Maybe" = $B$, and $M$ will reply with $C$ for "Yes", $A$ for "No", and $B$ for "Maybe".
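For concreteness, here is how I picture $M$ in Python, under the assumption that $P$ is sampled once when the machine is set up and then reused for every reply (the class and names are mine):

```python
import random

BASE = {"Yes": "A", "No": "B", "Maybe": "C"}     # the base mapping

class ToyMachine:
    """Sketch of M, assuming P is sampled once at setup."""
    def __init__(self):
        codes = list(BASE.values())
        random.shuffle(codes)                    # sample a random permutation P of the codes
        self.table = dict(zip(BASE, codes))      # apply P to the base mapping

    def reply(self, answer):
        return self.table[answer]                # the permuted code for this answer
```

So `ToyMachine().reply("Yes")` returns one of $A$, $B$, $C$, depending on which $P$ was sampled.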
We know $M$ will use one of $3!=6$ possible permutations, and only $2$ of them keep "Yes" mapped to $A$, so in $2/3$ of cases "Yes" will not be coded as $A$. However, I cannot construct a distinguisher for the privacy game from this fact.
So, on the one hand, intuition says that the toy machine $M$ leaks information: looking at a code, we can make a reasonable guess about the underlying answer. On the other hand, I cannot formally prove that it is not private under the simplified definition.
I don't know whether this is a drawback of the (simplified) definition, faulty intuition, or my poor understanding of conditional probabilities. Any ideas?
Edit: There is no leakage if we cannot distinguish 'real usage' ($N$ replies for answers drawn from a (possibly) non-uniform, publicly known distribution over "Yes"/"No"/"Maybe") from the 'ideal case' (where $M$ outputs $N$ codes $A$, $B$, or $C$ uniformly at random, regardless of any probability distribution over the answers).
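In other words, what I mean by "no leakage" is that no distinguisher should tell the following two transcript distributions apart (a sketch; `reply` would be the toy machine's reply function, and the prior here is an arbitrary illustrative choice, not part of the question):

```python
import random

PRIOR = {"Yes": 0.6, "No": 0.3, "Maybe": 0.1}    # hypothetical, possibly non-uniform prior

def real_transcript(reply, n, prior=PRIOR):
    """'Real usage': n codes for answers drawn from a publicly known prior."""
    answers = random.choices(list(prior), weights=list(prior.values()), k=n)
    return [reply(a) for a in answers]

def ideal_transcript(n):
    """'Ideal case': n uniformly random codes, independent of any answer distribution."""
    return [random.choice(["A", "B", "C"]) for _ in range(n)]
```

Leakage would then mean that some distinguisher separates `real_transcript` from `ideal_transcript` with non-negligible advantage.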