Score:1

Is there a winning strategy based on this coin-flipping protocol?

cn flag

Given the coin flipping protocol:

  • A chooses $a \in_R \{0,1\}$ and computes $commit(a,r)$. She sends $commit(a,r)$ to B.
  • B chooses $b \in_R \{0,1\}$ and sends $b$ to A.
  • A sends $open(a,r)$ and B checks if the opening is valid.
  • Both output $coin = a \oplus b$

where $commit$ is the commitment scheme used by Alice.
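To make the message flow concrete, here is a minimal sketch of an honest run in Python, assuming a hash-based commitment $commit(a,r) = H(a \| r)$ (the concrete scheme is left abstract above):

```python
import hashlib
import secrets

# Assumption: a simple hash-based commitment H(a || r); the protocol above
# leaves the commitment scheme abstract.
def commit(a: int, r: bytes) -> bytes:
    return hashlib.sha256(bytes([a]) + r).digest()

# 1. Alice picks a and a random nonce r, and sends commit(a, r) to Bob.
a = secrets.randbelow(2)
r = secrets.token_bytes(32)
c = commit(a, r)

# 2. Bob picks b and sends it to Alice.
b = secrets.randbelow(2)

# 3. Alice sends the opening (a, r); Bob checks it against the commitment.
assert commit(a, r) == c, "invalid opening"

# 4. Both parties output coin = a XOR b.
coin = a ^ b
print("coin =", coin)
```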

I am trying to understand how it is possible that one of the parties has a winning strategy that ensures the outcome is 0. I assume this has to do with some form of dishonesty on one side, but I cannot see how it can be accomplished. I realize that Alice can tell whether she likes the outcome before producing her output, but then what enforces that the final decision is to her liking?

Also, I understand that each party has to output a bit, but if one of them is indeed dishonest, how is the final outcome determined?

Any advice would be much appreciated.

kodlu
sa flag
Provide details! What is com() and what are the other functions? Define them. Also, type it in MathJax instead of posting an image; those links are not permanent.
Anon
cn flag
Apologies, fixed now (I'm new to cryptography, so I assumed commitments were a widely known concept, including the abbreviations).
us flag
You seem sure that one party can bias the outcome. Where have you gotten that impression?
Anon
cn flag
I have read this in a paper that states: " The bias of a coin-tossing protocol measures the maximum influence of the adversary controlling a subset of malicious parties on the output of the honest parties, where the bias is 0 if the output is always uniformly distributed and the **bias is 1/2 if the adversary can force the output to be always (say) 1 .... naive protocols whose bias is 1/2**." (from: https://eprint.iacr.org/2012/643.pdf) - perhaps I'm mistaken that the protocol I wrote above is considered naive? (the paper doesn't state what's "naive").
Morrolan
ng flag
It's plausible that the paper gives adversaries the ability to abort the protocol and start a fresh instantiation of it. In that case, since $A$ learns the result before $B$, a malicious $A$ could achieve this.
Score:1
us flag

There is a subtle distinction here that must be made explicit. We must consider two flavors of security definitions:

  • "Security with Abort": A corrupt party is allowed to see their output, and then decide whether to allow the honest party to receive their output. (I'm describing what happens in the ideal world -- the protocol is secure if every attack against the protocol could have the same effect as some attack in the ideal world.)

    An ideal functionality for "coin-tossing-with-abort" does the following:

    1. toss a fair coin $c \gets \{0,1\}$;
    2. give $c$ to the adversary;
    3. wait for a bit $d$ ($d$ = "deliver") from the adversary;
    4. if $d=0$, then give output $\bot$ to the honest parties; otherwise if $d=1$ then give output $c$ to the honest parties.
  • "Guaranteed Output": A corrupt party cannot prevent an honest party from receiving an output.

    An ideal functionality for "coin-tossing-with-guaranteed-output" does the following:

    1. toss a fair coin $c \gets \{0,1\}$;
    2. give $c$ to all parties.
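For concreteness, here is a rough sketch of the two ideal functionalities described above, with the adversary modelled as a callback that decides delivery (the function names and the use of None for $\bot$ are my own):

```python
import secrets

def coin_toss_with_abort(adversary_deliver):
    c = secrets.randbelow(2)       # 1. toss a fair coin
    d = adversary_deliver(c)       # 2.-3. show c to the adversary, get delivery bit d
    return c if d == 1 else None   # 4. deliver c, or the abort symbol (None stands for ⊥)

def coin_toss_guaranteed_output():
    c = secrets.randbelow(2)       # 1. toss a fair coin
    return c                       # 2. everyone receives c; delivery cannot be blocked

# Example attack in the with-abort world: only deliver outcomes the adversary likes.
result = coin_toss_with_abort(lambda c: 1 if c == 0 else 0)  # either 0 or ⊥, never 1
```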

The protocol you described doesn't say what should happen if a corrupt Alice fails to open her commitment to Bob's satisfaction. If we tell Bob to abort in that case, we do indeed get a protocol that achieves security with abort. So in that sense, your intuition is correct.

But no matter what you add to the protocol, it will not achieve security with guaranteed output. This is the issue discussed in the paper you mentioned.

For the sake of your question, we define a "naïve" protocol as one with bias 1/2 -- i.e., one where an adversary can force a certain outcome with certainty. You described a protocol and asked whether it is naïve according to this definition. However, your protocol is underspecified -- it does not describe what an honest Bob should do if Alice fails (or chooses not) to open her commitment. So it's not possible to say whether this protocol is naïve according to this definition.

On page 3 of that paper, they consider the protocol you described, but with the following condition added: "If Alice aborts or does not open the commitment correctly, then Bob should sample his own random bit and use that bit as his output." They show that the resulting protocol has bias 1/4 and also state that 1/4 is the best bias any attack can achieve against it. So the resulting protocol is not naïve according to our definition.
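To see where the 1/4 comes from, here is a small simulation of the natural attack on that patched protocol: a malicious Alice who wants the outcome 0 opens only when $a \oplus b = 0$ and aborts otherwise, so Bob re-samples a fresh bit half the time. The empirical frequency of 0 comes out around 3/4, i.e. bias 1/4:

```python
import secrets

def one_run() -> int:
    a = secrets.randbelow(2)       # Alice's committed bit
    b = secrets.randbelow(2)       # Bob's bit
    if a ^ b == 0:
        return a ^ b               # Alice opens; the honest output is 0
    return secrets.randbelow(2)    # Alice aborts; Bob outputs a fresh random bit

trials = 100_000
zeros = sum(one_run() == 0 for _ in range(trials))
print(zeros / trials)              # ≈ 0.75, i.e. bias 1/4 away from uniform
```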

Score:1
in flag
  • Alice chooses a bit $a$ and sends $c = commit(a)$ to Bob.

  • Bob chooses a bit $b$ and returns it to Alice.

  • Alice computes $a\oplus b$ and, if it is $0$, she continues the protocol by sending $open(c)$. If it is not $0$, Alice aborts and restarts the protocol.

Alice has complete control over the protocol and can ensure that $a\oplus b$ is any value she chooses.
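Here is a small sketch of that strategy, assuming a hash-based commitment and a Bob who is willing to restart the protocol whenever Alice aborts (the names and loop structure are illustrative):

```python
import hashlib
import secrets

def commit(a: int, r: bytes) -> bytes:
    # Assumption: hash-based commitment H(a || r); any binding and hiding scheme works.
    return hashlib.sha256(bytes([a]) + r).digest()

def malicious_alice_forces(target: int) -> int:
    while True:
        a = secrets.randbelow(2)
        r = secrets.token_bytes(32)
        c = commit(a, r)           # Alice sends c = commit(a) to Bob
        b = secrets.randbelow(2)   # Bob returns his bit b
        if a ^ b == target:
            assert commit(a, r) == c   # Alice opens, Bob's check passes
            return a ^ b               # both output the value Alice wanted
        # Otherwise Alice aborts and the protocol is restarted from scratch.

print(malicious_alice_forces(0))   # always prints 0
```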
