
Question on a simulation-based security proof for Oblivious Transfer (OT) against semi-honest adversaries


I'm currently reading How To Simulate It – A Tutorial on the Simulation Proof Technique.

On p. 10, there is a simulation-based proof for 1-out-of-2 OT against semi-honest adversaries. Briefly, party $P_1$ holds the messages $b_0$, $b_1$ and party $P_2$ holds the choice bit $σ$. Since $P_1$ has no output, the proof constructs a simulator (p. 11 bottom – p. 12 middle) for $P_1$'s view. For $P_2$ it again constructs a simulator (p. 12 bottom – p. 14 middle), but while simulating the view it cannot output exactly $B(α,x_{1-σ}) \oplus b_{1-σ}$, so it outputs just $B(α,x_{1-σ})$. It then proves that the resulting two distributions are computationally indistinguishable: assuming by contradiction that a distinguisher $D$ exists (p. 13 middle – p. 14 middle), an efficient algorithm $\mathcal{A}$ can be built from $D$ that guesses the hard-core predicate $B$ of the pre-image with non-negligible probability.

Question: My question is based on the following definitions. On p. 13, the tutorial describes the assumed distinguisher $D$ as follows:

Assume by contradiction that there exists a non-uniform probabilistic polynomial-time distinguisher $D$, a polynomial $p(·)$ and an infinite series of tuples $(σ, b_σ, n)$ such that $$\Pr[D(σ, r_0, r_1; α,(B(α, x_σ) \oplus b_σ, B(α, x_{1−σ}))) = 1] − \Pr[D(σ, r_0, r_1; α,(B(α, x_σ) \oplus b_σ, B(α, x_{1−σ}) \oplus 1)) = 1] ≥ \dfrac{1}{p(n)}$$

On p. 13 middle it describes algorithm $\mathcal{A}$ :

Algorithm $\mathcal{A}$ is given $σ$, $b_σ$ on its advice tape, and receives $(1^n, α, r)$ as input. $\mathcal{A}$'s aim is to guess $B(α, f^{−1}_{α}(S(α; r)))$. In order to do this, implicitly and without knowing its actual value, $\mathcal{A}$ sets $x_{1−σ} = f^{−1}_{α}(S(α; r))$ by setting $r_{1−σ} = r$ (from its input). Next, algorithm $\mathcal{A}$ chooses a random $r_σ$, and computes $x_σ = S(α; r_σ)$ and $β_σ = B(α, x_σ) \oplus b_σ$. Finally, $\mathcal{A}$ chooses a random $β_{1−σ}$, invokes $D$ on input $(σ, r_0, r_1; α,(β_σ, β_{1−σ}))$, and outputs $β_{1−σ}$ if $D$ outputs 1, and $1 − β_{1−σ}$ otherwise. Observe that if $\mathcal{A}$ guesses $β_{1−σ}$ correctly then it invokes $D$ on $(σ, r_0, r_1; α,(B(α, x_σ) \oplus b_σ, B(α, x_{1−σ})))$, and otherwise it invokes $D$ on $(σ, r_0, r_1; α,(B(α, x_σ) \oplus b_σ, B(α, x_{1−σ}) \oplus 1))$. Thus, if $D$ outputs 1, then $\mathcal{A}$ assumes that it guessed $β_{1−σ}$ correctly (since $D$ outputs 1 with higher probability when given $B(α, x_{1−σ})$ than when given $B(α, x_{1−σ}) \oplus 1$).
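To make the case analysis concrete, here is a toy Monte Carlo sketch in Python (all names — `D`, `A`, `B_val`, `EPS` — are hypothetical stand-ins, not from the tutorial; `EPS` plays the role of $D$'s advantage $1/p(n)$). Because $β_{1−σ}$ is a single bit, a wrong random guess is necessarily $B(α, x_{1−σ}) \oplus 1$, so $D$ is always run on one of the two distributions; keeping the guess exactly when $D$ outputs 1 lifts $\mathcal{A}$'s success probability to $1/2 + \mathrm{EPS}/2$:

```python
import random

EPS = 0.2  # toy stand-in for D's distinguishing advantage 1/p(n)

def D(beta_seen, B_val):
    # Toy distinguisher: outputs 1 slightly more often when shown the
    # true hardcore bit B_val than when shown its complement B_val ^ 1.
    p = 0.5 + EPS / 2 if beta_seen == B_val else 0.5 - EPS / 2
    return 1 if random.random() < p else 0

def A(B_val):
    # A does not know B_val. It guesses a random bit beta; since beta is
    # one bit, a wrong guess equals B_val ^ 1, so D is invoked on either
    # B(...) or B(...) XOR 1. A keeps beta iff D outputs 1.
    beta = random.randrange(2)
    return beta if D(beta, B_val) == 1 else beta ^ 1

random.seed(0)
trials = 200_000
wins = 0
for _ in range(trials):
    B_val = random.randrange(2)  # the hidden hardcore bit for this trial
    wins += int(A(B_val) == B_val)
est = wins / trials
print(est)  # close to 0.5 + EPS/2 = 0.6
```

The estimate converges to $1/2 + \mathrm{EPS}/2$: when the guess is right (probability $1/2$), $D$ confirms it with probability $1/2 + \mathrm{EPS}/2$; when it is wrong, $D$ rejects it with the same probability, and flipping the bit makes $\mathcal{A}$ correct.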

Why is it that, although $\mathcal{A}$ chooses $β_{1-σ}$ at random, an incorrect guess means $D$ is invoked on $(σ, r_0, r_1; α,(B(α, x_σ) \oplus b_σ, B(α, x_{1−σ}) \oplus 1))$? Why is an incorrect random $β_{1-σ}$ equivalent to $B(α, x_{1−σ}) \oplus 1$?

