
Provable security: impossible reduction when messages are encrypted / semantic security with a function depending on the output of the adversary


I have a problem with a protocol for which I can prove security if the messages sent by the adversary are sent in the clear, but I can no longer prove security if the messages sent by the adversary are encrypted... and this is a bit strange, since I expect the protocol to also be secure in that second case.

More precisely, I'm considering a protocol in which a server Bob receives a message $k$ from Alice (this can be seen as an encryption of a bit string $d_0$ under the secret public key $t_k$) and sends back a message $y$ (which can also be seen as an encryption of some message $x$ under the same public key $k$ of Alice; note that, due to the shape of the protocol, I really need to use this exact same key). In the security proof, I'd like to show that no malicious Bob can find a secret value $\theta_\pi=f(x,d_0)$, which should otherwise look random to him.
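To fix ideas, here is a minimal sketch of one round, where `keygen`, `enc`, `dec`, `f` and `bob` are placeholder names and not the actual primitives of my protocol:

```python
import secrets

# Minimal sketch of one round of the protocol as described above;
# keygen/enc/dec/f/bob are placeholders, not the actual primitives.
def protocol_round(keygen, enc, dec, f, bob, bitlen=128):
    pk, sk = keygen()               # Alice's key pair; sk never leaves Alice
    d0 = secrets.randbits(bitlen)   # Alice's secret bit string
    k = enc(pk, d0)                 # the message k sent to Bob
    y = bob(pk, k)                  # Bob replies with (an encryption of) some x
    x = dec(sk, y)                  # only Alice can recover x, since she holds sk
    theta_pi = f(x, d0)             # the value a malicious Bob should not be able to predict
    return theta_pi
```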

The good news is that if I know $x$, I can invert $f$ and find $d_0$. Therefore, if a malicious Bob knows $x$, I could do a reduction to prove that if Bob can find $\theta_\pi$, then Bob also knows $d_0 = f^{-1}(x,\theta_\pi)$: this is a contradiction since the encryption is IND-CPA secure.
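In the "clear $x$" case, the reduction would look roughly like this (a minimal sketch with placeholder names; `f_inv` denotes the inversion of $f$ in its second argument):

```python
import secrets

# Sketch of the IND-CPA reduction when a malicious Bob reveals x in the clear.
def ind_cpa_reduction(keygen, enc, f_inv, malicious_bob, bitlen=128):
    pk, _sk = keygen()                     # in the real game the challenger holds sk
    d = [secrets.randbits(bitlen), secrets.randbits(bitlen)]
    b = secrets.randbits(1)                # the challenger's hidden bit
    k = enc(pk, d[b])                      # challenge ciphertext, playing the role of Alice's message
    x, theta_guess = malicious_bob(pk, k)  # Bob outputs x in the clear plus its guess of theta_pi
    d_guess = f_inv(x, theta_guess)        # theta_pi = f(x, d0), so this should equal d[b]
    return 0 if d_guess == d[0] else 1     # if Bob can guess theta_pi, this recovers b
```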

Unfortunately, this reduction does not work because the malicious Bob may not "know" $x$: they only send an encrypted version $y$ of it, and during the reduction I can't decrypt this $y$ since the key is known only to Alice (this is unavoidable in my case)...

Are there methods to still prove security in that case? I guess I need something like semantic security, but where the function being computed is not required to be efficiently computable (this is already the case in the current definition) and also depends on some outputs of the adversary.
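To be a bit more explicit about what I'm after (this is only my attempt at formalizing the property, not a standard definition): for every efficient adversary $\mathcal{B}$ that is given Alice's public key and the ciphertext and outputs a pair $(y,\tilde\theta)$, something like
$$ \Pr\Big[\,\tilde\theta = f\big(\mathsf{Dec}_{sk}(y), d_0\big) \;:\; (y,\tilde\theta) \leftarrow \mathcal{B}\big(pk, \mathsf{Enc}_{pk}(d_0)\big)\,\Big] \le \frac{1}{|\Theta|} + \mathsf{negl}(\lambda), $$
where $\Theta$ is the range of $f$. The difficulty is precisely that the function whose value must look random depends on the decryption of the adversary's own output $y$.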

EDIT

To answer Mikero: I think that in your argument we need to say something like this:

[diagram]

If we manage to get the right box, then I agree that the proof follows easily. But I'm not sure how to prove the equivalence between these two scenarios. I don't think this is a direct reduction to IND-CPA security, because the distinguisher also has access to $\theta_\pi$, which is linked to some information about a decryption of $y$. I guess it would also be nice to show that the first diagram is equivalent to

[diagram]

but I'm not sure how to prove that (of course $\mathrm{IdealFunc}_1$ is distinguishable from $\mathrm{IdealFunc}_3$; the question is what happens when we add the simulator on top).

Mikero: It sounds like Alice is honest in this situation, so the simulator should generate Alice's public key. Then the simulator would be able to decrypt ciphertexts encrypted under that public key and do your usual reduction. BTW, what happens if Bob simply echoes back the same encryption of $d_0$?
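Concretely, this suggestion seems to amount to something like the following sketch (placeholder names; just one possible reading of the comment):

```python
import secrets

# Possible reading: since Alice is honest, the simulator runs KeyGen itself,
# so it holds sk, can decrypt Bob's reply y, and can then attempt the usual
# "x is known" reduction. Whether it may encrypt a dummy d0 is exactly the
# point discussed in the replies below.
def simulated_alice(keygen, enc, dec, bob, bitlen=128):
    pk, sk = keygen()                    # the simulator generates Alice's key pair
    d0_dummy = secrets.randbits(bitlen)  # a dummy value in place of the real d0
    k = enc(pk, d0_dummy)
    y = bob(pk, k)
    x = dec(sk, y)                       # possible only because the simulator chose sk
    return x, y
```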
Maeher: Alternatively, the standard way to ensure that the adversary knows the plaintext would be to add a proof of knowledge to the ciphertext.
Léo Colisson: @Maeher Thanks for your answer. Unfortunately this cannot work in my case because I'm in a quantum setting: basically, for the adversary to learn $x$, it needs to destroy its state (and then the protocol is not useful anymore). I could of course do a "cut and choose" approach by asking it from time to time to destroy the state and from time to time to continue, but then I only get polynomial security :-( I also don't think that the approach proposed by Mikero works; I'll try to elaborate in another message.
Léo Colisson: @Mikero Thanks for your answer, but I'm not sure it works (at least in my case). If the simulator picks the public key of Alice, we need an argument to say that the simulator can encrypt a random $d_0'$ in place of $d_0$ (because $d_0$ should not leak to the simulator) without being detected. However, I don't think this is directly implied by IND-CPA (see my edit). Am I missing something?
Léo Colisson: @Mikero Also, if Bob echoes back $y$, then it would basically set $x = 00$ (which is perfectly fine).
Maeher: I'm not sure I understand how the quantum setting is an issue. Are you saying that $y$ is in fact *not* a ciphertext, but an arbitrary superposition of ciphertexts? First, why would you allow the adversary to do that? And second, as far as I can see you could then just as well compute the proof of knowledge on that superposition.
Léo Colisson: @Maeher It's because the goal of the protocol is to create a quantum state unknown to Bob. Basically, to obtain $y$, Bob is supposed to create a superposition $\sum_x | x \rangle | g(x) \rangle$. The function $g$ is 2-to-1, so after measuring the second register we get $| x \rangle+|x' \rangle$ on the first register, for the two preimages $x, x'$ of $g$. From this state we can obtain a new quantum state that will be used in other protocols (the adversary cannot fully describe this state since it can't invert $g$). If Bob wanted to learn one $x$, he could measure it... but that would destroy the state.
Léo Colisson: So $y$ is a classical ciphertext, but it is determined using a superposition that is still useful after the end of the protocol.
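Spelled out (normalization added, writing $y$ for the measured outcome as in the previous comment, and assuming $x$ ranges over $n$-bit strings):
$$ \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} |x\rangle\,|g(x)\rangle \;\xrightarrow{\ \text{measure 2nd register}\ }\; \frac{1}{\sqrt{2}}\big(|x\rangle + |x'\rangle\big)\,|y\rangle, \qquad y = g(x) = g(x'), $$
so Bob keeps the classical $y$ together with a superposition over the two preimages $x \neq x'$; he cannot name both preimages without inverting $g$, nor learn one of them without measuring (and hence destroying) the state.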

