
What is a reasonable statistical distance bound in an SZK construction?

Lev

Many works, such as [YCX21], take $2^{-40}$ to be an acceptable statistical distance for zero-knowledge-proof-based signatures, even when the security level is $\lambda = 128$. I was wondering whether there is any concrete analysis that motivates this choice.
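
(For concreteness, by statistical distance I mean the total-variation distance between the real and simulated transcript distributions, $$\Delta(X, Y) = \frac{1}{2}\sum_{z}\bigl|\Pr[X = z] - \Pr[Y = z]\bigr| \le 2^{-40},$$ which bounds the distinguishing advantage of even a computationally unbounded adversary that sees a single transcript.)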

The situation is even more peculiar for soundness. Some works do not use a soundness error of $2^{-\lambda}$ but a larger quantity (such as $2^{-80}$ for 128-bit security). This seems to contradict the very property the protocol is supposed to provide: in a signature scheme, for instance, a soundness error of $2^{-80}$ appears to cap security against forgery at roughly 80 bits.
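
(To put a number on the cost being saved: if the proof is built by parallel repetition of a protocol with per-round soundness error $\varepsilon$, then $t$ rounds give error $\varepsilon^t$, so with $\varepsilon = 1/2$ reaching $2^{-80}$ takes $t = 80$ rounds while $2^{-128}$ takes $t = 128$, a 60% increase in proof size. That explains the temptation, but not why $2^{-80}$ is considered safe.)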

I would have thought that, for clarity, the security level of the scheme should be kept consistent with the security levels of both the soundness and the zero-knowledge.

EDIT: To be clear, I am asking what justifies the use of 40-bit statistical zero-knowledge in protocols targeting 128-bit (or higher) security.

Wilson
For more information on the difference between the statistical and computational security parameters, see https://crypto.stackexchange.com/questions/91814/statistical-security-parameter-information-theoretically-secure
Wilson
I still think you may be confusing the statistical and computational security parameters. The typical "128 bits" figure refers to the computational security parameter, while in the academic literature ~40 bits has long been an accepted value for the statistical one. Why these particular numbers were chosen is another story.
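
To make the distinction precise (a sketch under the usual definitions): a computational parameter $\lambda$ means an adversary performing $t$ steps of work gains advantage at most roughly $t / 2^{\lambda}$, so the guarantee only holds against bounded attackers, whereas a statistical parameter $\sigma$ means the real and ideal distributions satisfy $\Delta \le 2^{-\sigma}$, bounding the advantage of even unbounded attackers per observed sample.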
Lev
Well, I think there are really three different parameters here: the security level of the underlying relation, the security level of the proof's soundness, and the security level of the (statistical) zero-knowledge. All three could differ, and why these numbers have been chosen is exactly the question. Naively, I would have thought that for clarity you would keep them consistent with one another; it's not clear to me why 40 bits of security for the SZK property is sufficient.
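
Just to fix notation for the three knobs: write $\lambda_{\mathrm{rel}}$ for the hardness of the underlying relation, $\lambda_{\mathrm{snd}} = -\log_2(\text{soundness error})$, and $\lambda_{\mathrm{zk}} = -\log_2(\text{simulation distance})$. The naive expectation is that the scheme's effective security is $\min(\lambda_{\mathrm{rel}}, \lambda_{\mathrm{snd}}, \lambda_{\mathrm{zk}})$, which is what makes $\lambda_{\mathrm{zk}} = 40$ look like the bottleneck.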
Wilson
Although you could parameterize the distributions by many parameters, in most works there are effectively only a couple. In particular, the reason some works distinguish two parameters (a statistical and a computational one) is that the efficiency and security of the algorithms are a direct function of those two parameters, and separating them allows different performance/security trade-offs. Insisting on a 128-bit statistical parameter (on top of a 128-bit computational one) may dramatically reduce efficiency.
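
As a toy illustration (a sketch assuming a noise-flooding or "smudging" step, as appears in many lattice-based ZK signatures; the numbers are hypothetical, not taken from [YCX21]): hiding a secret-dependent shift of magnitude at most $B$ under a uniform mask of width $M$ gives worst-case statistical distance exactly $B/M$, so a $2^{-\sigma}$ target forces the mask, and every value that carries it, to be $\sigma$ bits wider than $B$.

```python
from fractions import Fraction

def smudge_distance(shift: int, mask_width: int) -> Fraction:
    """Exact total-variation distance between U{0,...,M-1} and
    shift + U{0,...,M-1} for 0 <= shift <= M: the supports overlap in
    M - shift points, so the distance is shift / M."""
    return Fraction(shift, mask_width)

B = 2**10  # hypothetical bound on the secret-dependent shift being hidden
for sigma in (40, 80, 128):
    M = B << sigma  # mask range 2^sigma times wider than the shift bound
    d = smudge_distance(B, M)
    print(f"sigma={sigma:3d}: mask width {M.bit_length() - 1} bits, "
          f"distance = 2^{-sigma} ({float(d):.2e})")
```

Going from $\sigma = 40$ to $\sigma = 128$ adds 88 bits to every masked value, which is roughly where the "may dramatically reduce efficiency" comes from.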
Lev
I think you may be confused here. These are definitely three distinct parameters, all of which are important. However, the hardness of the underlying relation should be at least the soundness level, since breaking the relation is one way to forge.