
Signature size discrepancy for an MPC-in-the-Head signature


I have been puzzled by the following for a long time. In the (very well-known in the field) paper *Improved Non-Interactive Zero Knowledge with Applications to Post-Quantum Signatures*, the authors introduce the so-called cut-and-choose technique to produce ZK proofs. In particular, they show how to prove that $y=f(x)$ where $x$ is secret. For example, one can prove $\operatorname{AES}_k(m)=c$ while keeping $k$ secret.

They claim that for the parameters $m=631,n=64,\tau=23$ you achieve $128$ bits of security (actually you achieve more). They also claim that for a function that can be expressed with $1000$ multiplication gates (addition gates are free) they produce a signature of size $37$ KB.

The puzzling thing is that when applying the formula they give (here $\vert x\vert$ is the size of the secret, usually $16$ bytes): $$ \text{signature size} = 2\kappa + 3\kappa\tau\log\frac{m}{\tau} + \tau\left(\kappa\log(n) + \vert x\vert + 2000 + 2\kappa\right) $$

you get $14.3$ KB, much less than they claim! I have run this computation a hundred times. Since no one would want signatures larger than strictly necessary, what am I missing?
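For reference, here is a minimal sketch (not from the paper) that plugs the stated parameters into the formula above, assuming $\kappa = 128$ bits, a $16$-byte secret ($\vert x\vert = 128$ bits), and $2000 = 2\vert C\vert$ bits for the $1000$ multiplication gates; the variable names are mine:

```python
from math import log2

kappa = 128               # security parameter, in bits
m, n, tau = 631, 64, 23   # parameter set from the question
C = 1000                  # multiplication gates, contributing 2*C = 2000 bits per repetition
x = 128                   # |x|: 16-byte secret, in bits

# signature size = 2k + 3*k*tau*log(m/tau) + tau*(k*log(n) + |x| + 2*|C| + 2k)
bits = (2 * kappa
        + 3 * kappa * tau * log2(m / tau)
        + tau * (kappa * log2(n) + x + 2 * C + 2 * kappa))

print(bits / 8 / 1000)    # ~14.4 KB, far below the claimed 37 KB
```

This lands around $14$ KB, consistent with the value in the question and nowhere near $37$ KB.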

fgrieu:
The formula I find in the paper is for the "communication complexity" on the bottom right of the page marked 530, and is equivalent (after reordering as in the question) to $$2\kappa + 3\,\kappa\,\tau\log\frac M\tau + \tau\,(\kappa\,\log n + 2\vert C\vert + \vert w\vert + 2\kappa),$$ which is quite different from the question's formula as it was [originally](https://crypto.stackexchange.com/revisions/103305/1), and not only in the naming of the variables and of the result. In particular, there is a $\kappa$ term to the left of $\log n$. It does not account for the whole of the discrepancy, though.
miraunpajaro:
@fgrieu Thanks for taking a look. You are right, I copied it wrong in the question, but I used the right formula in my calculations (here $w=128$ and $\vert C \vert=1000$). So the discrepancy is still a major issue.
Geoffroy Couteau:
I am very familiar with the paper, but don't have the answer to that (even after pondering it for a few minutes). At this stage, I doubt anyone will have the answer off the top of their head unless they have the exact calculations in mind. I would suggest emailing the authors of the paper (and posting an answer to your own question here once they reply).
miraunpajaro:
@GeoffroyCouteau Sorry for the extremely late reply. I'll do that now!