Score:1

How to speed up Shamir secret share generation?


Let us say we have to generate Shamir secret shares for $n$ data points. Is there a way to speed up the implementation, apart from using Horner's rule for the polynomial evaluation?
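For concreteness, the baseline I have in mind evaluates the polynomial once per share with Horner's rule, roughly like this Python sketch (the prime $p$, the degree $k$, and the number of shares are just placeholder values):

import secrets

p = 2**127 - 1                      # illustrative prime modulus
k = 3                               # polynomial degree (threshold - 1)
n = 10                              # number of shares
secret = 42

# f(x) = c_k x^k + ... + c_1 x + c_0, with c_0 = secret
coeffs = [secret] + [secrets.randbelow(p) for _ in range(k)]

def f(x):
    # Horner's rule: k multiplications and k additions mod p per share
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

shares = [(x, f(x)) for x in range(1, n + 1)]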

SEJPM
Parallelization and vectorization should also be possible if you want to invest enough effort (with parallelization probably being easier thanks to the parallel nature of this task).
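To illustrate the parallelization idea: each share $f(j)$ depends only on the coefficients, so the evaluation points can be split across worker processes. A minimal sketch, with an illustrative prime, worker count, and chunking scheme rather than a tuned implementation:

from concurrent.futures import ProcessPoolExecutor

P = 2**127 - 1                      # illustrative prime modulus

def eval_chunk(args):
    # Evaluate f at every x in one chunk of evaluation points (Horner's rule).
    coeffs, xs = args
    out = []
    for x in xs:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        out.append((x, acc))
    return out

def shares_parallel(coeffs, n, workers=4):
    xs = list(range(1, n + 1))
    # Strided split of the evaluation points, one chunk per worker.
    chunks = [(coeffs, xs[w::workers]) for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        results = ex.map(eval_chunk, chunks)
    return sorted(share for chunk in results for share in chunk)

if __name__ == "__main__":
    shares = shares_parallel([42, 7, 11, 13], n=1000)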
Score:1

If you use the typical setup where the shares are $f(1), f(2),\ldots, f(n)$ for a polynomial $f(x)=c_kx^k+\cdots+c_1x+c_0$ of degree $k$, then you can use the calculus of finite differences. Skipping the initialisation step for the moment:

for j = 1, ..., n
    f = (f + Deltaf[1]) mod p                        # f now holds f(j)
    for i = 1, ..., k-1
        Deltaf[i] = (Deltaf[i] + Deltaf[i+1]) mod p  # Deltaf[k] is never updated
    output f as f(j)
end

The main loop takes $kn$ modular additions plus a counter increment per share (you can save $O(k^2)$ of the additions if you're feeling really stingy, since the higher-order differences are no longer needed in the last few iterations), which is very efficient.

The initial value of $f$ is $f(0)=c_0$. The initial values of $\Delta^if$ are a bit messy: $\Delta^if(0)=i!\sum_{j=i}^kc_jS(j,i)$, where $S(j,i)$ is a Stirling number of the second kind. Alternatively, you can evaluate $f(0),\ldots,f(k)$ directly, take iterated differences to obtain the $\Delta^if(0)$, and then use the recurrence above to produce the remaining $n-k$ shares.
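Putting both pieces together, here is a minimal Python sketch of the whole procedure (the prime is an illustrative placeholder and $k\ge 1$ is assumed); it initialises the differences by direct evaluation at $0, 1, \ldots, k$ and then runs the addition-only main loop:

p = 2**127 - 1                      # illustrative prime modulus

def shares_finite_differences(coeffs, n):
    # coeffs = [c_0, ..., c_k]; returns [(1, f(1)), ..., (n, f(n))]; assumes k >= 1
    k = len(coeffs) - 1

    def horner(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    # Initialisation: delta[i] = Delta^i f(0), obtained by evaluating
    # f(0), ..., f(k) and taking iterated forward differences in place.
    delta = [horner(x) for x in range(k + 1)]
    for i in range(1, k + 1):
        for j in range(k, i - 1, -1):
            delta[j] = (delta[j] - delta[j - 1]) % p

    # Main loop: k modular additions per share, no multiplications.
    f = delta[0]                    # f(0)
    out = []
    for x in range(1, n + 1):
        f = (f + delta[1]) % p      # f now holds f(x)
        for i in range(1, k):
            delta[i] = (delta[i] + delta[i + 1]) % p  # delta[k] stays constant
        out.append((x, f))
    return out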
