Score:3

dimension of Goppa codes


For the McEliece/Niederreiter cryptosystems, an efficient and seemingly secure choice of code is an irreducible binary Goppa code, defined by an irreducible polynomial $g(x)\in GF(2^m)[x]$ of degree $t$ and a support $L=(\alpha_0,\ldots,\alpha_{n-1})$ of distinct elements $\alpha_i\in GF(2^m)$.

The code itself consists of the $GF(2)$-valued vectors in the kernel of the parity-check matrix $$ H=\left( \begin{array}{cccc} g(\alpha_0)^{-1}&g(\alpha_1)^{-1}&\ldots&g(\alpha_{n-1})^{-1}\\ g(\alpha_0)^{-1}\alpha_0&g(\alpha_1)^{-1}\alpha_1&\ldots&g(\alpha_{n-1})^{-1}\alpha_{n-1}\\ \vdots&\vdots&\ldots&\vdots\\ g(\alpha_0)^{-1}\alpha_0^{t-1}&g(\alpha_1)^{-1}\alpha_1^{t-1}&\ldots&g(\alpha_{n-1})^{-1}\alpha_{n-1}^{t-1}\\ \end{array}\right). $$ Note that $H$ has full rank $t$ over $GF(2^m)$. To produce a parity-check matrix $H'$ over $GF(2)$, one replaces each entry of $H$ by its coordinate column vector over $GF(2)$ (with respect to some basis of the extension $GF(2^m)/GF(2)$), giving an $mt\times n$ binary matrix.
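
A minimal, self-contained sketch of this construction in plain Python (no libraries): build $H$ over $GF(2^m)$, expand it to $H'$ over $GF(2)$, and compute the rank of $H'$. The parameters $m=4$, $t=2$, the field modulus $x^4+x+1$, the polynomial $g(x)$ and the support below are toy choices of mine, not values from any standard.

```python
m, t = 4, 2
MODULUS = 0b10011                     # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply in GF(2^m) = GF(2)[x]/(MODULUS); elements are m-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= MODULUS
    return r

def gf_inv(a):
    """Inverse of a nonzero element via a^(2^m - 2)."""
    r, e = 1, (1 << m) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def poly_eval(coeffs, x):
    """Horner evaluation over GF(2^m); coeffs[i] is the coefficient of x^i."""
    r = 0
    for c in reversed(coeffs):
        r = gf_mul(r, x) ^ c
    return r

# Toy g(x) = x^2 + x + alpha^3 (alpha^3 = 0b1000), irreducible over GF(2^4)
# since the absolute trace of alpha^3 is 1; a real key generator would test
# irreducibility of a randomly drawn g.
g = [0b1000, 0b0001, 0b0001]
L = list(range(1 << m))               # support: all of GF(2^m), so n = 2^m
n = len(L)
assert all(poly_eval(g, a) != 0 for a in L)   # g has no roots in the support

# H over GF(2^m): entry (i, j) is g(alpha_j)^(-1) * alpha_j^i.
H = []
for i in range(t):
    row = []
    for a in L:
        v = gf_inv(poly_eval(g, a))
        for _ in range(i):
            v = gf_mul(v, a)
        row.append(v)
    H.append(row)

# H' over GF(2): replace each entry by its m coordinates in the polynomial
# basis (bit k of the integer representation), giving an (mt) x n matrix.
H_bin = [[(H[i][j] >> k) & 1 for j in range(n)]
         for i in range(t) for k in range(m)]

def gf2_rank(rows):
    """Rank over GF(2); rows are integers whose bits are the row entries."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

masks = [sum(bit << j for j, bit in enumerate(row)) for row in H_bin]
print(f"n = {n}, mt = {m*t}, rank(H') = {gf2_rank(masks)}, "
      f"so dim = {n - gf2_rank(masks)}")
```

The polynomial-basis expansion used here is just one choice of basis for $GF(2^m)/GF(2)$; any other basis gives a binary matrix with the same kernel, i.e. the same code.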

Almost all sources I consult list the resulting code as an $[n,k]=[n, n-mt]$ code, but the general construction (say, for alternant codes) only gives $k\ge n-mt$, i.e. $n-mt$ is a lower bound on the dimension $k$ of the resulting subspace of $GF(2)^n$.

My questions are:

  1. How often is the resulting dimension exactly $k=n-mt$? In the AG setup, I guess this is a Riemann–Roch dimension, so maybe an algebraic geometer can answer.
  2. Does it matter if we have redundant rows in the parity-check matrix $H'$? Does this affect implementations of the cryptosystem?

I guess this is addressed in the key generator from https://eprint.iacr.org/2017/595.pdf (Section 5.2), if only by returning a failure and restarting the key generation process when reduced row echelon form isn't achieved; they give 29% as the probability of success, based on the density of $GL_{mt}(GF(2))$ in $Mat_{mt\times mt}(GF(2))$, whose asymptotic value is $$ \prod_{i=1}^{\infty}\left(1-\frac{1}{2^i}\right)=0.288\ldots. $$
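
As a quick numerical sanity check of that constant (plain Python), the partial products of $\prod_{i\ge 1}(1-2^{-i})$ settle down very quickly:

```python
p = 1.0
for i in range(1, 61):
    p *= 1.0 - 2.0 ** -i
    if i in (4, 8, 16, 60):
        print(f"product up to i = {i:2d}: {p:.10f}")
# The last line printed is 0.2887880951, matching the 0.288... above; the tail
# beyond i = k changes the product by a relative amount of at most 2^-k.
```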


On second thought regarding 1), I guess it's more of a question about when the kernel of a linear map is defined over a subfield (e.g. the kernel of $x-\sqrt{2}y$ has dimension one over $\mathbb{Q}(\sqrt{2})$ but dimension zero when restricted to $\mathbb{Q}$).

Comment from Daniel S:
https://eprint.iacr.org/2017/595.pdf could improve their success rate by allowing their systematisation routine to swap columns if necessary. As it is, they’re checking for matrices where the first $mt$ columns have full row rank, which is strictly rarer than the full matrix having full row rank.
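
A sketch of the column-swapping systematisation this comment suggests (illustrative only, not from the paper; the function name is mine): Gaussian elimination over $GF(2)$ that looks for a pivot anywhere to the right when the expected pivot column fails, recording the swaps. It succeeds whenever the whole $r\times n$ matrix has full row rank, not only when its leading $r\times r$ block happens to be invertible.

```python
def systematise_with_swaps(M):
    """Reduce M (list of lists of 0/1, r rows, n >= r columns) so that the
    first r columns form an identity; return (reduced M, column permutation),
    or None if M does not have full row rank."""
    M = [row[:] for row in M]
    r, n = len(M), len(M[0])
    perm = list(range(n))                      # records the column swaps
    for i in range(r):
        # find a pivot 1 in rows i..r-1, columns i..n-1
        hit = next(((ri, ci) for ci in range(i, n) for ri in range(i, r)
                    if M[ri][ci]), None)
        if hit is None:
            return None                        # rank(M) < r
        ri, ci = hit
        M[i], M[ri] = M[ri], M[i]              # row swap into position i
        if ci != i:                            # column swap into position i
            for row in M:
                row[i], row[ci] = row[ci], row[i]
            perm[i], perm[ci] = perm[ci], perm[i]
        for rj in range(r):                    # clear column i elsewhere
            if rj != i and M[rj][i]:
                M[rj] = [a ^ b for a, b in zip(M[rj], M[i])]
    return M, perm
```

Note that whenever columns of $H'$ are swapped, the same permutation should be applied to the support $L$, since permuting coordinates gives the Goppa code of the permuted support.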
Score:2
  1. From a heuristic point of view (for parameters of cryptographic interest), a rank defect is pretty unlikely. A random binary $r\times n$ matrix (here $r=mt\le n$) has a less than $2^{-n+r}$ chance of having rank below $r$. There may be a more precise geometric answer.

  2. The code is probably being calculated by using row operations to put $H'$ in systematic form $(I|A)$ and then taking the generator matrix of the binary Goppa code as $(A^t|I)$ (see the sketch below). If $H'$ is rank deficient then the implementation will need to have a means of detecting that full systematic form has not been achieved and of junking the remaining rows of $H'$. If the rank of $H'$ is $r$ then the generator matrix will be $(n-r)\times n$, which is probably larger than the memory allocated (and adds to the already painful bandwidth issues). Some rows of the generator matrix will now have to be junked. We want to avoid redundant columns in order to keep information set decoding hard, so we'll probably want to randomly unsystematise the generator matrix before deleting rows, then permute the columns and resystematise. There's then the interesting issue that garbled versions of the deleted rows could be presented to our decoder and be successfully ungarbled to words that are not in the span of our generator matrix, while the decoder will likely use the resystematised matrix to map them back into that span. Implementations are unlikely to be ready for this and could exhibit unpredictable behaviour. All in all, it would probably be better to pick a new $g(X)$ and start over.
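
Here is a plain-Python sketch of the systematic-form route in point 2 (an illustration under my own naming, not anyone's actual implementation): row-reduce $H'$ towards $(I|A)$ without column swaps, report failure if that is impossible, and otherwise build $G=(A^t|I)$ and verify $H'G^t=0$. The demo runs on a random $8\times 16$ toy matrix; in key generation one would feed it $H'$.

```python
import random

def to_systematic(H):
    """Row-reduce H (r x n, lists of 0/1) towards (I | A).
    Returns the reduced matrix, or None if the leading r x r block is singular."""
    H = [row[:] for row in H]
    r = len(H)
    for i in range(r):
        piv = next((j for j in range(i, r) if H[j][i]), None)
        if piv is None:
            return None            # systematic form not achieved: restart keygen
        H[i], H[piv] = H[piv], H[i]
        for j in range(r):
            if j != i and H[j][i]:
                H[j] = [a ^ b for a, b in zip(H[j], H[i])]
    return H

def generator_from_systematic(Hs):
    """From Hs = (I_r | A), r x n, return G = (A^T | I_{n-r}), (n-r) x n."""
    r, n = len(Hs), len(Hs[0])
    k = n - r
    A = [row[r:] for row in Hs]                       # the r x k block A
    return [[A[i][j] for i in range(r)]               # column j of A ...
            + [1 if c == j else 0 for c in range(k)]  # ... followed by e_j
            for j in range(k)]

# tiny self-contained demo on a random 8 x 16 binary matrix
random.seed(1)
H_demo = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
Hs = to_systematic(H_demo)
if Hs is None:
    # leading 8x8 block singular: happens ~71% of the time for random matrices
    print("systematic form not reached: restart key generation")
else:
    G = generator_from_systematic(Hs)
    assert all(sum(h * gg for h, gg in zip(hrow, grow)) % 2 == 0
               for hrow in H_demo for grow in G)      # H G^T = 0 over GF(2)
    print("systematic form reached; generator is", len(G), "x", len(G[0]))
```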
