Score:2

Paper "How to Meet Ternary LWE Keys": What is t and how is it used


I have read this paper by A. May again and again, but, probably because I am new to this field, I don't manage to understand the MEET-LWE part.

In particular, in part 5 it says to choose a "randomly chosen target vector" $t \in \mathbb{Z}_q^r$. Later, it is said that a value of $s_1$ satisfying $\pi_r(As_1 + e_1) = t$ may be the solution of the LWE system, and something similar holds for $s_2$. Thus my questions are:

  • Is $t$ really completely random or did I miss something? Why is it of size $r$?
  • How can we reduce the search space of $s$ using the mentioned equation and its equivalent for $s_2$?
Score:1

So after some weeks I have managed to find some answers to my questions.

Is t really completely random or did I miss something?

Yes, $t$ is there to reduce the ambiguity of the search space. Everything starts from the observation that, for example: $ \begin{bmatrix} 1 \\ -1 \\ 0 \\ -1 \\ 1 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ -1 \\ 1 \\ 0 \\ \end{bmatrix} $

These two sums of vectors give the same result, so enumerating both is wasteful. The search space can therefore be reduced by discarding such redundant representations. To do that, some coordinates are fixed to constant values on the right-hand side of the LWE equation (i.e. on $As$, not on $s$ itself), which is possible because the projection $\pi_r$ that A. May introduces is a homomorphism.
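As a toy illustration (not code from the paper), these redundant representations can be counted by brute force; the vector below is the one from the example above, split into two ternary halves each carrying one $+1$ and one $-1$:

```python
from itertools import product

# Toy brute force: count the ways the ternary vector s above splits as
# s = s1 + s2, with s1 and s2 ternary and each having one +1 and one -1.
s = (1, -1, 0, -1, 1, 0)

def weight(v):
    # (number of +1 entries, number of -1 entries)
    return sum(1 for x in v if x == 1), sum(1 for x in v if x == -1)

reps = []
for s1 in product((-1, 0, 1), repeat=len(s)):
    if weight(s1) != (1, 1):
        continue
    s2 = tuple(a - b for a, b in zip(s, s1))
    if all(x in (-1, 0, 1) for x in s2) and weight(s2) == (1, 1):
        reps.append((s1, s2))

print(len(reps))  # 4: the two splits shown above, plus two more
```

Here every choice of which $+1$ and which $-1$ goes into $s_1$ gives a distinct representation, so $R = 2 \times 2 = 4$ for this vector.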

Why is it of size $r$?

$r=\lfloor\log_q(R)\rfloor$, where $R$ is the number of representations of the vector we are looking for. In the example above, $R\geq2$ because we found two ways to represent the vector. Then $\log_q(R)$ is the number of coordinates of the target space $\mathbb{Z}_q^r$ that we can fix to a constant value to reduce the search space. Since this number of coordinates must be an integer, we round it down.
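As a quick sanity check, the relation above is easy to evaluate (the values of $R$ and $q$ below are made up for illustration, not parameters from the paper):

```python
import math

# r = floor(log_q(R)): the number of coordinates that can be fixed to a
# constant while still expecting at least one representation to survive.
def projection_length(R: int, q: int) -> int:
    return math.floor(math.log(R, q))

# Hypothetical toy numbers: R = 2**30 representations, modulus q = 3329
print(projection_length(2**30, 3329))  # 2
```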

How can we reduce the search space of $s_1$ using the mentioned equation and its equivalent for $s_2$?

A way to exploit it is to apply the match-and-filter approach:

  1. Generate all the possible halves of $s_1$ as A. May describes
  2. Combine them and filter, i.e. reject the obtained $s_1$ if $\pi_r(As_1) \ne t$ or if it is not a valid ternary vector with the right Hamming weight
  3. Do the equivalent for $s_2$ (note that the equation involving $t$ is not the same, since we are in the right half of the tree)
  4. Combine these vectors with Odlyzko hashing, and check the LWE equation

Of course this has to be adapted for bigger trees.
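To make the steps above concrete, here is a toy, noiseless sketch of the match-and-filter idea (all parameters are invented, the error $e$ is set to zero, and a plain dictionary match stands in for the Odlyzko hashing the paper uses to absorb the error):

```python
import itertools
import random

# Toy match-and-filter sketch (invented parameters, not the paper's):
# solve A s = b (mod q) for a ternary s of weight 4 by splitting
# s = s1 + s2 and filtering both lists on the projection target t.
random.seed(0)
n, q, r = 6, 11, 1          # dimension, modulus, projection length

# Planted ternary secret with two +1 and two -1 entries
s_secret = [1, 1, -1, -1, 0, 0]
random.shuffle(s_secret)
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

def mat_vec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) % q for row in A)

b = mat_vec(A, s_secret)    # noiseless toy instance: e = 0

def ternary(n, ones, minus_ones):
    """All ternary vectors of length n with the given +1/-1 counts."""
    for plus in itertools.combinations(range(n), ones):
        rest = [i for i in range(n) if i not in plus]
        for minus in itertools.combinations(rest, minus_ones):
            v = [0] * n
            for p in plus:
                v[p] = 1
            for m in minus:
                v[m] = -1
            yield tuple(v)

def attack():
    # A real implementation picks t at random and retries on failure;
    # here we sweep all q^r targets so the toy run is deterministic.
    for t in itertools.product(range(q), repeat=r):
        # Left list: s1 candidates with pi_r(A s1) = t
        left = {}
        for s1 in ternary(n, 1, 1):
            As1 = mat_vec(A, s1)
            if As1[:r] == t:
                left.setdefault(As1, []).append(s1)
        # Right list: s2 candidates with pi_r(b - A s2) = t,
        # matched exactly on the full vector (no noise in this toy)
        for s2 in ternary(n, 1, 1):
            rhs = tuple((x - y) % q for x, y in zip(b, mat_vec(A, s2)))
            if rhs[:r] != t:
                continue
            for s1 in left.get(rhs, []):
                s = tuple(a + c for a, c in zip(s1, s2))
                if all(x in (-1, 0, 1) for x in s) and mat_vec(A, s) == b:
                    return s
    return None

sol = attack()
print(mat_vec(A, sol) == b)  # True: a valid ternary solution was found
```

Because every valid split of the secret survives for exactly one value of $t$, sweeping the targets is guaranteed to find a solution in this noiseless setting; with noise, the exact dictionary lookup would be replaced by the approximate matching of step 4.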
