
Trapdoor recovery from lattice-based preimage sampling


[GPV] and [MP] (references below) give constructions of the trapdoor function defined by $$ f_{\mathbf A} (\mathbf x) = \mathbf A \mathbf x, $$ where $\mathbf A \in \mathbb Z_q^{n \times m}$ is uniformly random, and the domain is $\{ \mathbf x \in \mathbb Z^m \mid \lVert \mathbf x \rVert \le \beta\}$. Given any $\mathbf y \in \mathbb Z_q^n$, the secret trapdoor allows for computing a preimage $\mathbf x$ of $\mathbf y$, i.e., $\mathbf A \mathbf x = \mathbf y$ and $\lVert \mathbf x \rVert \le \beta$.
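For concreteness, here is a minimal numpy sketch of evaluating $f_{\mathbf A}$; the parameters are toy values of my own choosing, nowhere near secure:

```python
import numpy as np

# Toy, completely insecure parameters, purely for illustration.
n, m, q, beta = 4, 16, 257, 5

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))   # uniformly random A in Z_q^{n x m}

x = rng.integers(-1, 2, size=m)       # a short integer vector, entries in {-1, 0, 1}
assert np.linalg.norm(x) <= beta      # x lies in the domain {x : ||x|| <= beta}

y = A @ x % q                         # f_A(x) = A x  (mod q); easy to evaluate,
print(y)                              # hard to invert within the norm bound
```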

The trapdoor in [GPV] is a full-rank set $\mathbf S \in \mathbb Z^{m \times m}$ such that $\mathbf A \mathbf S = \mathbf 0$. The trapdoor in [MP] is $\mathbf R$ such that $\mathbf A \begin{bmatrix} \mathbf R \\ \mathbf I \end{bmatrix} = \mathbf G$, where $\mathbf G$ is a special public gadget matrix. In either case, the trapdoor has an associated quantity $s$ that describes its quality: for $\mathbf S$, $s$ is the maximum norm of the columns of the Gram–Schmidt orthogonalization $\tilde{\mathbf S}$ of $\mathbf S$; for $\mathbf R$, $s$ is the largest singular value of $\mathbf R$.
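Both quality measures are easy to compute numerically. A sketch, where `S` and `R` are random small matrices standing in for the two trapdoor types (they are not actual trapdoors for any $\mathbf A$); it uses the fact that in a QR decomposition the absolute values of the diagonal of the triangular factor are exactly the Gram–Schmidt norms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random small matrices as stand-ins for the two trapdoor types
# (illustration only; these are not actual trapdoors for any A).
S = rng.integers(-2, 3, size=(16, 16)).astype(float)   # [GPV]-style short set
R = rng.integers(-1, 2, size=(8, 8)).astype(float)     # [MP]-style trapdoor

# [GPV] quality: for S = Q * Rfac with Q orthonormal, |Rfac[i, i]| is exactly
# the norm of the i-th Gram-Schmidt vector, so s is the max over the diagonal.
_, Rfac = np.linalg.qr(S)
s_gpv = np.abs(np.diag(Rfac)).max()

# [MP] quality: the largest singular value, i.e. the spectral norm, of R.
s_mp = np.linalg.norm(R, 2)

print(s_gpv, s_mp)
```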

Given any $\mathbf y$, a trapdoor of quality $s$ allows for sampling from the discrete Gaussian distribution over $\{\mathbf{x} \mid \mathbf A\mathbf x = \mathbf y\}$ of width $\sigma \ge s\cdot\omega(\sqrt{\log m})$, which gives $\lVert \mathbf x \rVert \le \sigma\sqrt m$ (with overwhelming probability).
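The $\lVert \mathbf x \rVert \le \sigma\sqrt m$ bound is easy to sanity-check empirically. A quick Monte Carlo sketch, assuming the "width" convention where the density is proportional to $\exp(-\pi \lVert \mathbf x \rVert^2/\sigma^2)$, and approximating each coordinate's discrete Gaussian by rounding a continuous one:

```python
import numpy as np

m, sigma, trials = 256, 20.0, 10_000   # sigma = Gaussian *width*, not std dev
rng = np.random.default_rng(2)

# The width-sigma Gaussian density exp(-pi ||x||^2 / sigma^2) has standard
# deviation sigma / sqrt(2*pi) per coordinate.  Approximate the discrete
# Gaussian over Z^m by rounding a continuous one (fine for sigma this large).
x = np.round(rng.normal(0.0, sigma / np.sqrt(2 * np.pi), size=(trials, m)))
norms = np.linalg.norm(x, axis=1)

print(norms.mean() / (sigma * np.sqrt(m)))    # concentrates near 1/sqrt(2*pi) ~ 0.4
print((norms <= sigma * np.sqrt(m)).mean())   # 1.0: the bound holds w.o.p.
```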

My question is about the reverse direction. Assume we have an oracle for Gaussian preimage sampling that outputs solutions to $\mathbf A \mathbf x = \mathbf y$ with $\lVert \mathbf x \rVert \le \beta$ for arbitrarily chosen $\mathbf y$. Can we recover some kind of trapdoor for $\mathbf A$ that lets us compute, with non-negligible probability, a preimage $\mathbf x'$ of a random $\mathbf y'$ that was never queried to the oracle?

One obvious attempt is to query for $\mathbf A \mathbf x = \mathbf 0$ to try to recover a "short" full-rank set $\mathbf S$ such that $\mathbf A \mathbf S = \mathbf 0$ (a sketch of this approach follows below). But this $\mathbf S$ doesn't seem short enough to compute solutions of length $\le \beta$. We could try lattice reduction, but I don't know the state of the art for this problem of obtaining an even shorter basis from an already-short basis.
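To make the attempt concrete, here is a sketch of the mechanics, assuming $q$ is prime, $\mathbf A$ has full row rank, and `preimage_oracle` stands in for the hypothetical oracle above. Given a collected full-rank $\mathbf S$, Babai's round-off turns an arbitrary (long) solution into one of norm at most $(m/2)\cdot\max_i \lVert \mathbf s_i \rVert$, which illustrates the "not short enough" gap:

```python
import numpy as np

n, m, q = 4, 16, 257            # toy parameters; q prime

def preimage_oracle(y):
    """Hypothetical: returns short x with A x = y (mod q) and ||x|| <= beta."""
    raise NotImplementedError

def collect_short_kernel_set(A):
    """Query the oracle on y = 0 until we hold m linearly independent short
    kernel vectors; together they form a full-rank S with A S = 0 (mod q)."""
    cols = []
    while len(cols) < m:
        x = preimage_oracle(np.zeros(n, dtype=int))
        if np.linalg.matrix_rank(np.column_stack(cols + [x])) == len(cols) + 1:
            cols.append(x)
    return np.column_stack(cols)

def any_solution(A, y):
    """Some (long) t with A t = y (mod q), via Gaussian elimination mod q."""
    M = np.concatenate([A, y.reshape(-1, 1)], axis=1) % q
    pivots, r = [], 0
    for c in range(m):
        rows = [i for i in range(r, n) if M[i, c] % q]
        if not rows:
            continue
        M[[r, rows[0]]] = M[[rows[0], r]]
        M[r] = M[r] * pow(int(M[r, c]), -1, q) % q
        for i in range(n):
            if i != r:
                M[i] = (M[i] - M[i, c] * M[r]) % q
        pivots.append(c)
        r += 1
    t = np.zeros(m, dtype=int)
    for i, c in enumerate(pivots):
        t[c] = M[i, -1]
    return t

def preimage_via_S(A, S, y):
    """Babai round-off: subtract the lattice vector S*round(S^{-1} t) from t.
    The output x satisfies A x = y (mod q), with
    ||x|| <= (m/2) * max_i ||s_i|| -- short-ish, but not <= beta."""
    t = any_solution(A, y)
    return t - S @ np.round(np.linalg.solve(S.astype(float), t)).astype(int)
```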

  1. [GPV] Gentry, Peikert, Vaikuntanathan, "Trapdoors for Hard Lattices and New Cryptographic Constructions." https://eprint.iacr.org/2007/432
  2. [MP] Micciancio, Peikert, "Trapdoors for Lattices: Simpler, Tighter, Faster, Smaller." https://eprint.iacr.org/2011/501
Answer:

The idea you describe (almost) works, but as you noticed, the “quality” of the resulting trapdoor is somewhat worse than what we seem to need to generate preimages of norm $\leq \beta$. However, the quality is good enough to generate preimages of norm, say, $\leq \beta \sqrt{m}$. This can suffice for applications if parameters are set up correctly.
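To make the accounting explicit (a sketch, assuming the collected preimages $\mathbf s_1, \ldots, \mathbf s_m$ are linearly independent and all have norm $\le \beta$): Gram–Schmidt orthogonalization never increases norms, so $$ \max_i \lVert \tilde{\mathbf s}_i \rVert \le \max_i \lVert \mathbf s_i \rVert \le \beta, $$ hence such an $\mathbf S$ supports Gaussian preimage sampling with width $\sigma = \beta \cdot \omega(\sqrt{\log m})$, yielding preimages of norm $\le \sigma \sqrt m = \beta \sqrt m \cdot \omega(\sqrt{\log m})$, i.e., a loss of roughly $\sqrt m$ relative to the oracle's own bound.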

For instance, this idea is worked out formally and used for “delegating” trapdoors in the CHKP’10 “bonsai trees” paper, and also in [MP] for its style of trapdoor.

It’s unknown whether what you have asked about can be done without any loss in quality, but if so it would be very surprising; I think most experts wouldn’t expect it to be possible.

In order for lattice reduction to work for the stated purpose, one would want to reduce the given vectors of norm $< \beta$ into vectors of norm $< \beta/ \sqrt{m}$ or so. This appears to be a very hard problem. Indeed, even just halving the lengths of some given vectors seems hard, for the intuitive reason that we could then halve again, and again, until the procedure stopped working, giving us a near-shortest lattice vector. In fact this is how some (exponential-time) shortest-vector algorithms work, by iterative halving.
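That self-improvement argument is just a short loop; a sketch, where `halving_oracle` is the hypothetical reduction step (nothing like it is known to be efficient):

```python
import numpy as np

def halving_oracle(vectors):
    """Hypothetical reduction step: given linearly independent lattice vectors
    of norm <= B, return such vectors of norm <= B/2, or None if it fails."""
    raise NotImplementedError

def near_shortest(vectors):
    """Iterate the halving step until it fails.  Each round halves the norm
    bound, so after about log2(beta / lambda_1) rounds the procedure stops,
    leaving us within a small factor of the shortest nonzero vector."""
    while True:
        shorter = halving_oracle(vectors)
        if shorter is None:
            return min(vectors, key=np.linalg.norm)
        vectors = shorter
```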


