Score:3

Solving subset sum via the LLL algorithm


I wrote code that solves the subset sum problem via the LLL algorithm, as given in Chapter 3 of the Handbook of Applied Cryptography (https://cacr.uwaterloo.ca/hac/).

I ran the code on ten random sets, each containing positive integers from one to $2^n$ and each with a random subset adding up to a target integer. The code found the solution ten times out of ten when $n=10$.

However, when I ran the code on ten random sets, each containing positive integers from one to $4^n$ and each with a random subset adding up to a target integer, the code found the solution only one time out of ten when $n=10$.

My question is: shouldn't it be the other way around? The sets with positive integers from one to $4^n$ have a lower density than those with positive integers from one to $2^n$, and the algorithm is supposed to have a higher probability of finding a solution on lower-density instances.

What might explain this output?
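
For concreteness, here is a minimal sketch of this kind of setup (not necessarily identical to my code): it follows Algorithm 3.105 of HAC, with the basis scaled by 2 so that all entries are integers, and it assumes the fpylll library for the LLL reduction.

```python
# Minimal sketch of subset sum via LLL (HAC Algorithm 3.105), assuming fpylll.
# The HAC basis contains entries of 1/2; scaling every vector by 2 preserves
# the lattice geometry and makes all entries integers.
import math
import random

from fpylll import IntegerMatrix, LLL


def subset_sum_lll(a, s):
    """Try to find x in {0,1}^n with sum(a[i] * x[i]) == s; return x or None."""
    n = len(a)
    # HAC takes m = ceil(sqrt(n)/2); any integer at least that large works
    # as the penalty weight on the last coordinate.
    m = math.isqrt(n) // 2 + 1
    B = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        B[i][i] = 2               # scaled unit vector 2*e_i
        B[i][n] = 2 * m * a[i]    # weight column (scaled by 2)
    B[n] = [1] * n + [2 * m * s]  # scaled target row (1, ..., 1, 2*m*s)
    A = IntegerMatrix.from_matrix(B)
    LLL.reduction(A)              # in-place LLL reduction
    # A solution x corresponds to a lattice vector with all first n
    # coordinates in {-1, +1} and last coordinate 0; LLL may produce
    # either that vector or its negation, so test both signs.
    for i in range(A.nrows):
        y = [A[i][j] for j in range(n + 1)]
        if y[n] != 0 or any(c not in (-1, 1) for c in y[:n]):
            continue
        for sign in (1, -1):
            x = [(1 + sign * c) // 2 for c in y[:n]]
            if sum(ai * xi for ai, xi in zip(a, x)) == s:
                return x
    return None


# The experiment from the question: ten instances with n = 10 and weights
# drawn from 1..2^n, each with a random subset fixing the target s.
if __name__ == "__main__":
    n, hits = 10, 0
    for _ in range(10):
        a = [random.randint(1, 2 ** n) for _ in range(n)]
        x_true = [random.randint(0, 1) for _ in range(n)]
        s = sum(ai * xi for ai, xi in zip(a, x_true))
        if subset_sum_lll(a, s) is not None:
            hits += 1
    print(f"solved {hits}/10 instances with weights up to 2^{n}")
```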

Score:4

Caveat: I'm not an expert on these types of attacks.

If a subset-sum instance is of the form $\sum_{i=1}^n a_i x_i = t$, i.e. $n$ is the length of the sum, then the density is defined to be $n / \log_2(\max_i a_i)$. It follows that

  1. the density of your first subset-sum is $\approx 10/n$
  2. the density of your second subset-sum is $\approx 5/n$.

This is lower density (as you claim). That said, both are actually higher density than is typically required for LLL to succeed. These notes of Peikert state that density $2/n$ is typically the threshold for things to start working.

So, based on the theory, I believe neither is guaranteed to work with high probability; instead, the threshold for things to work well should be weights of size somewhere around $2^{5n}\approx 32^n$ (for your parameterization of things).
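
As a quick sanity check of these numbers (a sketch; density here is computed with the definition above, taking 10 summands as in your experiments):

```python
# Density d = (number of summands) / log2(max weight), per the definition above.
def density(num_summands, log2_max_weight):
    return num_summands / log2_max_weight

n = 10
print(density(10, n))      # weights up to 2^n: density 10/n (= 1.0 at n = 10)
print(density(10, 2 * n))  # weights up to 4^n: density 5/n  (= 0.5 at n = 10)
# Reaching the ~2/n threshold requires 10 / log2(a_max) <= 2/n,
# i.e. log2(a_max) >= 5n: weights up to roughly 2^(5n) = 32^n.
```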

Craig Feinstein:
Actually, the results I got were only for $n=10$. When I went higher, the results got worse.
kodlu:
@CraigFeinstein, then I guess it might be an implementation problem? Just a blind guess.
Craig Feinstein:
It could be. I am going to go over things again today.
Craig Feinstein:
Note that there is no theory guaranteeing that LLL must find solutions with high probability on low-density instances. The 1985 paper by Lagarias and Odlyzko only claims, from experiments, that it happens to find solutions. I still have not been able to replicate those results; perhaps they were using a different implementation of the algorithm? I do not know.
Mark:
Yes there is, see the notes I have linked. LLL can provably be used to find a short vector of length within a factor $2^{n/2}$ of $\lambda_1(L)$. With high probability (for appropriate parameters) there is a single such vector, corresponding to the knapsack solution.
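Schematically (a rough sketch with the constants elided, assuming the embedded solution vector $x$ realizes $\lambda_1(L)$): if every lattice vector not proportional to $x$ is long, the first vector $b_1$ of the LLL-reduced basis has nowhere else to go:

$$\|b_1\| \le 2^{n/2}\lambda_1(L) = 2^{n/2}\|x\| \quad\text{and}\quad \|v\| > 2^{n/2}\|x\| \;\;\forall\, v \in L \setminus \{0, \pm x\} \quad\Longrightarrow\quad b_1 = \pm x.$$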
Craig Feinstein:
@Mark, you are correct. I should have qualified that I was only talking about the papers by Lagarias and Odlyzko, in which the density does not depend on $n$. I am only interested in those types of problems.
Craig Feinstein:
I am now wondering whether the problem was that I tested my program on too small values of $n$. The result in the Lagarias-Odlyzko paper is asymptotic, and they tested it on larger $n$.
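For example, something like this (reusing the hypothetical subset_sum_lll sketch from the question) would rerun the experiment at a fixed density as $n$ grows:

```python
# Sketch: fixed density ~1/5 (weights up to 2^(5n)) with growing n.
# Reuses the hypothetical subset_sum_lll() from the sketch in the question.
import random

for n in (10, 20, 30, 40):
    hits = 0
    for _ in range(10):
        a = [random.randint(1, 2 ** (5 * n)) for _ in range(n)]
        x_true = [random.randint(0, 1) for _ in range(n)]
        s = sum(ai * xi for ai, xi in zip(a, x_true))
        if subset_sum_lll(a, s) is not None:
            hits += 1
    print(f"n = {n}: solved {hits}/10")
```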