Score:2

I have an idea, but I don't know if it will work. For an appropriate $$p$$ it is easy to find $$n$$ linearly independent $$\boldsymbol{x_i}$$. Then we compute the inner products between the $$\boldsymbol{x_i}$$. I think this information is enough to recover $$\boldsymbol{v_1},...,\boldsymbol{v_n}$$, because in matrix form it can be written as $$\boldsymbol{X}=\boldsymbol{T}\boldsymbol{V}$$, where $$\boldsymbol{T}\in \{-1,1\}^{n \times n}$$ and $$\boldsymbol{V}$$ denotes $$[\boldsymbol{v_1},...,\boldsymbol{v_n}]$$. We need $$n^2$$ bits to store $$\boldsymbol{T}$$, and we have $$\frac{n^2-n}{2}$$ inner products, each of which needs $$\log(n-1)$$ bits to represent (because $$\boldsymbol{X}$$ is invertible and $$\left\langle\boldsymbol{x_i},\boldsymbol{x_j}\right\rangle \equiv n \pmod 2$$), so the information in the inner products is $$\log(n-1)\cdot\frac{n^2-n}{2}$$ bits. But I don't know whether there is redundancy in it, or how to eliminate it. I think the information is enough to recover $$\boldsymbol{T}$$ up to a signed permutation when $$n$$ is large enough. Then by taking the inverse we can get $$\boldsymbol{V}$$ up to a signed permutation.
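As a quick sanity check on the parity claim: since $$\boldsymbol{V}$$ is orthogonal, $$\left\langle\boldsymbol{x_i},\boldsymbol{x_j}\right\rangle$$ equals the inner product of the corresponding sign vectors, which is $$n$$ minus twice the number of coordinates where they disagree. A minimal numpy sketch (my own illustration, with the sign vectors as rows of `T`; $$\boldsymbol{V}$$ is omitted since it does not affect inner products):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Rows of T are the sign vectors; with V orthogonal, <x_i, x_j> = <t_i, t_j>.
T = rng.choice([-1, 1], size=(n, n))
G = T @ T.T  # Gram matrix of the samples
# <t_i, t_j> = n - 2 * (#coordinates where t_i and t_j disagree), so every
# entry of G has the same parity as n.
print(bool(np.all(G % 2 == n % 2)))  # True
```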

Assume that $$\boldsymbol{T'}$$ is the matrix we found, so
$$\boldsymbol{T}=\boldsymbol{T'}\boldsymbol{S}$$, where $$\boldsymbol{S}$$ is a signed permutation, and therefore
$$\boldsymbol{T'^{-1}}\boldsymbol{X}=\boldsymbol{S}\boldsymbol{V}.$$
As for how to use the information to solve for $$\boldsymbol{T'}=[\boldsymbol{t'_1},...,\boldsymbol{t'_n}]$$, I tried a heuristic algorithm: without loss of generality, let $$\boldsymbol{t'_1}=[1,...,1]^{t}$$; then we know the number of $$1$$s in $$\boldsymbol{t'_2}$$ from $$\left\langle\boldsymbol{x_1},\boldsymbol{x_2}\right\rangle$$, so we can let $$\boldsymbol{t'_2}=[1,1,1,...,1,-1,...,-1]$$. In a similar way, we can compute $$\boldsymbol{t'_i}$$ from $$\left\langle\boldsymbol{x_j},\boldsymbol{x_i}\right\rangle$$ for $$1\leq{j}<{i}$$. But it does not work even in low dimension.
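For concreteness, here is a small numpy sketch of the first steps of this heuristic (my own assumptions: the sign vectors are the rows of $$\boldsymbol{T}$$, and the known data is the Gram matrix $$\boldsymbol{G}=\boldsymbol{X}\boldsymbol{X}^t=\boldsymbol{T}\boldsymbol{T}^t$$; column sign flips and column permutations of $$\boldsymbol{T}$$ leave $$\boldsymbol{G}$$ unchanged, which is what justifies the "without loss of generality" steps):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
T = rng.choice([-1, 1], size=(n, n))  # secret sign matrix (rows are the t_i)
G = T @ T.T                           # known data: equals X X^t since V is orthogonal

# Column sign flips / permutations of T preserve G, so WLOG t'_1 = [1,...,1].
t1 = np.ones(n, dtype=int)

# <t'_1, t'_2> = (#(+1) in t'_2) - (#(-1) in t'_2) = 2k - n, so k is determined:
k = (G[0, 1] + n) // 2
t2 = np.array([1] * k + [-1] * (n - k))
print(t1 @ t2 == G[0, 1])  # True: the constructed pair matches the observed data
# From t'_3 onward the earlier columns split the coordinates into blocks, and the
# inner products only fix the number of +1s per block; the greedy choice within
# those blocks is what can go wrong.
```

One plausible reading of the failure you observe is that, past the first few columns, the blocks refine faster than the constraints, so a locally consistent greedy choice need not extend to a globally consistent $$\boldsymbol{T'}$$.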

What is the question? Why is this problem useful?
Sorry, the problem should give a basis of $[v_1,...,v_n]\mathbb{Z}^n$.
Sorry for the digression, but per what convention does $[n]$ mean the integers from $1$ to $n$?
@fgrieu it shows up in discrete math "a fair bit", see e.g. this [math.SE post](https://math.stackexchange.com/questions/1460930/notation-for-the-set-1-2-k) and its linked questions for references to books. But once (or rather, if!) you leave that area behind, I wouldn't be surprised if one could build a career in cryptography without encountering this notation anymore.
I think that annoying notation originated in CS.
Score:4

Edited in response to clarification

You are assuming that you have

• A basis of your lattice (equivalently, for $$\mathbf{V} = [\vec v_1,\dots, \vec v_n]$$ and some $$\mathbf{U}\in\mathsf{GL}_n(\mathbb{Z})$$, a matrix $$\mathbf{V}\mathbf{U}$$). In your particular case, you can equivalently write this basis as $$\mathbf{O}\mathbf{U}$$, where $$\mathbf{O}$$ is orthogonal.
• Samples $$\vec t_i = \mathbf{V}\vec r_i$$, where $$\vec r_i\in\{-1,1\}^n$$.

You then want to recover the basis $$\mathbf{V}$$ of the lattice.
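For reference, a small numpy sketch of an instance generator under this formulation (the sampling details are my own assumptions: $$\mathbf{V}$$ is drawn as a Haar-random orthogonal matrix via QR decomposition of a Gaussian matrix, with a sign fix on the diagonal of $$\mathbf{R}$$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
m = 5  # number of samples

# Haar-random orthogonal V: QR of a Gaussian matrix, sign-corrected columns.
Q, R = np.linalg.qr(rng.standard_normal((n, n)))
V = Q * np.sign(np.diag(R))

# Samples t_i = V r_i with r_i uniform over {-1, 1}^n (columns of coeffs).
coeffs = rng.choice([-1, 1], size=(n, m))
samples = V @ coeffs
# V preserves norms, so every sample has length sqrt(n).
print(np.allclose(np.linalg.norm(samples, axis=0), np.sqrt(n)))  # True
```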

This is (sort of) a variant of the (inhomogeneous) SIS problem, where given a random lattice $$L$$ and target $$\vec t$$, you want to find a short linear combination $$\vec l$$ such that $$L\vec l = \vec t$$. The differences are

1. your $$\vec r$$ can't be captured solely by a "shortness" constraint (if $$\vec r\in\{-1,0,1\}^n$$, it could, via the constraint $$\lVert \vec r\rVert_\infty \leq 1$$; this does not seem like a big deal to me though)

2. your random lattice is from a different distribution than normal (rotation of $$\mathbb{Z}^n$$ rather than application of $$q$$-ary construction A to a random $$q$$-ary code)

3. you admit multiple samples. If $$p(n)$$ is fixed a priori, we could rewrite your problem as a single-sample problem though (essentially by taking direct sums of your lattice with itself many times).

Of these, #2 is the biggest thing that could make this problem unexpectedly easy. I don't know how stable the SIS problem is to varying the underlying distribution of lattices.

Finally, I will briefly say that one must be very careful when proposing "hard" random lattice distributions that rely on lattices with an orthonormal basis. This is because in the algebraically structured setting there can be devastating attacks based on the Gentry-Szydlo algorithm. You don't indicate that you are interested in this setting, but I thought it best to include a healthy word of caution.

I guess the problem does not provide the coefficient vector $r$.
Yes, the problem does not provide $r$.
Hi Mark-- Gentry-Szydlo seems unrelated to this problem to me. (I don't see how G-lattices apply.)
They are currently unrelated, but if the OP decides that "things seem fine in the unstructured setting, so I will work with an algebraically structured form of this for efficiency" then Gentry-Szydlo might apply.
Guess you've gotta dodge G-S in the structured setting.. =)
Score:3

I think that there will be (with high probability) enough information to solve the question, but I cannot yet think of a solution that takes less than exponential time in $$n$$.

Given $$\mathbf x_i$$ and $$\mathbf x_j$$, I can form the sum and difference averages $$\frac12(\mathbf x_i+\mathbf x_j)$$ and $$\frac12(\mathbf x_i-\mathbf x_j)$$, whose entries are all in $$\{-1,0,1\}$$. I can even work out how many non-zero entries there are, as the length of the sum/difference vector will be $$\sqrt w$$, where $$w$$ is the number of non-zeroes. With high probability, I can find a collection of these that form an $$n$$-dimensional parallelepiped with volume 1 and which therefore generate the same lattice as the $$\mathbf v_i$$ (I can even bias my choice so that my collection consists of vectors of somewhat low weight). I then solve the short vector problem for this basis to find one or more of the $$\mathbf v_i$$, eliminate each one from the $$\mathbf x_j$$s by checking whether adding or subtracting $$\mathbf v_i$$ makes the length $$\sqrt{n-1}$$, and iterate.
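A quick numpy illustration of the weight computation (a toy setting of my own with $$\mathbf V = \mathbf I$$, so the $$\mathbf x_i$$ are literal sign vectors; for a general orthogonal $$\mathbf V$$ the lengths are the same, so the weight count is unchanged):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
X = rng.choice([-1, 1], size=(n, n)).astype(float)  # rows are the samples (V = I)

i, j = 0, 1
s = 0.5 * (X[i] + X[j])  # entries in {-1, 0, 1}: non-zero where x_i, x_j agree
d = 0.5 * (X[i] - X[j])  # entries in {-1, 0, 1}: non-zero where they disagree
# Squared length = number of non-zero entries, so the weight w is recoverable
# from the norm alone, even without knowing V.
w_s = int(round(s @ s))
w_d = int(round(d @ d))
print(w_s + w_d == n)  # True: each coordinate is non-zero in exactly one of s, d
```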

Our best methods for solving short vector problems take time exponential in $$n$$, but this is an exceptionally nice lattice where we are guaranteed an orthogonal basis and multiple short vectors of the same length. There might be some advantage to this situation, but I can't think of any subexponential method.

Being guaranteed multiple short vectors of the same length doesn't really help with these kinds of things iirc. When talking about local minima of the packing density function on the space of lattices ("perfect lattices"), this is a common reduction that iirc is claimed WLOG (to "well-rounded lattices"). Additionally, ideal lattices have this property, and I don't believe it has been leveraged for attacks.
additionally, orthogonality doesn't always help. The lattice isometry problem (given lattice bases $\mathbf{B}_1, \mathbf{B}_2$, find an orthogonal matrix $\mathbf{O}$ such that $\mathbf{O}\mathbf{B}_1 = \mathbf{B}_2$) is not known to be significantly easier in this setting. There is one non-trivial thing you can do using "characteristic vectors" (roughly the result of [this paper](https://arxiv.org/abs/math/9906019)), but it doesn't seem to significantly speed things up; see e.g. [this paper](https://arxiv.org/abs/1910.03838).