Yes, it is still hard via a simple hybrid argument.
Essentially, for $i\in\{0,1,\dots,k\}$ define the "mixed distribution"
$$H_i = (A, A\vec s_1 + \vec e_1,\dots, A\vec s_i + \vec e_i, \vec u_{i+1},\dots, \vec u_k).$$
Then, distinguishing $H_i$ from $H_{i+1}$ reduces to the LWE problem: given an LWE challenge $(A, \vec b)$, place $\vec b$ in slot $i+1$, generate the first $i$ slots as honest LWE samples, and fill the remaining slots with uniform vectors.
When using this to analyze things concretely, the triangle inequality bounds the advantage of distinguishing $H_0$ from $H_k$ by $k$ times the advantage of an LWE distinguisher:
$$\mathsf{Adv}(H_0, H_k) \leq \sum_{i=0}^{k-1}\mathsf{Adv}(H_i, H_{i+1}) \leq k\cdot \mathsf{Adv}_{\mathsf{LWE}}.$$
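To make the reduction concrete, here is a minimal numpy sketch. All parameters are toy values, the error sampled from $\{-2,\dots,2\}$ is a crude stand-in for a proper discrete Gaussian or centered binomial, and the function names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, k = 16, 32, 3329, 4                 # toy parameters

def lwe_challenge():
    """Return (A, b, kind): b is either A s + e or uniform."""
    A = rng.integers(0, q, size=(n, m))
    if rng.random() < 0.5:
        s = rng.integers(0, q, size=m)
        e = rng.integers(-2, 3, size=n)      # crude small-error stand-in
        return A, (A @ s + e) % q, "lwe"
    return A, rng.integers(0, q, size=n), "uniform"

def embed(A, b, i):
    """Produce a sample of H_{i+1} if b is LWE, and of H_i if b is uniform."""
    slots = []
    for _ in range(i):                       # slots 1..i: honest LWE samples
        s = rng.integers(0, q, size=m)
        e = rng.integers(-2, 3, size=n)
        slots.append((A @ s + e) % q)
    slots.append(b)                          # slot i+1: the challenge vector
    for _ in range(k - i - 1):               # slots i+2..k: uniform vectors
        slots.append(rng.integers(0, q, size=n))
    return (A, *slots)
```

Any advantage in telling $H_i$ from $H_{i+1}$ apart is then, verbatim, an advantage against the LWE challenge.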
This argument (and, more generally, the technique of reusing $A$) dates back at least to *Lossy Trapdoor Functions and Their Applications* by Peikert and Waters (2008).
It has some mild plausible benefits, namely:
- one could in principle standardize a single matrix $A$ that all users use (similar to how DDH groups were standardized), or even
- one could "reuse" a single $A$ over a relatively short, but still non-trivial time period, say 1 hour.
It isn't generally appealed to much anymore though, for two main reasons:
- one can get comparable reductions in the size of $A$ by appealing to structured versions of LWE (while also improving the efficiency of the relevant operations), and
- in practice one does not often send $A\in\mathbb{Z}_q^{n\times m}$ at a cost of $nm\log_2 q$ bits; this is large, and is what leads one to search for amortization arguments like the one you propose. Instead, one simply sends a seed in $\{0,1\}^\lambda$, which is expanded into a pseudorandom matrix $A$ at the destination using an extendable-output function (XOF). Most NIST PQC candidates use this approach, as sketched below.
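A minimal sketch of this seed-expansion idea, using only the standard library's SHAKE-128 and numpy; note that reducing raw bytes mod $q$ as below introduces a small bias, which real schemes (e.g. Kyber) avoid via rejection sampling:

```python
import hashlib
import numpy as np

def expand_A(seed: bytes, n: int, m: int, q: int) -> np.ndarray:
    """Expand a short seed into a pseudorandom matrix A over Z_q with an XOF."""
    stream = hashlib.shake_128(seed).digest(2 * n * m)       # 2 bytes/entry
    words = np.frombuffer(stream, dtype=np.uint16).astype(np.int64)
    return words.reshape(n, m) % q                           # toy: biased mod q

A = expand_A(b"\x00" * 32, n=16, m=32, q=3329)   # both parties recompute A
```

Sending the 32-byte seed instead of $A$ itself is what makes the amortization question largely moot in practice.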
It is also worth mentioning that there are a few practical reasons why the above idea of a "standardized LWE instance" is perhaps unwise over long time-scales, namely:
- it opens you up to precomputation attacks (similarly to standardized DDH groups, cf. the Logjam attack), and, more importantly,
- one can construct "backdoored LWE instances" --- roughly, a distribution of random matrices $A$ that is computationally indistinguishable from uniformly random, but has a "trapdoor" that allows one to break LWE.
The backdoored LWE instance is relatively straightforward (unfortunately, I do not remember whom to attribute it to).
Recall that NTRU key generation produces a public key $h$ and a secret key $f$ such that $hf = g$ is "small".
By using an appropriate "matrix" form of things (made concrete below the list), we get matrices $H, F$ such that:
- $HF = G$ is small, and
- $H$ is computationally indistinguishable from uniformly random.
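Concretely, the standard instantiation works over $R_q = \mathbb{Z}_q[x]/(x^n - 1)$: take $H$, $F$, $G$ to be the circulant matrices of $h$, $f$, $g$, where column $j$ holds the coefficient vector of $h\cdot x^j$, so that matrix products track ring products and $HF$ is the matrix of $hf = g$:
$$H = \begin{pmatrix} h_0 & h_{n-1} & \cdots & h_1 \\ h_1 & h_0 & \cdots & h_2 \\ \vdots & \vdots & \ddots & \vdots \\ h_{n-1} & h_{n-2} & \cdots & h_0 \end{pmatrix}.$$
(For NTRU variants over $x^n + 1$ one uses the negacyclic analogue.)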
Then, if we use $H^t$ as the random matrix of an LWE instance, i.e. get a sample $(H^t, H^t \vec s + \vec e)$, we can easily break the LWE assumption for this matrix, as $F^t (H^t \vec s + \vec e) = (HF)^t \vec s + F^t \vec e = G^t \vec s + F^t \vec e$ is "small" (I believe this requires the secret $\vec s$ to itself be short, as in the normal form of LWE). This is all with the matrix $H$ being computationally indistinguishable from random under NTRU, i.e. this backdooring of $H$ is hard to detect.
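Here is a toy demonstration of the backdoor (sympy + numpy). Everything is illustrative: tiny $n$ and $q$, ternary $f$, $g$, a crude threshold of $q/4$, and, importantly, a short secret $\vec s$ as in normal-form LWE. Since circulant matrices commute, the transposes above are dropped for readability.

```python
import numpy as np
from sympy import Matrix

rng = np.random.default_rng(1)
n, q = 8, 257                                # tiny toy parameters

def circulant(v):
    """Matrix of multiplication by v(x) in Z_q[x]/(x^n - 1)."""
    return Matrix(n, n, lambda i, j: int(v[(i - j) % n]))

def mod_q(M):
    return M.applyfunc(lambda x: x % q)

# Backdoored "random" matrix: H = G F^{-1} mod q, so that H F = G is small.
while True:
    f = rng.integers(-1, 2, size=n)          # ternary f
    try:
        F_inv = circulant(f).inv_mod(q)
        break
    except ValueError:                       # f not invertible mod q: resample
        pass
g = rng.integers(-1, 2, size=n)              # ternary g
F, G = circulant(f), circulant(g)
H = mod_q(G * F_inv)

def centered(M):
    """Lift entries mod q to the centered range (-q/2, q/2]."""
    a = np.array(mod_q(M).tolist(), dtype=np.int64)
    return np.where(a > q // 2, a - q, a)

def backdoor_distinguisher(b):
    # If b = H s + e with short s, e, then F b = G s + F e is short mod q.
    return int(np.abs(centered(F * b)).max()) < q // 4

s = Matrix([int(x) for x in rng.integers(-1, 2, size=n)])   # short secret
e = Matrix([int(x) for x in rng.integers(-1, 2, size=n)])   # short error
b_lwe = mod_q(H * s + e)
b_uni = Matrix([int(x) for x in rng.integers(0, q, size=n)])
print(backdoor_distinguisher(b_lwe), backdoor_distinguisher(b_uni))
# True False (with high probability over the randomness)
```

For a uniform $\vec b$, $F\vec b$ is uniform (as $F$ is invertible mod $q$), so all $n$ centered entries landing below $q/4$ is unlikely, while for an LWE $\vec b$ the test always passes at these parameters.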