Bigger errors in LWE make the assumption more secure, but they also make it harder (and eventually impossible!) to build cryptography from it.
First, recall that the LWE assumption can be phrased as the hardness of distinguishing samples of the form
$$(\vec a, \langle \vec a, \vec s\rangle + e)\in\mathbb{Z}_q^n\times\mathbb{Z}_q$$
from samples of the form
$$(\vec a, u)\in\mathbb{Z}_q^n\times\mathbb{Z}_q,$$
where
- $\vec a\in\mathbb{Z}_q^n$ is uniform
- $\vec s\in\mathbb{Z}_q^n$ is a fixed secret vector (typically uniform, or with coordinates drawn from the same distribution as the error), i.e. it is consistent across all samples
- $e\in\mathbb{Z}_q$ is "small". Say it is drawn from (a discretization of) $\mathcal{N}(0, \sigma_0^2)$.
- $u\in\mathbb{Z}_q$ is uniform, independent of all the above quantities.
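As a concrete illustration, here is a minimal sketch of the two distributions in Python. All parameters ($n=16$, $q=4096$, $\sigma_0=3$) are toy values chosen for readability, not security, and a rounded continuous Gaussian stands in for a proper discrete Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma0 = 16, 4096, 3.0    # toy parameters, far too small for security

s = rng.integers(0, q, size=n)  # fixed secret, uniform over Z_q^n

def lwe_sample():
    a = rng.integers(0, q, size=n)            # uniform a in Z_q^n
    e = int(np.rint(rng.normal(0, sigma0)))   # small rounded-Gaussian error
    return a, (int(a @ s) + e) % q

def uniform_sample():
    a = rng.integers(0, q, size=n)
    u = int(rng.integers(0, q))               # independent uniform u in Z_q
    return a, u
```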
Note that in isolation (i.e. when $\vec a$ is not given to an adversary) $b := \langle \vec a, \vec s\rangle + e$ can be shown to be very close to uniform in a statistical sense. So the main novelty of the LWE assumption is that even when this $\vec a$ is revealed, it is still hard to distinguish $\langle \vec a, \vec s\rangle + e$ from uniform without access to $\vec s$.
Note that given access to a sample of the above form, an adversary may add independent noise $\mathcal{N}(0, \sigma_1^2)$ to the second component.
This will
- Uniform Case: Yield uniformly random samples, and
- LWE Case: Yield LWE samples with error drawn from $\mathcal{N}(0, \sigma_0^2+\sigma_1^2)$.
i.e. solving large-error LWE is no easier than solving small-error LWE.
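A sketch of this re-randomization step, in the same toy setup as above (the name `flood` is just illustrative, and a rounded Gaussian again stands in for a discrete one):

```python
import numpy as np

rng = np.random.default_rng(1)
q, sigma1 = 4096, 5.0

def flood(a, b):
    # Re-randomize by adding fresh, independent (rounded) Gaussian noise
    # to the second component only.
    e1 = int(np.rint(rng.normal(0, sigma1)))
    return a, (b + e1) % q

# If (a, b) was uniform, (a, b + e1) is still uniform.
# If (a, b) was an LWE sample with error ~ N(0, sigma0^2), the new error
# e + e1 is ~ N(0, sigma0^2 + sigma1^2): a large-error LWE sample.
```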
So why do we want to keep the LWE error small?
Typically, to encrypt with LWE, we encode messages $m\in\{0,1\}$ via the error-tolerant encoding $m\mapsto (q/2)m$.
This encoding is chosen so that we can decode (by rounding to the nearest multiple of $q/2$)
$$(q/2)m+e\mapsto m$$
provided that $|e| < q/4$.
So if our error is too large, we gain security but can no longer decrypt correctly.
In particular, if we added uniform noise to the second component of $(\vec a, b)$, we would be information-theoretically secure, yet we also could no longer decrypt correctly.
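Here is a minimal sketch of this encoding and the rounding-based decoder, with the same toy modulus $q = 4096$ as above (`encode`/`decode` are illustrative names, not a fixed API):

```python
q = 4096

def encode(m):
    # m in {0, 1} -> 0 or q/2
    return (q // 2) * m

def decode(x):
    # Round to the nearest multiple of q/2 (mod q): anything within
    # q/4 of 0 decodes to 0, otherwise to 1.
    x = x % q
    return 0 if min(x, q - x) < q // 4 else 1

assert decode(encode(1) + 10) == 1      # small error: decodes correctly
assert decode(encode(0) - 10) == 0
assert decode(encode(0) + q // 4) == 1  # error of size q/4: wrong bit
```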
Error often grows fairly quickly with homomorphic operations.
So starting with small error can be quite important for building a cryptosystem with more homomorphic capabilities.
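As an illustration, combining the pieces above into a toy secret-key Regev-style scheme shows how errors accumulate under homomorphic addition (again a sketch with toy parameters, not a real construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma0 = 16, 4096, 3.0
s = rng.integers(0, q, size=n)

def enc(m):
    # Toy secret-key Regev-style encryption of a bit m.
    a = rng.integers(0, q, size=n)
    e = int(np.rint(rng.normal(0, sigma0)))
    return a, (int(a @ s) + e + (q // 2) * m) % q

def dec(a, b):
    x = (b - int(a @ s)) % q
    return 0 if min(x, q - x) < q // 4 else 1

def add(ct1, ct2):
    # Homomorphic addition: the underlying errors add as well.
    (a1, b1), (a2, b2) = ct1, ct2
    return ((a1 + a2) % q, (b1 + b2) % q)

# Summing k fresh ciphertexts accumulates error with standard deviation
# about sqrt(k) * sigma0 (worst case k times the error bound); correctness
# holds only while this stays below q/4.
ct = enc(0)
for _ in range(99):
    ct = add(ct, enc(0))
assert dec(*ct) == 0   # 100 errors of size ~3 still sum well below q/4
```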