One can say something stronger than Daniel's answer.
Let $D_1$ be the joint distribution of $(A, Ar)$, where
- $A$ is sampled uniformly at random, and
- $r$ is random (technically it suffices for it to have "high enough min-entropy", i.e. it doesn't have to be uniform, but it might as well be chosen this way).
Let $D_2$ be the joint distribution of $(A, u)$, where
- $A$ is sampled uniformly at random, and
- $u$ is uniformly random, independently of $A$.
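To make the two distributions concrete, here is a minimal sampling sketch in Python/NumPy, using the convention $A \in \mathbb{Z}_q^{n \times m}$ and a binary $r$. The specific parameter values and the choice of binary $r$ are illustrative assumptions only (toy-sized, far below cryptographic ranges).

```python
import numpy as np

rng = np.random.default_rng()

# Toy parameters, chosen so that m comfortably exceeds n*log2(q);
# real cryptographic parameters are far larger.
n, m, q = 4, 32, 17

def sample_D1():
    """One sample (A, A r mod q): A uniform over Z_q^{n x m}, r uniform over {0,1}^m."""
    A = rng.integers(0, q, size=(n, m))
    r = rng.integers(0, 2, size=m)   # binary r: high min-entropy, not uniform mod q
    return A, (A @ r) % q

def sample_D2():
    """One sample (A, u): A uniform over Z_q^{n x m}, u uniform over Z_q^n, independent of A."""
    A = rng.integers(0, q, size=(n, m))
    u = rng.integers(0, q, size=n)
    return A, u
```

One could, for instance, histogram the second component over many samples from each distribution and compare; this is only an illustration, not a proof of anything.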
Then, for large enough $m$, the distributions $D_1$ and $D_2$ are statistically close (their statistical distance is negligible). As a result, for the values of $m$ used in cryptography, no distinguisher, even a computationally unbounded one, can tell samples from the two distributions apart with non-negligible advantage.
This result is typically known as the "Leftover Hash Lemma" (LHL). It's worth mentioning that it's only known to hold for algebraically unstructured LWE; for Ring Learning with Errors there are some things you can do, but they come with many more caveats.
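For reference, a common quantitative form of the plain-LWE statement (assuming $q$ prime, $A$ uniform over $\mathbb{Z}_q^{n \times m}$, and $r$ uniform over $\{0,1\}^m$; the exact constants vary between sources) bounds the statistical distance as
$$\Delta\big((A,\, Ar \bmod q),\ (A,\, u)\big) \;\le\; \frac{1}{2}\sqrt{\frac{q^n}{2^m}},$$
which is at most $2^{-\lambda}$ once $m \geq n\log_2 q + 2\lambda$.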
As a result, you cannot hope to map from $(A, Ar)$ to $r$. If you could, you could distinguish the two distributions. As the distributions are statistically close (by the LHL), this is impossible.
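To see the impossibility quantitatively, here is a sketch under the same illustrative assumptions as above ($r$ uniform over $\{0,1\}^m$). For any deterministic function $f$,
$$\Pr_{A,\,r}\big[f(A,\, Ar \bmod q) = r\big] \;=\; \mathbb{E}_A\!\left[\sum_{b \in \mathbb{Z}_q^n} \Pr_r\big[Ar = b \,\wedge\, f(A,b) = r\big]\right] \;\le\; \frac{q^n}{2^m},$$
since for each fixed $(A, b)$ the inner event forces $r$ to equal the single value $f(A, b)$, which happens with probability at most $2^{-m}$. (A randomized algorithm is an average over deterministic functions, so the same bound applies.) In the LHL parameter regime $m \geq n\log_2 q + 2\lambda$, this is at most $2^{-2\lambda}$: even an unbounded algorithm recovers $r$ only with negligible probability.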
Note that these LHL-type arguments do have a downside, though: they require the parameter $m \geq \Omega(n\log q)$ to be somewhat large. There are other ways of building public-key encryption from lattices that avoid this argument (they "use the LWE assumption again" instead), allowing one to choose $m$ smaller. This is to say that LHL arguments aren't necessarily optimal, and in fact they were rarely used in the NIST PQC competition.
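As a rough sense of scale (purely illustrative numbers, not the parameters of any particular scheme): with $n = 512$ and $q = 3329$,
$$n \log_2 q \approx 512 \times 11.7 \approx 6000,$$
so an LHL-based argument pushes $m$ into the thousands, whereas constructions that reuse the LWE assumption can keep $m$ on the order of $n$.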