It is not even theoretically feasible.
$hash()$ functions take arbitrary data (for the sake of argument, let us imagine all inputs are 1024 bits) and output a 128-bit hash.
This means that, by the pigeonhole principle, each 128-bit output $y$ is guaranteed to have multiple 1024-bit preimages $x_1, \ldots, x_n$ where each $x_i$ hashes to $y$; on average, each output has $2^{1024}/2^{128} = 2^{896}$ of them. This fact is independent of any sort of computation.
However, even the best reversible computer, given $y$, can only return to you *some* $x_i$; it has no way of knowing which one was the original input.
This becomes significant in password-based authentication scenarios.
Let us say a user logs in with password $x_1$, and you store $hash(x_1) = y$ in your database. The database leaks, and an attacker, using $y$, knowledge of which hash function you used, and a reversible computer, recovers, say, $x_3$.
They can still get access to the account, as the server only compares hashes, and $hash(x_1) == y == hash(x_3)$.
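To make this concrete, here is a minimal sketch in Python. The hash is a hypothetical toy (MD5 truncated to one byte) purely so that a colliding $x_3$ can be found by brute force; with a real 128-bit hash the search would be astronomically expensive, but the principle is identical.

```python
import hashlib

def toy_hash(data: bytes) -> bytes:
    # Hypothetical toy hash: MD5 truncated to 1 byte (256 possible outputs),
    # so that preimages can be found by brute force for demonstration.
    return hashlib.md5(data).digest()[:1]

# The server stores only y = hash(x1), never x1 itself.
x1 = b"correct horse battery staple"
y = toy_hash(x1)

def login(attempt: bytes) -> bool:
    # All the server can do is compare hashes.
    return toy_hash(attempt) == y

# The attacker, knowing y and the hash function, searches for *any* preimage.
x3 = next(bytes([i, j]) for i in range(256) for j in range(256)
          if toy_hash(bytes([i, j])) == y)

assert x3 != x1   # a different input entirely...
assert login(x3)  # ...yet it authenticates, because hash(x3) == y == hash(x1)
```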
However, in compression this would be useless: if you compress $x_1$, you need to get exactly $x_1$ back on decompression, not $x_3$.
Regarding your reddit comment: it is also not possible to encode *which* $x$ the output should decompress into, as that would require already knowing exactly what $x$ is, which would mean your compression program only works for one value and is thus not a useful one.
Thought process
It is useful to have a basic idea of information theory.
Compression is a way to exploit the fact that the 1024-bit data in question may not actually hold 1024 bits of information. For example, consider the compression scheme "run-length encoding".
The string $1\ldots1$ (1024 times) will be compressed to $(1, 1024)$, and the string $1\ldots1$ (512 times) followed by $0\ldots0$ (512 times) will be compressed to $(1, 512)(0, 512)$. Notice that the latter compressed value is larger.
In the first one, the "information" stored in the string is simply "how many times 1 repeats". In the second case, however, there is more information: how many times 1 repeats *and* how many times 0 repeats.
You can never recover lost information, so in order for decompression to work, the compression scheme must (and does) output a longer string for the input that carries more information.
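Here is a minimal run-length encoder/decoder in Python illustrating both points: decompression recovers the input exactly, and the two-run string necessarily compresses to a longer output than the one-run string.

```python
def rle_encode(s: str) -> list[tuple[str, int]]:
    # Store each run as (symbol, repeat count).
    runs: list[tuple[str, int]] = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    # No information was lost, so the original string comes back exactly.
    return "".join(ch * n for ch, n in runs)

one_run = "1" * 1024
two_runs = "1" * 512 + "0" * 512

assert rle_encode(one_run) == [("1", 1024)]
assert rle_encode(two_runs) == [("1", 512), ("0", 512)]  # more runs, more bits
assert rle_decode(rle_encode(one_run)) == one_run         # lossless round trip
assert rle_decode(rle_encode(two_runs)) == two_runs
```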
As a side note that may be interesting to you, machine learning works in much the same way. The dataset $(1, 2), (4, 8), \ldots$ may be huge, but the information contained in it is simply the linear equation $y = 2x$. Linear regression is one algorithm that is able to compress ("learn") this dataset and output the much tinier linear equation.
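A minimal sketch of that idea, using a hypothetical dataset generated by $y = 2x$ and fitting a one-parameter line through the origin by least squares:

```python
# Hypothetical dataset: a thousand (x, y) pairs, all generated by y = 2x.
xs = list(range(1, 1001))
ys = [2 * x for x in xs]

# Least-squares slope for a line through the origin: sum(x*y) / sum(x*x).
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(slope)  # 2.0 -- the whole dataset "compressed" into a single parameter
```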
In hashing, the constant-size output means information is necessarily discarded, which makes it an orthogonal concept to compression in these regards.