This is more an answer to why the uniqueness of the sums affects the size so much that the case for $52485332$ becomes trivial (it is way too long for a comment).
When all sums must be unique, they must result in different integers.
Because there are $500^{1000}$ possible selections (one value from each of the $1000$ sets of $500$ values), there are also $500^{1000}$ different integer results. The smallest possible case would be all integers from $0$ to $500^{1000}-1$.
For example,
$S_1 = \{0, 1, 2, ..., 499\}$
$S_2 = \{0, 500, 1000, ..., 249500\}$
$S_3 = \{0, 250000, 500000, ..., 124750000\}$
...
$S_{1000} = \{0, 500^{999}, 2\cdot500^{999}, ..., 499\cdot500^{999}\}$
would be a way to ensure the uniqueness of the results (each sum is then just a number written in base $500$). As you can see, the numbers get really large really fast.
In this particular example it is easy to find the result (just always choose, from the last set to the first, the largest number that still fits). Even most numbers in $S_3$ are greater than $52485332$ and can therefore be ignored.
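Just to make "choose the largest number that still fits, from last to first" concrete, here is a small Python sketch (the function and variable names are my own, not part of the question); for these particular sets the recovery is nothing more than writing the target in base $500$:

```python
def decompose(target, num_sets=1000, base=500):
    """Pick one element from each S_i (the multiples of base**(i-1))
    so that the picks sum to `target`, always taking the largest
    element that still fits, from the last set down to the first."""
    assert 0 <= target < base ** num_sets
    remaining = target
    picks = [0] * num_sets
    for i in range(num_sets, 0, -1):       # S_1000 down to S_1
        weight = base ** (i - 1)
        digit = remaining // weight        # index of the largest element that still fits
        picks[i - 1] = digit * weight
        remaining -= picks[i - 1]
    assert remaining == 0 and sum(picks) == target
    return picks

# Only the first three sets contribute anything for this target, because
# 52485332 < 500**3, so every set from S_4 upwards contributes 0.
print([p for p in decompose(52485332) if p != 0])   # [332, 235000, 52250000]
```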
You would probably want relatively random values in your sets. In that case the range of the values has to be at least slightly larger.
However, it is highly unlikely that any value is lower than or equal to $52485332$ (when you uniformly choose $500000$ values out of a range of size $500^{1000}$).
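To put a rough number on "highly unlikely", a simple union bound (assuming each of the $500000$ values is, at least marginally, uniform on $\{0, 1, \dots, 500^{1000}-1\}$) gives

$$\Pr[\text{at least one value} \le 52485332] \;\le\; 500000 \cdot \frac{52485333}{500^{1000}} \;<\; 10^{-2685}.$$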
Dynamic programming, as @poncho suggested, really only works for small numbers, and its performance is not that much better than exhaustive search (the difference is only linear in the number of sets): because the sub-sums that could be reused are themselves unique, the usual advantage of not re-examining other possibilities is not there.
The runtime should be of the same order as exhaustive search. The only improvement is to abort a branch when the values are too large or too small to still reach the target, but for reasonable targets that is not much of an advantage.
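For illustration (this is my own sketch, not an implementation taken from the question), the exhaustive search with exactly that abort looks like this: recurse set by set and drop a branch as soon as the remaining target can no longer be reached with the smallest or largest possible remaining picks.

```python
def find_selection(sets, target):
    """Return one element per set summing exactly to `target`, or None."""
    n = len(sets)
    # Smallest and largest sum still achievable from set i onwards.
    suffix_min = [0] * (n + 1)
    suffix_max = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_min[i] = suffix_min[i + 1] + min(sets[i])
        suffix_max[i] = suffix_max[i + 1] + max(sets[i])

    def search(i, remaining):
        if i == n:
            return [] if remaining == 0 else None
        # Abort: the remaining target can no longer be reached from here.
        if not suffix_min[i] <= remaining <= suffix_max[i]:
            return None
        for value in sets[i]:
            rest = search(i + 1, remaining - value)
            if rest is not None:
                return [value] + rest
        return None

    # Recursion depth equals the number of sets; for 1000 sets an iterative
    # version (or a higher recursion limit) would be needed.
    return search(0, target)

print(find_selection([[1, 2, 3], [10, 20, 30], [100, 200]], 123))   # [3, 20, 100]
```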
One could easily reduce the subset sum problem or the knapsack problem to this one by just using the same set as many times as the number you want to sum to. The problem with this is that it is not a polynomial-time reduction, and it is therefore not sufficient to prove that the problem is NP-hard.
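Reading that construction as a reduction from the variant in which elements may be reused (subset sum with repetition, i.e. an unbounded knapsack decision problem), a sketch could look like the following; the names and the detail of adding $0$ so that a copy can be skipped are my own:

```python
def reduce_repeated_subset_sum(values, target):
    """Given positive integers `values` and a `target`, build an instance of the
    pick-one-from-each-set problem that is solvable iff some multiset of
    `values` (repetition allowed) sums to `target`."""
    assert all(v > 0 for v in values)
    copies = max(1, target // min(values))   # no solution uses more summands than this
    one_set = sorted(set(values) | {0})      # the 0 lets a copy contribute nothing
    # The instance contains `copies` sets, so its size grows linearly in the
    # target value and therefore exponentially in the target's bit length,
    # which is exactly why this is not a polynomial-time reduction.
    return [one_set] * copies, target
```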