This is routinely done for the largest factorization records of RSA-type moduli (two primes of roughly the same size), albeit using a classical algorithm, the Number Field Sieve (NFS).
Out of its four main steps, the first two (polynomial selection and sieving) are very amenable to being distributed across many different computers, even ones that are not connected to each other.
The last two steps (linear algebra and square root) need to be performed on conventional HPC hardware, such as a cluster of computers with very fast interconnects. However, the majority of the computing power is spent on the sieving step, which parallelizes very well.
Here are two examples of such large computations, taken from the linked Wikipedia article: a 795-bit integer and an 829-bit integer. As you can see, however, these are very far from the moduli sizes used in practice: 2048-bit and 4096-bit integers.
Note that NFS has superpolynomial (though subexponential) computational complexity, so the moduli sizes used in practice are completely out of reach of NFS, even with very large budgets. A 1024-bit factorization would make for a seminal paper, given that this size was used in the real world not too long ago, yet nobody has published that paper, simply because it would take too many resources.
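To get a feel for how steeply the cost grows, here is a rough sketch comparing modulus sizes using the standard heuristic asymptotic cost of the general NFS, $L_n[1/3, (64/9)^{1/3}] = \exp\!\big((64/9)^{1/3} (\ln n)^{1/3} (\ln \ln n)^{2/3}\big)$. This ignores constant factors and lower-order terms, so the numbers are only indicative of relative difficulty, not actual operation counts:

```python
import math

def gnfs_cost(bits):
    """Heuristic GNFS asymptotic cost L_n[1/3, (64/9)^(1/3)]
    for an n of the given bit length. Relative values only:
    constants and o(1) terms are ignored."""
    ln_n = bits * math.log(2)          # ln n for an n of `bits` bits
    c = (64 / 9) ** (1 / 3)            # GNFS constant, about 1.923
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# How much harder than the 829-bit record, heuristically?
for bits in (1024, 2048):
    ratio = gnfs_cost(bits) / gnfs_cost(829)
    print(f"{bits}-bit vs 829-bit: roughly {ratio:.1e}x the work")
```

By this crude estimate, 1024-bit is a few hundred times harder than the 829-bit record, while 2048-bit is harder by a factor on the order of $10^{11}$, which is why record-setting stops so far short of practical key sizes.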
And so that’s the lure of quantum computers: Shor’s algorithm can factor in polynomial time on a quantum computer, while the best we can do on a classical computer is superpolynomial time. Very roughly speaking, it’s about as easy to factor an RSA modulus on a quantum computer as it is to generate an RSA key, or encrypt/sign/decrypt/verify using RSA on a classical computer.
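It's worth noting that the quantum speedup in Shor's algorithm comes entirely from one subroutine, order finding; turning the order into a factor is ordinary classical number theory. A minimal sketch of that classical post-processing (the function name is mine, chosen for illustration):

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical step of Shor's algorithm: given the multiplicative
    order r of a modulo N (the part the quantum computer computes),
    try to derive a nontrivial factor of N. This succeeds when r is
    even and a^(r/2) is not congruent to -1 mod N; otherwise the
    algorithm retries with a different random a."""
    if r % 2 != 0:
        return None                    # odd order: pick another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                    # a^(r/2) = -1 mod N: retry
    # N divides (x - 1)(x + 1) but neither factor alone, so a gcd
    # with either factor yields a nontrivial divisor of N.
    for candidate in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

# Toy example: N = 15, a = 7 has order 4 (7^4 = 2401 = 160*15 + 1).
print(factor_from_order(15, 7, 4))  # prints 3, since gcd(7^2 - 1, 15) = 3
```

On a real quantum computer, the order r would come from the quantum order-finding circuit; here it is simply supplied by hand for a toy modulus.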