Quantify what acceptable performance would be. Perhaps downloading all of a small project should take no more than one or two seconds. Performance objectives defined in terms of user experience make for well-defined goals.
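For example, you could time a full read of one representative project and compare against that target. A minimal Python sketch, assuming a made-up project path and a two-second objective (measure with a cold cache, or repeat runs will be unrealistically fast thanks to the page cache):

```python
#!/usr/bin/env python3
"""Time a full read of one project tree, as a stand-in for downloading the project."""
import pathlib
import time

PROJECT = pathlib.Path("/srv/projects/example-small-project")  # hypothetical path
TARGET_SECONDS = 2.0                                           # example objective

start = time.monotonic()
total_bytes = 0
file_count = 0
for path in PROJECT.rglob("*"):
    if path.is_file():
        total_bytes += len(path.read_bytes())  # forces real data and metadata I/O
        file_count += 1
elapsed = time.monotonic() - start

print(f"{file_count} files, {total_bytes / 1e6:.1f} MB in {elapsed:.2f} s")
print("objective met" if elapsed <= TARGET_SECONDS else "objective missed")
```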
Review how the files are being stored. Tens of thousands of small files is close to the worst case: lots of I/Os for both data and metadata. Databases or archives would be better, packaging things up into larger bundles that need far fewer I/Os. In other words, version control systems and tar archives, especially when dealing with code over time.
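As a rough illustration of the bundling idea, here is a minimal Python sketch that packs a project tree into one compressed tar archive, turning tens of thousands of small reads into one mostly sequential stream; the paths are hypothetical:

```python
#!/usr/bin/env python3
"""Bundle a project of many small files into a single tar archive."""
import pathlib
import tarfile

PROJECT = pathlib.Path("/srv/projects/example-small-project")         # hypothetical source tree
ARCHIVE = pathlib.Path("/srv/archives/example-small-project.tar.gz")  # hypothetical destination

ARCHIVE.parent.mkdir(parents=True, exist_ok=True)
with tarfile.open(ARCHIVE, "w:gz") as tar:
    # One member per file, but written and later read back as a single large stream.
    tar.add(PROJECT, arcname=PROJECT.name)

small_files = sum(1 for p in PROJECT.rglob("*") if p.is_file())
print(f"packed {small_files} files into {ARCHIVE} "
      f"({ARCHIVE.stat().st_size / 1e6:.1f} MB)")
```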
This being Linux, developers love to reinvent the wheel, so there are many block cache implementations; the best maintained are probably lvmcache and bcache. At least both of those are in the mainline kernel, which results in comparison tests like this. Although it looks like RHEL is not ready to support bcache.
It is not possible to make a hybrid block device as fast or as easy to use as an all-flash setup. There will be cache misses. There will be failures of the cache device, at which point you had better know whether it is in writethrough or writeback mode, and whether recovery involves data loss. Those are the trade-offs for less expensive storage overall.
These being block devices, they sit a level below the file system and are unaware of small files. However, depending on how deep you want to get into tuning, they may be able to detect sequential block I/O, which may be an acceptable proxy for file size, depending on how fragmented the files are.
A distro with good storage documentation will cover lvmcache. Here are lvmcache examples from RHEL 9. You would want type cache; caching only writes via writecache will not be a sufficient boost.
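As a sketch of what the conversion looks like, here is a small Python wrapper around the lvcreate/lvconvert steps from the lvmcache(7) cachevol procedure; the volume group, LV, device, and size names are placeholders for your own layout, and it must run as root because it modifies the volume group:

```python
#!/usr/bin/env python3
"""Attach a dm-cache (type cache, not writecache) to an existing logical volume."""
import subprocess

VG = "vg_data"            # hypothetical volume group
SLOW_LV = "projects"      # hypothetical origin LV sitting on HDDs
FAST_PV = "/dev/nvme0n1"  # hypothetical SSD physical volume already added to the VG
CACHE_SIZE = "100G"       # hypothetical cache size

def run(args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Create the fast LV that will hold the cache, placed on the SSD PV.
run(["lvcreate", "-n", "fastcache", "-L", CACHE_SIZE, VG, FAST_PV])
# Attach it as a read/write cache (dm-cache) to the slow origin LV.
run(["lvconvert", "--type", "cache", "--cachevol", "fastcache", f"{VG}/{SLOW_LV}"])
```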
Beware: the underlying dm-cache tunables mention "sequential_threshold", but it no longer has any effect. Modern kernels replaced the default cache replacement policy (mq) with a faster one (smq), but without the knobs.
Block caches on their own do not have a prefetch mechanism, especially not for a targeted subset of files. Again, the block layer does not know about files. Something would need to do I/O against them for the cache to consider those blocks hot. Digging through the Server Fault archives, some people have pre-warmed caches by reading files.
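A pre-warm can be as simple as walking the hot directories and reading every file. A minimal Python sketch with hypothetical project paths; the reads populate the page cache and, by generating block I/O, also feed the block cache's idea of what is hot:

```python
#!/usr/bin/env python3
"""Pre-warm caches by reading a targeted subset of hot projects."""
import pathlib

HOT_PROJECTS = [
    pathlib.Path("/srv/projects/example-small-project"),  # hypothetical hot set
    pathlib.Path("/srv/projects/another-busy-project"),
]

warmed_bytes = 0
for project in HOT_PROJECTS:
    for path in project.rglob("*"):
        if path.is_file():
            # Reading and discarding the contents is enough to generate the I/O.
            warmed_bytes += len(path.read_bytes())

print(f"touched {warmed_bytes / 1e6:.1f} MB of hot data")
```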
Note that RAM is still faster than solid state, and Linux is always maintaining a file cache. More RAM would increase the working set of this cache, although note that it would still be slow at first, until the hit ratio improved. I recommend investing in all-flash storage before throwing excess RAM at this problem, however.