I don't really agree that this is a huge concern for most workloads, given the caching, readahead policies and disk elevators available, but it is possible, with caveats.
The best way to tackle this would be to physically partition the media into 'chunks' corresponding to the regions of the disk you want to separate (on a spinning disk, the lower-numbered sectors sit on the faster outer tracks).
Something like this:
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1075MB  1074MB  primary  ext4         boot
 2      1075MB  4TB     4TB     primary               lvm    # My fast partition
 3      4TB     8TB     4TB     primary               lvm    # My slow partition
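For reference, commands along these lines would produce that layout (assuming the disk is /dev/sda; note that partitions larger than 2 TiB need a GPT label in practice, in which case mkpart takes a partition name instead of 'primary'):
# parted /dev/sda mkpart primary ext4 1049kB 1075MB
# parted /dev/sda mkpart primary 1075MB 4TB
# parted /dev/sda mkpart primary 4TB 8TB
# parted /dev/sda set 2 lvm on
# parted /dev/sda set 3 lvm on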
Then create the volume group(s). In this example I'm using a single volume group, but it may be easier to have a 'slow' VG and a 'fast' VG instead (see the sketch after the lvcreate step below).
# pvcreate /dev/sda2
# pvcreate /dev/sda3
# vgcreate vg /dev/sda2 /dev/sda3
Then allocate your LVs out of the physical volumes in question:
# lvcreate -n myFastLV -L 1T vg /dev/sda2
# lvcreate -n mySlowLV -L 1T vg /dev/sda3
Naming the PV at the end of the command pins that LV's extents to that physical volume.
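With the separate-VG layout mentioned earlier it would look something like this instead (the VG names are just illustrative):
# vgcreate vg_fast /dev/sda2
# vgcreate vg_slow /dev/sda3
# lvcreate -n myFastLV -L 1T vg_fast
# lvcreate -n mySlowLV -L 1T vg_slow
Here each LV can only ever allocate from its own tier, so you don't have to remember to name the PV on every lvcreate.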
Caveats here: bad sectors can be silently remapped by the disk controller to a 'reserve' area often located elsewhere (this behaviour happens regardless of manufacturer). Also, some fancier disks may internally remap sectors in a way that is logically consistent with what the drive reports, but physically not in the place you expected them to be.
Finally, the workload you're describing (pipelining huge files) is really a very sequential workload, and would see bigger gains from preallocating the files being written (to keep them contiguous and avoid fragmentation).
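For example, fallocate(1) asks the filesystem to reserve the space up front, which encourages large contiguous extents (the path and size below are just placeholders):
# fallocate -l 100G /mnt/fast/huge-output.dat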
Then set aggressive readahead policies to pull in whole swathes of adjacent/upcoming sectors, which are most likely contiguous with the file you're reading.
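With blockdev, for instance, the readahead value is set per block device in 512-byte sectors (16384 here is roughly 8 MiB, an arbitrary figure to tune to taste):
# blockdev --setra 16384 /dev/vg/myFastLV
# blockdev --getra /dev/vg/myFastLV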
A finer-grained approach could also be achieved using dmsetup to map the physical sectors into whatever order and fashion you wanted, but this wouldn't be very portable and is probably more effort than it's worth in the long run (you'd need a script to rebuild the mapping on boot, for example).
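For illustration, a 'linear' device-mapper table maps a sector range of a backing device into a new block device; the fields are logical start, length, target type, backing device and offset, all counted in 512-byte sectors (the name and sector counts here are purely illustrative):
# echo "0 2147483648 linear /dev/sda2 0" | dmsetup create fast0    # first 1 TiB of /dev/sda2
This creates /dev/mapper/fast0, but the table only lives in the kernel, hence the need for a script to recreate it on every boot.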