I'm working on a Postgres upgrade with pg_upgrade, and the bulk of the process is copying the database's data files [unmodified] from the old cluster directory to the new one. To avoid bloating the data volume I've attached a second EBS volume to the instance, and to get the upgrade done quickly I've set the throughput to its maximum [1000 MiB/s] and left the IOPS at the default [4000] for both volumes, then waited for each volume to report that its "optimization" had completed.
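For reference, the volume settings described above would correspond to something like the following CLI call (a sketch only; `vol-0abc1234567890def` is a placeholder ID, and I may have applied the same change through the console instead):

```shell
# Raise a gp3 volume to max throughput while keeping IOPS as configured.
# --throughput is in MiB/s; the volume enters an "optimizing" state until
# the new performance settings are fully in effect.
aws ec2 modify-volume \
    --volume-id vol-0abc1234567890def \
    --throughput 1000 \
    --iops 4000
```

The same `modify-volume` call can be repeated for the second volume's ID.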
However, during the upgrade process I've noticed that neither the throughput nor the IOPS comes anywhere close to the configured limits, even though the operation is copying large, contiguous files. Below is a snapshot of the monitoring for the volumes showing two separate runs of the process.
The OS is Rocky Linux 8.5, the instance is a freshly created m5a.2xlarge using the AMI built by Rocky, and the volumes are formatted as ext4. The instance's CPU usage over the same period is shown below, though the OS stats showed a fair amount of iowait.
Is there a parameter I should tweak on these volumes or the instance, or some OS configuration I'm missing? Or is this just a symptom of the EBS backing store being too busy to actually service my requests?
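To rule out pg_upgrade itself, I imagine a crude sequential-write check along these lines would show whether the volume can reach the provisioned throughput at all (a sketch; it assumes the current directory is on the volume under test, e.g. the new cluster's mount point):

```shell
# Crude sequential-write check. conv=fsync forces a flush at the end so
# the reported rate reflects the device rather than the page cache.
# 1 GiB of 1 MiB blocks is enough to see whether the rate gets anywhere
# near the provisioned 1000 MiB/s.
dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fsync status=progress
rm -f ./ddtest
```

If dd alone also stalls well below the limit, that would point at the volume or instance rather than the copy workload.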