First of all, please note that if you are making a disk image (or even a partition image), you can't just clone the amount that is used at the filesystem level (i.e. the `Used` column shown by `df`).
Besides, even if by "used" you are referring to the size of the partition(s): if you are not making a full disk image, you should probably make image(s) of the partition(s) instead (and optionally back up the partition table / MBR to a separate file, especially in the case of GPT; there's `sgdisk` for that).
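As a minimal sketch of that last point, assuming GNU/Linux tools and a placeholder device name `/dev/sdX`:

```
# Save the GPT (plus protective MBR) of /dev/sdX to a file
sgdisk --backup=sdX-gpt-backup.bin /dev/sdX

# Later, restore it onto the same (or an identically sized) disk
sgdisk --load-backup=sdX-gpt-backup.bin /dev/sdX

# For a plain MBR disk, the first 512 bytes contain the partition table
dd if=/dev/sdX of=sdX-mbr.bin bs=512 count=1
```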
If you really can't afford to make actual disk/partition image(s) because of a shortage of spare storage, you can consider:
- make "filesystem clone", with tool like partclone or upstream / official approach for certain types of filesystem, such as
btrfs send
.
- `dd` with `conv=sparse`, which could prevent blocks (of the size given by `bs=`, I think) that are completely zero from taking up space in the output (see this for more details).
- shrinking the filesystem(s) as much as you can, so that you can then resize the partition(s) before cloning (also sketched below). Some filesystem types do not support shrinking, though.
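A rough sketch of the first and last options; the device names, mount points and sizes below are placeholders, so adjust them to your setup and double-check before running anything:

```
# Filesystem-level clone of an ext4 partition with partclone
# (only allocated blocks are saved)
partclone.ext4 -c -s /dev/sdX1 -o sdX1.partclone.img

# btrfs: send a read-only snapshot to a file
btrfs subvolume snapshot -r /mnt/data /mnt/data/snap-backup
btrfs send /mnt/data/snap-backup > data-snap.btrfs

# Shrink an ext4 filesystem (must be unmounted) before shrinking the partition
e2fsck -f /dev/sdX1
resize2fs /dev/sdX1 20G
```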
I'm not actually experienced with `conv=sparse`, by the way. Also, how well it works could depend on a few things, such as the type of filesystem the image is written to / stored on, and/or whether the source drive is an SSD that is at least partially RZAT ("read zero after trim"), etc.
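A minimal sketch of what such an invocation might look like with GNU `dd` (device and file names are placeholders):

```
# Copy the whole device, but skip writing blocks that are all zeros,
# so the output can stay sparse on a filesystem that supports sparse files
dd if=/dev/sdX of=sdX.img bs=4M conv=sparse status=progress

# Compare apparent size vs. actual disk usage of the resulting image
ls -lh sdX.img
du -h sdX.img
```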
Finally, just on the subject of `dd`: there's `iflag=count_bytes`, which allows you to use `count=` to specify how many bytes to clone (instead of a number of blocks of the size given by `(i)bs=`).
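For instance, a sketch with GNU `dd` (the byte count and names here are made up for illustration):

```
# Copy exactly 64 GiB from the start of /dev/sdX, independently of bs=
dd if=/dev/sdX of=first-64GiB.img bs=4M count=64G iflag=count_bytes status=progress
```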
`bs=4k` is often good enough / close to the best choice, not because it might match the physical block size of the hard drive, but because it is the typical memory page size. That said, a larger size like 128k or 512k could work even better when reading from certain flash memory storage devices.
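If you want to see what works best on your hardware, one rough way is a quick read benchmark against `/dev/null`, as in this sketch (reading 1 GiB with each block size; `iflag=direct` bypasses the page cache so the timing is less skewed):

```
for bs in 4k 128k 512k 4M; do
  echo "bs=$bs"
  dd if=/dev/sdX of=/dev/null bs="$bs" count=1G iflag=count_bytes,direct
done
```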