I'm in the process of replacing my M.2 SSD with a new, larger one. I want to save an image of my existing partitions before I take out the old SSD so that I can refer to it when setting up my new install and make sure I don't lose anything. My machine has only one M.2 slot, so I can't keep the old SSD installed alongside the new one; that's what the image files are for.
I thought this would be fairly straightforward, but I'm running into a weird problem. The image creation appears to complete without a problem, and I can loop-mount the image fine. But after poking around the mounted filesystem for a bit, I get all sorts of errors, such as
"ls: reading directory '<directory>': Bad message"
I created the images with a simple dd command:
dd if=/dev/nvme0n1p5 of=backup.img bs=64K status=progress
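For what it's worth, the raw copy itself can be verified byte-for-byte with checksums before the image is trusted. A minimal sketch, using a scratch file standing in for the real partition (the file names here are illustrative, not my actual setup):

```shell
# scratch input standing in for the real partition (/dev/nvme0n1p5)
dd if=/dev/urandom of=source.bin bs=64K count=16 2>/dev/null
# the same kind of copy as above
dd if=source.bin of=backup.img bs=64K 2>/dev/null
# matching checksums mean dd produced a byte-for-byte identical copy
sha256sum source.bin backup.img
```

If the sums of the real partition and the image on the NAS disagree, the problem is in the copy or the CIFS transport, not in the filesystem itself.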
When they were finished being created, I checked the images without mounting them:
fsck.ext4 backup.img
which then reported that the image was clean.
I then mount the image:
mount backup.img <mnt_pnt>
I then run an 'ls -R', and I get the "Bad message" error mentioned above.
If I then unmount the filesystem and run fsck again, it reports all sorts of inode errors.
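For anyone reproducing this: the check can be forced and kept strictly read-only, so that fsck itself cannot be the thing modifying the image between runs. A sketch on a throwaway image (the file name is illustrative):

```shell
# throwaway ext4 image standing in for backup.img (no root needed)
dd if=/dev/zero of=scratch.img bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F scratch.img
# -f forces a full check even if the fs is marked clean;
# -n answers "no" to every prompt, so fsck opens the image read-only
fsck.ext4 -fn scratch.img
```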
If I mount the actual partition directly instead, there are no errors, and the files that triggered the "Bad message" error on the image read fine.
The base system is Kubuntu 20.04. I made sure to force an fsck on all partitions before beginning the backup. The backups are being placed on a NAS share mounted via CIFS.
The procedure above is being performed from a Kubuntu 21.10 Live USB system.
What am I doing wrong?
EDIT: I've looked at CloneZilla and fsarchiver. They seem to promise a lot, but my issue is that I don't want to -restore- these filesystems after I've imaged them. I just want to be able to MOUNT them on a new, fresh Ubuntu install and extract any files I need. I tried CloneZilla's instructions for doing so (which they themselves describe as a work-around hack) and I couldn't mount the image after decompressing it. fsarchiver I'm not sure about yet.
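Since extracting individual files is the real goal, it may also be worth noting that an ext4 image can be read without mounting it at all, using debugfs from e2fsprogs. A minimal sketch, building a tiny demo image so the whole thing runs without root (all file names here are placeholders; with a real backup you'd start at the dump line):

```shell
# build a tiny throwaway ext4 image to demonstrate (no root needed)
dd if=/dev/zero of=demo.img bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F demo.img
echo "hello from the image" > hostfile.txt
# copy a file into the image (write) and back out (dump), never mounting it
debugfs -w -R "write hostfile.txt inner.txt" demo.img 2>/dev/null
debugfs -R "dump inner.txt extracted.txt" demo.img 2>/dev/null
cat extracted.txt
```

Because debugfs opens the image read-only by default (the -w above is only needed to write the demo file in), it also can't trigger the kind of journal replay or metadata updates that a normal read-write mount can.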