AWS: Can't mount my restored volume (EBS snapshot)

I restored an EBS volume from AWS Backup and attached it to a new EC2 instance. When I run lsblk, I can see it as /dev/nvme1n1.

More specifically, the output of lsblk is:

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0         7:0    0   25M  1 loop /snap/amazon-ssm-agent/4046
loop1         7:1    0 55.4M  1 loop /snap/core18/2128
loop2         7:2    0 61.9M  1 loop /snap/core20/1169
loop3         7:3    0 67.3M  1 loop /snap/lxd/21545
loop4         7:4    0 32.5M  1 loop /snap/snapd/13640
loop5         7:5    0 55.5M  1 loop /snap/core18/2246
loop6         7:6    0 67.2M  1 loop /snap/lxd/21835
nvme0n1     259:0    0    8G  0 disk 
└─nvme0n1p1 259:1    0    8G  0 part /
nvme1n1     259:2    0  100G  0 disk 
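
Since lsblk shows no child partitions under nvme1n1, I assume any filesystem would have to live directly on the raw device. If I understand correctly, blkid or lsblk -f should report such a signature if one is detectable, so this is another check I could run (a sketch, not yet verified on this volume):

# Look for a filesystem signature directly on the raw device (no partitions expected)
sudo blkid /dev/nvme1n1
lsblk -f /dev/nvme1n1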

The output of parted is:

# parted -l /dev/nvme1n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme0n1: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  8590MB  8589MB  primary  ext4         boot


Error: /dev/nvme1n1: unrecognised disk label
Model: Amazon Elastic Block Store (nvme)                                  
Disk /dev/nvme1n1: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags: 
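
(Side note: as far as I know, parted -l prints the layout of every disk and ignores the device argument, which is why nvme0n1 appears above as well. If I understand correctly, querying only the restored volume would look like this and should give the same "unrecognised disk label" result:)

# Query only the restored volume; parted -l above ignored the device argument
sudo parted /dev/nvme1n1 print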

As you can see, nvme1n1 has no partitions and no recognised partition table, although AWS states that:

Volumes that were created from snapshots likely have a file system on them already;

As a result, when I try to mount it on a folder with:

sudo mkdir mount_point
sudo mount /dev/nvme1n1 mount_point/

I get

mount: /home/ubuntu/mount_point: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
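
I assume that forcing the filesystem type and mounting read-only would at least produce a more specific error (the syslog further down mentions EXT4-fs on nvme1n1, so I'm assuming ext4). Something like this is what I have in mind, though I haven't confirmed it behaves any differently:

# Force ext4 and mount read-only; as far as I know 'noload' skips journal replay, so nothing is written
sudo mount -t ext4 -o ro,noload /dev/nvme1n1 mount_point/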

The volume has data inside:

/dev/nvme1n1: data
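
In case it is useful, I could also try to dump the ext4 superblock header, which as far as I understand is a read-only operation (assuming there is a superblock to read at all):

# Print only the superblock / block-group summary; does not modify the device
sudo dumpe2fs -h /dev/nvme1n1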

Using sudo mkfs -t xfs /dev/nvme1n1 to create a filesystem is not an option, as Amazon states:

Warning Do not use this command if you're mounting a volume that already has data on it (for example, a volume that was created from a snapshot). Otherwise, you'll format the volume and delete the existing data.

Indeed, I tried it on a second dummy EBS snapshot that I had restored, and all I got was an empty Linux lost+found folder.

This restored EBS snapshot has useful data on it; how can I mount the volume without destroying that data?
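
As far as I understand, a read-only filesystem check would also be safe, so this is what I'm considering before anything more invasive (please correct me if even this is risky):

# Read-only check: -n opens the filesystem read-only and answers "no" to every prompt
sudo e2fsck -n /dev/nvme1n1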

In case it helps:

sudo dmesg|tail && tail -40 /var/log/syslog
[259510.079807] squashfs: Unknown parameter 'nouuid'
[259510.081864] fuseblk: Unknown parameter 'nouuid'
[259530.034094] squashfs: Unknown parameter 'nouuid'
[259530.036337] fuseblk: Unknown parameter 'nouuid'
[259588.954190] squashfs: Unknown parameter 'nouuid'
[259588.956302] fuseblk: Unknown parameter 'nouuid'
[259618.283956] squashfs: Unknown parameter 'nouuid'
[259618.286027] fuseblk: Unknown parameter 'nouuid'
[259790.237677] squashfs: Unknown parameter 'nouuid'
[259790.239915] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:11:22 ip-10-71-0-16 dbus-daemon[467]: [system] Successfully activated service 'org.freedesktop.PackageKit'
Nov  8 12:11:22 ip-10-71-0-16 systemd[1]: Started PackageKit Daemon.
Nov  8 12:16:28 ip-10-71-0-16 PackageKit: daemon quit
Nov  8 12:16:28 ip-10-71-0-16 systemd[1]: packagekit.service: Succeeded.
Nov  8 12:17:01 ip-10-71-0-16 CRON[25118]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov  8 12:18:07 ip-10-71-0-16 multipath: nvme1n1: failed to get udev uid: Invalid argument
Nov  8 12:18:07 ip-10-71-0-16 multipath: nvme1n1: uid = nvme.1d0f-766f6c3066643562613336373562646331303430-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 (sysfs)
Nov  8 12:18:21 ip-10-71-0-16 kernel: [259240.333109] EXT4-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.897564] pci 0000:00:1f.0: [1d0f:8061] type 00 class 0x010802
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.897692] pci 0000:00:1f.0: reg 0x10: [mem 0x00000000-0x00003fff]
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.898762] pci 0000:00:1f.0: BAR 0: assigned [mem 0x80000000-0x80003fff]
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.898899] nvme nvme2: pci function 0000:00:1f.0
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.898931] nvme 0000:00:1f.0: enabling device (0000 -> 0002)
Nov  8 12:21:22 ip-10-71-0-16 kernel: [259420.911982] nvme nvme2: 2/0/0 default/read/poll queues
Nov  8 12:21:22 ip-10-71-0-16 multipath: nvme2n1: failed to get udev uid: Invalid argument
Nov  8 12:21:22 ip-10-71-0-16 multipath: nvme2n1: uid = nvme.1d0f-766f6c3063336165616261376163396164323232-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 (sysfs)
Nov  8 12:21:25 ip-10-71-0-16 kernel: [259424.348064] EXT4-fs error (device nvme1n1): __ext4_find_entry:1524: inode #2: comm lsblk: reading directory lblock 0
Nov  8 12:21:57 ip-10-71-0-16 kernel: [259456.068352] Aborting journal on device nvme1n1-8.
Nov  8 12:21:57 ip-10-71-0-16 kernel: [259456.070935] Buffer I/O error on dev nvme1n1, logical block 13139968, lost sync page write
Nov  8 12:21:57 ip-10-71-0-16 kernel: [259456.080345] JBD2: Error -5 detected when updating journal superblock for nvme1n1-8.
Nov  8 12:22:15 ip-10-71-0-16 kernel: [259474.291166] EXT4-fs warning (device nvme1n1): htree_dirblock_to_tree:993: inode #2: lblock 0: comm rm: error -5 reading directory block
Nov  8 12:22:15 ip-10-71-0-16 kernel: [259474.291181] EXT4-fs warning (device nvme1n1): htree_dirblock_to_tree:993: inode #2: lblock 0: comm rm: error -5 reading directory block
Nov  8 12:22:19 ip-10-71-0-16 kernel: [259478.174816] EXT4-fs warning (device nvme1n1): htree_dirblock_to_tree:993: inode #2: lblock 0: comm ls: error -5 reading directory block
Nov  8 12:22:22 ip-10-71-0-16 kernel: [259481.580825] EXT4-fs warning (device nvme1n1): htree_dirblock_to_tree:993: inode #2: lblock 0: comm ls: error -5 reading directory block
Nov  8 12:22:49 ip-10-71-0-16 kernel: [259508.057079] squashfs: Unknown parameter 'nouuid'
Nov  8 12:22:49 ip-10-71-0-16 kernel: [259508.059713] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:22:51 ip-10-71-0-16 kernel: [259510.079807] squashfs: Unknown parameter 'nouuid'
Nov  8 12:22:51 ip-10-71-0-16 kernel: [259510.081864] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:23:11 ip-10-71-0-16 kernel: [259530.034094] squashfs: Unknown parameter 'nouuid'
Nov  8 12:23:11 ip-10-71-0-16 kernel: [259530.036337] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:24:10 ip-10-71-0-16 kernel: [259588.954190] squashfs: Unknown parameter 'nouuid'
Nov  8 12:24:10 ip-10-71-0-16 kernel: [259588.956302] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:24:39 ip-10-71-0-16 kernel: [259618.283956] squashfs: Unknown parameter 'nouuid'
Nov  8 12:24:39 ip-10-71-0-16 kernel: [259618.286027] fuseblk: Unknown parameter 'nouuid'
Nov  8 12:25:25 ip-10-71-0-16 multipath: nvme2n1: failed to get udev uid: Invalid argument
Nov  8 12:25:25 ip-10-71-0-16 multipath: nvme2n1: uid = nvme.1d0f-766f6c3063336165616261376163396164323232-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 (sysfs)
Nov  8 12:26:01 ip-10-71-0-16 multipath: nvme2n1: failed to get udev uid: Invalid argument
Nov  8 12:26:01 ip-10-71-0-16 multipath: nvme2n1: uid = nvme.1d0f-766f6c3063336165616261376163396164323232-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 (sysfs)
Nov  8 12:27:31 ip-10-71-0-16 kernel: [259790.237677] squashfs: Unknown parameter 'nouuid'
Nov  8 12:27:31 ip-10-71-0-16 kernel: [259790.239915] fuseblk: Unknown parameter 'nouuid'
Possible duplicate of https://serverfault.com/questions/948408/mount-wrong-fs-type-bad-option-bad-superblock-on-dev-xvdf1-missing-codepage