Score:2

Landing in systemd emergency prompt after lengthy boot repair process. Problem: missing filesystem type in fstab file


I realize there have been several questions from people who've had issues booting already, but I think mine is a rather particular case, so I'm posting yet another question in hopes of addressing some new issues.

I've been repairing the boot process of a VM that had an initramfs (initrd.img and vmlinuz files in /boot) from kernels that were no longer installed, and was still trying to boot from them.

I am very close to being finished, but it keeps rebooting into systemd's emergency mode, where it says:

You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):

I booted from a live CD, mounted the three pertinent partitions under /mnt, and chrooted into /mnt:

mount /dev/sda3 /mnt            # root filesystem
mount /dev/sda2 /mnt/boot       # /boot partition
mount /dev/sda1 /mnt/boot/efi   # EFI system partition
for i in proc dev dev/pts sys tmp run; do mount --bind /$i /mnt/$i; done   # bind-mount the pseudo-filesystems the chroot needs
chroot /mnt

Did my repairs and rebooted.

Now my fstab is not mounting my partitions. I thought it was correctly configured - the UUIDs were copied directly from blkid | grep /dev/sda - and I didn't think it was missing anything.
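For reference, the relevant blkid output looked something like this (the UUIDs here are made-up placeholders, not my actual values):

blkid | grep /dev/sda
/dev/sda1: UUID="A1B2-C3D4" TYPE="vfat"
/dev/sda2: UUID="1c2d3e4f-5a6b-7c8d-9e0f-112233445566" TYPE="ext4"
/dev/sda3: UUID="99887766-5544-3322-1100-aabbccddeeff" TYPE="ext4"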

Here are the errors I'm seeing right before getting to the emergency mode prompt:

[FAILED] Failed to mount /boot
See 'systemctl status boot.mount' for details.
[DEPEND] Dependency failed for Local File Systems
[DEPEND] Dependency failed for Unattended Upgrades Shutdown
[DEPEND] Dependency failed for /boot/efi

So, of course, I looked at systemctl status boot.mount, but it reports active (green) and loaded, even though my /boot folder is empty unless I manually mount /dev/sda2.

Seems very strange. Why would boot.mount say it's mounting the /boot partition when it clearly isn't?

Organic Marble: I enjoyed reading this, but to fit the format of this site it needs to be a Q&A. Most of this reads more like an answer. It's fine to answer your own question; consider an edit to make this a Q&A.

Artur Meinild: Hello. You might have more luck if you could be more specific about what exact problem you need an answer to. There is a lot of text, and it's hard for me to see which parts are relevant to your specific problem.

AveryFreeman: Hi, yeah, that was kind of the point. Not at first, but then I fixed the problem while I was asking the question. I will try to tighten it up a bit.

Organic Marble: Close vote retracted.
Score:2

So I actually figured out the issue while I was writing the question. As you can see from what I wrote in the beginning, it was a very long process (I had been working on it for about 2 days before I got to the point of wanting to ask for help).

If you look at the very end of the Q, I had received this message from systemd during the boot process:

[FAILED] Failed to mount /boot
See 'systemctl status boot.mount' for details.

So, of course, I ran systemctl status boot.mount to see what it said, but it reported that boot.mount was active (green), loaded, and functioning properly, even though /boot was empty unless I manually mounted /dev/sda2 (exactly the opposite of what I would expect).

So I started thinking something might be wrong with the service. I disabled boot.mount even though it said it was working properly:

systemctl disable --now boot.mount

I tried to re-enable it, but got an error:

systemctl enable --now boot.mount
Failed to enable unit: Unit /run/systemd/generator/boot.mount is transient or generated

OK, that makes sense: the unit is generated from /etc/fstab during boot (by systemd-fstab-generator, hence the /run/systemd/generator path) and cannot be enabled through systemctl like a regular unit.
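As a quick sanity check on where a unit like this comes from, systemctl cat prints the unit file together with the path it was loaded from; for a generated unit, that path points into /run/systemd/generator:

systemctl cat boot.mount

So I tried to re-mount all devices with: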

mount -a

And saw that there was an error in the /etc/fstab file:

error: rw,relatime is not a valid file system

(or something to that effect).

The key here is that if I hadn't tried mounting the filesystem manually, I would never have received that feedback. The error message you get from mount -a when fstab contains improper syntax is incredibly helpful - a lot more helpful than:

[FAILED] Failed to mount /boot
See 'systemctl status boot.mount' for details.

... and then seeing a "working" systemd unit for boot.mount when /boot is not mounting (even though it did get me to the right place eventually).

So I edited the fstab and entered the filesystem type for the /boot partition that had failed to mount, then re-ran mount -a (which essentially does the same thing as boot.mount) and got a positive response.
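For illustration, here is the shape of the mistake - a minimal sketch using the same placeholder UUID as above, assuming an ext4 /boot:

# broken: the filesystem type field is missing, so mount parses the
# options field ("rw,relatime") as the type
UUID=1c2d3e4f-5a6b-7c8d-9e0f-112233445566 /boot rw,relatime 0 2

# fixed: all six fields present (device, mount point, type, options, dump, pass)
UUID=1c2d3e4f-5a6b-7c8d-9e0f-112233445566 /boot ext4 rw,relatime 0 2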

Now the two partitions are mounting properly after a reboot, and all is good in the land of horseradish and marmalade.

If this does not address your issue, here are some additional notes on the process I went through before getting to the point above where I was looking for help (feel free to stop reading once you get to your problem):

The original issue I was having two days ago was the system trying to boot from kernels that were no longer installed. So, after booting with the live CD, I deleted the contents of /boot (where all the initrd files are located).

I figured I would just re-create the initramfs with update-initramfs -c -k all from the kernels I currently had installed, but then I learned that I could not re-create the config or System.map files with depmod alone. This turned out to be more troublesome than I had bargained for.

I found the easiest way to re-generate or acquire all these files is to:

  1. delete all contents of /boot,
  2. uninstall any linux-image, linux-headers and linux-modules packages I had no intention of using,
  3. delete all residual directories in /usr/lib/modules, and then
  4. re-install the linux-image, linux-modules and linux-headers packages I intended to use (the two most current generic versions)

Note: Re-installing these three types of packages all at the same time was how I managed to get the /boot/System.map and /boot/config files back - re-installing only the linux-image packages did not do it. It's possible they're included with the modules packages (modules would make sense), or the headers packages, but this is what worked for me; see the sketch below.
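For reference, a minimal sketch of steps 1-4 on a Debian/Ubuntu system - every version number below is a hypothetical example, so substitute the kernels you actually want to drop and keep:

# inside the chroot; 5.15.0-89 = unwanted kernel, 5.15.0-94 = kernel to keep (both hypothetical)
rm -rf /boot/*                                                             # 1. clear out /boot
apt purge linux-image-5.15.0-89-generic linux-headers-5.15.0-89-generic linux-modules-5.15.0-89-generic   # 2. drop the unwanted kernel
rm -rf /usr/lib/modules/5.15.0-89-generic                                  # 3. remove residual module directories
apt install --reinstall linux-image-5.15.0-94-generic linux-headers-5.15.0-94-generic linux-modules-5.15.0-94-generic   # 4. re-install the kernels to keep
update-initramfs -c -k all                                                 # rebuild the initramfs for the installed kernels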

  5. Then I ran update-grub after re-installing those packages and confirming /boot was populated correctly.
  6. I also ran bootctl install and /etc/kernel/postinst.d/zz-update-systemd-boot, so I would have systemd-boot installed as a fallback.

At one point after a reboot, I had to re-configure the default systemd target to multi-user.target instead of graphical.target, probably due to having chrooted with all those mounts from a graphical live CD to run the boot-repair program a couple of days ago, which requires graphics (and I believe /dev/pts, /tmp and /run were required to get display :0.0 to work):

systemctl set-default multi-user.target

Ok that's about it. Hope this helps someone.


