Score:1

Forcefully forget / unmount ZFS pool after drives disconnected


First off: I made a mistake and I acknowledge that, but now I'm stuck with a "broken" ZFS driver state and want to restore it without rebooting my machine.

How can I tell ZFS on Linux to simply forget that a pool exists (forcefully unmount and "export" it) while all of its drives are disconnected?

While I had a running ZFS pool of two mirrored drives, I cut the power to both of them at the same time without unmounting or exporting the pool beforehand. When I restored the power, the kernel recognized them as new drives (before they were called sdb and sdc, now they are sdd and sde), so ZFS is not able to "restart" the pool on its own.

I do not want to run commands like zpool replace, since those seem to assume that the new drive is blank and can be overwritten, which is not the case here (in fact, they are the same drives, just available under a new name/path). I tried some other commands (see below), but they didn't work either. So I disconnected the drives again, but now I'm stuck with a ZFS driver that refuses to forcefully unmount the pool so that I could simply try to re-import it. Since no writes should have happened during the power failure, I expect a re-import afterwards to work just fine.

What I tried so far while the drives were connected under the new name:

  • zpool offline pool1 sdb: "cannot offline sdb: pool I/O is currently suspended"
  • zpool online pool1 sdd: "cannot online /dev/sdd: pool I/O is currently suspended"

What I tried while the drives were disconnected:

  • zpool clear pool1
  • zpool export -f pool1
  • zpool destroy -f pool1

Just in case it's important: Debian GNU/Linux bookworm, Linux 6.1.0-6-amd64

Score:0

While writing this exhaustive error description, I got the pool back up and running: I re-connected the drives in the correct order, so Linux decided to use their old names again (and ZFS could find them at the same paths), ran zpool clear pool1, and they were up again.

If you connect them in the wrong order (e.g. the old sdc now shows up as sdb), ZFS will report the pool as CORRUPTED after running zpool clear. In that case I was able to simply disconnect that drive and connect the other one (so the correct drive shows up as sdb again), after which ZFS recognized the first drive following another zpool clear. The second drive was then recognized immediately on reconnecting it.
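A minimal sketch of the recovery sequence described above, assuming the pool is named pool1 and the drives come back under their original names. These commands need a real pool with physically reconnected drives, so treat them as illustrative rather than something to run blindly:

```shell
# Reconnect the drives in their original order, so the kernel
# assigns them their previous names (sdb and sdc in my case).
# Verify which names the kernel actually picked:
lsblk -o NAME,SERIAL,SIZE

# Clear the suspended-I/O state; ZFS then re-opens the vdevs
# at their old paths and resumes the pool:
zpool clear pool1

# Confirm that the mirror is ONLINE again:
zpool status pool1
```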

I decided to post this anyway, as my Google-fu didn't bring up any solution that worked for me, and because one question is still open to me: is it possible to "restart" the pool with the drives under different names, or do I have to be lucky and hope the kernel assigns them the same names again? (Please answer in a comment or another answer.)
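On that open question, one hedged sketch: ZFS can import a pool by scanning a directory of stable device links instead of relying on sdX names, so importing via /dev/disk/by-id should survive renames. I have not verified this in the suspended-I/O state described above, and it assumes the pool can be exported first:

```shell
# Export the pool, releasing its recorded /dev/sdX paths:
zpool export pool1

# Re-import, scanning stable by-id links instead of sdX names;
# the pool then records the by-id paths as its vdev paths, so
# future kernel renames of sdX no longer matter:
zpool import -d /dev/disk/by-id pool1
```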
