Seems I'm spending my evening rebuilding a linux md raid superblock using dd and hope...
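A rough sketch of what that kind of evening can involve, assuming a two-disk RAID 1 where one member's metadata survived; the device names and parameters here are hypothetical, not the ones from the actual array:

    # Inspect the surviving member to record the original parameters
    mdadm --examine /dev/sdc1        # note metadata version, data offset, device role

    # Back up the start of the damaged member before writing anything to it
    dd if=/dev/sdb1 of=/root/sdb1-head.bak bs=1M count=4

    # Last resort: recreate the array metadata in place without triggering a resync,
    # using the exact original parameters, then verify the contents read-only
    mdadm --create /dev/md0 --assume-clean --level=1 --raid-devices=2 \
          --metadata=1.2 /dev/sdb1 /dev/sdc1
    mount -o ro /dev/md0 /mnt/recovery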

@quixoticgeek Good luck!

Is now a bad time to tell you about our lord and savior ZFS?

@quixoticgeek That has not been my experience across a few hundred terabytes of data, on Linux/OpenSolaris/macOS. What problems have you run into?

@pooserville When ZFS fails, it's incredibly hard to recover any data. When a Linux software RAID 1 fails, you can just ignore the RAID, mount the filesystem as is, and recover all your data. It fails open.
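A sketch of that fail-open recovery path, assuming a single surviving RAID 1 member; device names are hypothetical, and the right offset depends on the metadata version mdadm --examine reports:

    # With 0.90 or 1.0 metadata the superblock sits at the END of the member,
    # so the filesystem starts at sector 0 and mounts directly:
    mdadm --examine /dev/sdb1            # confirm metadata version and data offset
    mount -o ro /dev/sdb1 /mnt/recovery

    # With 1.1/1.2 metadata the data starts after the superblock (offset shown
    # by --examine, commonly 2048 sectors), so map past it with a loop device:
    LOOP=$(losetup --find --show --read-only --offset $((2048 * 512)) /dev/sdb1)
    mount -o ro "$LOOP" /mnt/recovery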

@quixoticgeek ZFS mirrors fail open the same way as any RAID 1. (I used to make three-disk mirrors, then just yoink one disk for offsite storage when a remote online backup wasn’t an option. Read fine.) ZFS RAIDZ fails the same way the equivalent RAID 5/6 does, but with a lot more flexibility in resilvering, assuming the number of failed disks doesn't exceed the number of redundant disks (spares can be different sizes, etc.)
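A sketch of the supported way to do that disk-yoinking cleanly, assuming a three-way mirror pool called tank; pool and device names are hypothetical:

    zpool status tank                     # confirm the mirror is healthy first
    # Split one device off into its own single-disk pool for offsite storage
    zpool split tank tank-offsite sdc
    # The split-off disk can later be imported on any machine, read-only if preferred
    zpool import -o readonly=on tank-offsite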

@pooserville Still a lot of extra complexity over simple mdadm. Also, isn't ZFS having all sorts of total data loss bugs of late? mdadm is mature...

theregister.com/2023/12/04/two

@quixoticgeek I guess it’s all relative; I was using ZFS before I ever messed with mdadm so to me ZFS seems intuitive and mdadm seems complex. 🤷‍♂️
As for that bug, that’s an interesting one. Looks like it was extremely rare until recent coreutils changes, which is probably why I’ve never encountered it. (And I’ve been out of the sysadmin side of things since 2020 in any case.)

@pooserville I've been using mdadm for two decades. Hundreds if not thousands of arrays. All sorts of sizes. And in some cases, nested arrays. I know its quirks more than is perhaps sane.
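For reference, a nested setup of the kind mentioned here might look like this sketch (hypothetical device names): two RAID 1 pairs striped together into a RAID 1+0.

    # Two RAID 1 pairs...
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
    # ...striped together into a nested RAID 1+0
    mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2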
