friendship ended with world’s most scuffed reimplementation of openzfs/zfs#15250. now world’s most scuffed reimplementation of openzfs/zfs#7257 is my best friend
two new disks! unlike the WD80EFZZ, the WD80EFPX doesn’t seem to let you set the idle timer (wdidle3, idle3ctl). will need to keep an eye on those load cycle counts
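one way to keep an eye on those counts: the load cycle total shows up as SMART attribute 193 (Load_Cycle_Count) on most WD drives, readable with smartmontools. a minimal sketch, assuming smartctl is installed; /dev/sdX and the sample line here are placeholders, not real output from these disks:

```shell
# in practice, pipe from the real attribute table:
#   smartctl -A /dev/sdX | awk '$2 == "Load_Cycle_Count" { print $NF }'
# here we parse one sample attribute line just to show the extraction
echo "193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1234" |
  awk '$2 == "Load_Cycle_Count" { print $NF }'
```

logging that number daily makes it easy to spot a drive that's parking its heads every few seconds.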
gonna interleave them with the last pair of disks i added, so we don’t end up with two mirror vdevs having both of their disks bought at the same time
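a hypothetical sketch of how that interleave could go, using zpool attach/detach/add; the pool name and device names are placeholders, not the real layout:

```shell
# attach a new disk to the existing old-old mirror, making a 3-way mirror
zpool attach ocean old-disk-a new-disk-1
# once the resilver finishes, pull one of the old disks back out
zpool detach ocean old-disk-b
# pair the freed old disk with the other new disk as a fresh mirror vdev
zpool add ocean mirror old-disk-b new-disk-2
```

end result: each mirror vdev mixes purchase dates, so one bad batch is less likely to take out both sides of a mirror at once.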
started writing my usual “lost dog, responds to ocean” labels by hand, but @ariashark reminded me we have a label printer now, so i redid them (and then redid ocean4x1 again, because it needs to be ocean4x2 now)
installed! define 7 xl now at 14 disks and two ssds :3
[sheldon smith voice] shuppyco had multiple safeguards in place that could have prevented a data loss incident, such as a dry-and-wet-run system and a safer deletion interface that prompts the operator to confirm each snapshot slated for deletion.
the csb found that delan, the operator on shift at the time, systematically disabled each of those safeguards, citing the pedestrian and familiar nature of the task at hand. it said, “priming my incremental backups is simple, i’ve done this countless times!”
unfortunately, this time it was not simple.
the csb concludes that shuppyco should (a) make the consequences of disabling key data loss safeguards impossible for operators to miss, and (b) design and implement a process safety management system.
just set up daily automated zfs backups for three of my machines, including discord logging and ping on failure :D
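the failure-ping part can be sketched roughly like this; WEBHOOK_URL, the dataset names, backuphost, and the snapshot names are all placeholders for illustration, not the actual setup:

```shell
#!/bin/sh
# hypothetical sketch: incremental zfs backup with a discord ping on failure

# build a discord webhook payload; "<@USER_ID>" pings a user by id
make_payload() {
  printf '{"content": "%s"}' "$1"
}

run_backup() {
  # send today's snapshot incrementally on top of yesterday's
  zfs send -I tank/data@yesterday tank/data@today |
    ssh backuphost zfs recv backup/data
}

if ! run_backup; then
  curl -sS -H 'Content-Type: application/json' \
    -d "$(make_payload '<@USER_ID> zfs backup of tank/data failed')" \
    "$WEBHOOK_URL"
fi
```

discord webhooks take a plain json POST with a "content" field, so no bot account is needed for the logging side.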
(sauce)
the trick is adding a “special” vdev 🚄⚡
$ time du -sh /ocean/private
2.5T /ocean/private
11:16.33 total
(send, add special, recv, remove l2arc, reboot)
$ time du -sh /ocean/private
2.5T /ocean/private
1:17.16 total
https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954
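for reference, a rough sketch of what adding a special vdev looks like; pool and device names are placeholders. metadata only lands on the special vdev for newly written data, which is why the send/recv rewrite was part of the steps above:

```shell
# mirror the special vdev to match the pool's redundancy: losing the
# special vdev loses the whole pool
zpool add ocean special mirror nvme0n1 nvme1n1

# optionally also route small file blocks (not just metadata) to the
# special vdev; metadata alone is what speeds up du/find-style walks
zfs set special_small_blocks=16K ocean/private
```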