one small problem. i have no idea what range of txgs i want to scrub
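a rough sketch of where one might start narrowing that down (assuming the pool is called ocean, which is a guess): zpool history -i tags every internal pool operation with the txg it ran in, and zdb -u dumps the current uberblock, which includes the latest txg on the pool

```
# internal pool history, each entry tagged with its txg
zpool history -il ocean | less

# current uberblock, including the latest txg
zdb -u ocean
```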
friendship ended with world’s most scuffed reimplementation of openzfs/zfs#15250. now world’s most scuffed reimplementation of openzfs/zfs#7257 is my best friend
two new disks! unlike the WD80EFZZ, the WD80EFPX doesn’t seem to let you set the idle timer (wdidle3, idle3ctl). will need to keep an eye on those load cycle counts
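(the one-liner i'd probably use to watch them, with a placeholder device path; SMART attribute 193 is the load cycle count)

```
smartctl -A /dev/sda | grep -i load_cycle
```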
gonna interleave them with the last pair of disks i added, so we don’t end up with two mirror vdevs having both of their disks bought at the same time
started writing my usual “lost dog, responds to ocean” labels by hand, but @ariashark reminded me we have a label printer now, so i redid them (and then redid ocean4x1 again, because it needs to be ocean4x2 now)
installed! define 7 xl now at 14 disks and two ssds :3
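for the curious, the interleave goes roughly like this, with made-up disk names, assuming the last pair currently sits in its own mirror vdev:

```
# grow the existing mirror onto one of the new disks, wait for the resilver,
# then pull one of the old disks back out
zpool attach ocean old-disk-a new-disk-a
zpool detach ocean old-disk-b

# the freed old disk pairs up with the other new disk as a fresh mirror vdev
zpool add ocean mirror old-disk-b new-disk-b
```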
I'm pretty sure at least two other people have made this chost but I'm going to make it too.
If you live in the US, you can get skylake generation quad core office PCs for the same price as a 4GB raspberry pi. When you include the cost of a quality power adapter for it, you start getting into kaby lake or even first gen ryzen systems. Most of these come with 8GB of ram minimum, sometimes 16. They can be upgraded. You can get 1L PCs if size matters, or you can get SFF or Mini Desktop systems if you need expansion. They are way, way cheaper than raspberry pis when you start wanting to attach non-USB IO.
They do not have GPIO. They use a lot more electricity. They're not the ideal choice for every situation, and pricing can change a LOT if you live outside of the US. But they are basically scrap on the way to the landfill that can still do a ton of stuff, and do those things for many years to come.
they're all haswell boxes, and they've got more than enough power to host all my shit. if you've ever talked to me on matrix, visited my website, or seen one of my chosts with a funky embed (like this one), you've spoken with them before
not pictured: the way too overkill ryzen build NAS providing storage for them
0 AUD, jane (successor to daria), our opnsense home router, intel 4th gen
71 AUD, tol, our plex server, intel 6th gen (with hardware video encoding!)
100 AUD, smol, our 3d printer server, intel 6th gen
dozens of these things go for <200 AUD every other month at my local auction house. they’re quiet, they’re fast, and they don’t use a ton of power. would recommend.
not pictured: our big nas with 14 drives on the floor
[…] there are literally hundreds of thousands, if not millions of these things that you can get for under a hundred bucks, and in 15 years they will still be stacked to the rafters in ebay seller warehouses. the scale of waste in enterprise computing is literally inconceivable, it is beyond the ability of the human mind to comprehend just how many phenomenally good computers are thrown out every single day.
[sheldon smith voice] shuppyco had multiple safeguards in place that could have prevented a data loss incident, such as a dry-and-wet-run system and a safer deletion interface that prompts the operator to confirm each snapshot slated for deletion.
the csb found that delan, the operator on shift at the time, systematically disabled each of those safeguards, citing the pedestrian and familiar nature of the task at hand. it said, “priming my incremental backups is simple, i’ve done this countless times!”
unfortunately, this time it was not simple.
the csb concludes that shuppyco should (a) make the consequences of disabling key data loss safeguards impossible for operators to miss, (b) design and implement a process safety management system,
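a hedged sketch of what a dry run looks like at the plain zfs level (not necessarily shuppyco's actual tooling; dataset and snapshot names are made up):

```
# -n: dry run, -v: list each snapshot that would be destroyed, -p: parsable sizes
zfs destroy -nvp ocean/important@2023-01%2023-06

# only after reading that list do you run the same command again without -n
```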
18 months ago, i started hosting my websites and other public stuff out of my house. this largely went well, except for the residential power outages thing and the paying 420 aud/month for 1000/400 thing.
1RU with gigabit in a boorloo dc is like 110 aud/month. it’s colo time :)
here is what we’re gonna move the libvirt guests to.
through some wild coincidence, the new server uses the same model mobo as my home server, and its previous life was in the same dc it’s destined for.
dress rehearsal of the migration.
first we take zfs snapshots on the old server (venus), and send them to the new server (colo). then on venus, we suspend the vm (stratus), take snapshots again, send them incrementally to colo, and kick off a live migration in libvirt. finally on colo, we resume stratus, and stratus makes the jump with under 30 seconds of downtime.
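roughly what that looks like from the shell; dataset names, snapshot names, and uris below are illustrative, not the real ones:

```
# on venus, while stratus is still running: full send of its disk dataset
zfs snapshot ocean/vm/stratus@migrate-base
zfs send ocean/vm/stratus@migrate-base | ssh colo zfs recv -F tank/vm/stratus

# pause the guest so its disk stops changing, then send just the delta
virsh suspend stratus
zfs snapshot ocean/vm/stratus@migrate-final
zfs send -i @migrate-base ocean/vm/stratus@migrate-final | ssh colo zfs recv tank/vm/stratus

# hand the (paused) guest over to libvirt on colo, then wake it up there
virsh migrate --live --persistent stratus qemu+ssh://colo/system
ssh colo virsh resume stratus
```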