cross-posted from: https://programming.dev/post/9319044
Hey,
I am planning to implement authenticated boot inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user. I need to check the integrity of all partitions.
I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ trying out timeshift, which was my fault.
Should I use zfs/btrfs for /home/user? As for root, I’m considering luks+(zfs/btrfs) so it can be restored to a blank state.
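Roughly what I have in mind, just as a sketch (the device path, the @/@blank subvolume names and the pam_mount entry are placeholders, not a tested recipe):

```
# Hypothetical sketch: LUKS + btrfs root that can be rolled back to a blank state.
# /dev/sdX2 and the subvolume names are placeholders, adjust for your layout.
cryptsetup luksFormat /dev/sdX2
cryptsetup open /dev/sdX2 cryptroot
mkfs.btrfs /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt

# Install the system into a subvolume instead of the top level...
btrfs subvolume create /mnt/@
# ...then keep a read-only snapshot of the blank (or freshly installed) state
# that root can later be reset to.
btrfs subvolume snapshot -r /mnt/@ /mnt/@blank

# For /home/user with pam_mount, a volume entry in /etc/security/pam_mount.conf.xml
# along these lines mounts the user's LUKS container at login:
#   <volume user="user" fstype="crypt"
#           path="/dev/disk/by-uuid/<uuid-of-home-partition>"
#           mountpoint="/home/user" />
```

The idea being that resetting root is then just deleting @ and snapshotting @blank back into place.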
Btrfs is default on OpenSUSE, has worked great for me for 7 years. No issues.
Same here, but for only 1 year on my main machine and 6 years on my laptop. I looove snapper. It saved my ass so many times
Yes it is great. For me snapper rollback was an awesome onboarding experience to Linux. Being eager to try tweaks and things I read online out of general exploration, it brought me back to a working system after some custom kernel compiling gone awry, or deleting the wrong file, etc.
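In case it helps anyone, the flow is roughly this (the “root” config is the default on openSUSE; the snapshot number is made up):

```
snapper -c root list                              # see existing snapshots
snapper -c root create --description "pre-tweak"  # manual snapshot before experimenting
# ...experiment goes wrong...
snapper rollback 42                               # make snapshot 42 the new default subvolume
reboot                                            # boot back into the working state
```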
Been using Btrfs for a year. I once had an issue where my filesystem went read-only; I went to the Btrfs reddit and, after some troubleshooting, it turned out that my SSD was dying. I couldn’t believe it at first because my SMART report was perfectly clean and the SSD was only 2 years old, then a few hours later SMART began reporting thousands of dead sectors.
The bloody thing was better than SMART at detecting a dying SSD lol.
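For anyone who runs into the same thing, this is roughly where it shows up (device paths are just examples):

```
btrfs device stats /          # per-device read/write/corruption error counters
btrfs scrub start /           # re-read everything and verify checksums
btrfs scrub status /          # summary incl. correctable/uncorrectable errors
smartctl -a /dev/sda          # compare with what SMART claims (smartmontools)
```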
After 4 years on btrfs I haven’t had a single issue, I never think about it really. Granted, I have a very basic setup. Snapper snapshots have saved me a couple of times, that aspect of it is really useful.
I think zfs is a pretty cool guy. Eh copy on write and doesn’t afraid of anything
I haven’t used them professionally but I’ve been using ZFS on my home router (OPNsense) and NAS (TrueNAS with RAID-Z2) for many years without problem. I’ve used Btrfs on laptops and desktops with OpenSUSE Tumbleweed for the past year and a bit, also without problem. Btrfs snapshots have saved me a couple of times when I messed something up. Both seem like solid filesystems for everyday use.
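For reference, a pool like the RAID-Z2 one is created with something along these lines (pool and disk names are placeholders, not my actual setup):

```
zpool create tank raidz2 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4
zpool status tank        # health / resilver / scrub state
zfs create tank/media    # datasets instead of partitions
```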
My experience with btrfs is “oh shit I forgot to set up subvolumes”. Other than that, it just works. No issues whatsoever.
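For the record, the layout I should have set up from the start looks roughly like this (the @ names are just the common convention, and the fstab lines are illustrative):

```
mount /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/@           # will be mounted at /
btrfs subvolume create /mnt/@home       # will be mounted at /home
btrfs subvolume create /mnt/@snapshots
# then mount each subvolume explicitly in /etc/fstab, e.g.:
#   UUID=<fs-uuid>  /      btrfs  subvol=@,compress=zstd      0 0
#   UUID=<fs-uuid>  /home  btrfs  subvol=@home,compress=zstd  0 0
```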
oh shit I forgot to set up subvolumes
lol
I’m also planning on using its subvolume and snapshot features. Since ZFS also supports native encryption, it’ll be easier to manage subvolumes for backups.
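Something along these lines is what I’m picturing (pool and dataset names are placeholders):

```
zfs create -o encryption=on -o keyformat=passphrase rpool/home/user     # natively encrypted dataset
zfs snapshot rpool/home/user@before-upgrade                             # cheap point-in-time snapshot
zfs send -w rpool/home/user@before-upgrade | zfs recv backup/home-user  # raw send keeps it encrypted on the target
```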
Linux does not support ZFS as well as operating systems like FreeBSD or OpenIndiana, but I do use it on my Ubuntu box for my backup array. It is not the best setup: RAID-Z over USB is not at all guaranteed to keep your data safe, but it was the most economical thing I was able to build myself, and it gets the job done well enough with regular scrubbing to give me peace of mind about at least having one other reliable copy of my data. And I can write files to it quickly, and take snapshots of the state of the filesystem if need be.
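Regular scrubbing here just means something like the following (pool name and schedule are examples):

```
zpool scrub backup            # re-read and verify every block against its checksum
zpool status -v backup        # shows repaired/unrecoverable errors when it finishes
# e.g. as a weekly cron entry:
#   0 3 * * 0  /sbin/zpool scrub backup
```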
I used to use Btrfs on my laptop and it worked just fine, but I did have trouble once when I ran out of disk space. A Btrfs filesystem puts itself into read-only mode when that happens, and that makes it tough to delete files to free up space. There is a magic incantation that can restore read-write functionality, but I never learned what it was; I just decided to stop using it because Btrfs is pretty clearly not for home PC use. Freezing the filesystem in read-only mode makes sense in a data-center scenario, but not for a home user who might want to erase data so they can keep using it normally. I might consider using Btrfs in place of ZFS on a file server, though ZFS does seem to provide more features and seems to be somewhat better tested and hardened.
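For what it’s worth, the incantation people usually point to is some combination of the following; I can’t vouch for it on every kernel, and the temporary device path is a placeholder:

```
btrfs filesystem usage /            # check how much space is allocated vs. actually used
mount -o remount,rw /               # try to get write access back
btrfs balance start -dusage=0 /     # reclaim completely empty data chunks, often enough to delete files again
# if even that fails for lack of space, temporarily add a small device, balance, then remove it:
btrfs device add /dev/sdX /
btrfs balance start -dusage=5 /
btrfs device remove /dev/sdX /
```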
There is also bcachefs now as an alternative to Btrfs, but it is still fairly new and not widely supported by default installations. I don’t know how stable it is or how it compares to Btrfs, but I thought I would mention it.
Been using BTRFS on a couple NAS servers for 4+ years. Also did raid1 BTRFS over two USB hard drives connected to a Pi4 (yes this should be absolutely illegal).
The USB raid1 had a couple checksum errors that were easily fixed via scrub last year and the other two servers have been running without any issues. I assume it’s been fine since they’re all connected to a UPS and since I run weekly scrubs.
I enjoyed CoW and snapshots so much that I’ve been using it on my main Arch install’s (I use Arch btw) root drive and storage drives (in BTRFS raid1) for the last 4 months without issue.
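For the curious, the USB raid1 part is basically this (device paths and mountpoint are examples):

```
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb   # data and metadata mirrored across both drives
mount /dev/sda /srv/backup
btrfs scrub start /srv/backup                    # checksum errors get repaired from the good copy
btrfs scrub status /srv/backup
```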

