PSA: OpenZFS for Linux simply corrupts your data if you hibernate. Sigh.
@lukedashjr ...an almost 10 year old issue... :/
FWIW I've used btrfs on hibernating laptops without issues in the past. Though not recently, as qubes doesn't play nice with hibernation.
@lukedashjr I've used it on most of my machines for years, both with and without RAID1. Never had any issues. And its checksums have saved me from failing hardware on a few occasions.
@lukedashjr How long ago was that?
Many years ago there were performance issues like that. But they seem to have been fixed.
@pete Maybe a year ago.
@pete I also didn't like the fact that when I found a bug, I had to fix it myself. :p
@lukedashjr What hardware setup? Any unusual usage patterns? Use of snapshots heavily?
All the performance-sensitive stuff I use it for is on SSDs; I use it on backup drives too. But if they had latency spikes I'd never notice.
@pete Daily snapshots for years.
@lukedashjr That might be it then. I don't keep more than a few dozen snapshots.
@pete @lukedashjr I have to say, I’ve never even considered running ZFS on a laptop because of its enormous appetite for RAM.
Love it on my servers though.
i'm using ~1000 concurrent snaps on my zfs installations and never had a problem like that.
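For reference, the retention policy @pete describes (keeping only the most recent few dozen daily snapshots) is just selection logic; this is a minimal sketch, with made-up snapshot names and a hypothetical `keep` count, and the actual deletion (`btrfs subvolume delete <path>`) left out:

```python
from datetime import date, timedelta

def prune_candidates(snapshot_names, keep=30):
    """Given daily snapshot names like 'daily-2024-01-31', return the
    ones falling outside the most recent `keep` entries, oldest first.
    Selection only -- deleting a btrfs snapshot is a separate
    `btrfs subvolume delete` call, not shown here."""
    ordered = sorted(snapshot_names)  # ISO dates sort lexicographically
    return ordered[:-keep] if len(ordered) > keep else []

# Example: 40 daily snapshots, keep the newest 30 -> 10 prune candidates.
snaps = [f"daily-{date(2024, 1, 1) + timedelta(days=i)}" for i in range(40)]
old = prune_candidates(snaps, keep=30)
```

With ~1000 concurrent snaps as above, the same logic applies; only `keep` changes.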
@lukedashjr @pete I've had that too, >30s lag for the top 0.1% of writes. Full CPU load all the while.
I think btrfs variable extents just don't scale as well as zfs' more pragmatic block-based approach when fragmentation becomes an issue under high load.
@JuergenStrobel @lukedashjr IIUC btrfs uses less memory than zfs because of those design differences... Which in turn would mean that some usage patterns could probably defeat those optimizations.
The slowest thing I do on a regular basis with btrfs is send/recv snapshots of my Monero node. Monero seems to make its block database very fragmented due to heavy use of sparseness.
@JuergenStrobel @lukedashjr I do use sparse files with btrfs regularly because I use it for Qubes: the VMs' disk images are stored as sparse files. But my overall usage patterns are probably relatively benign as, other than write-once media, all my data easily fits into cache (my desktop has 64GiB, and my laptop, 32GiB).
That actually helps with fragmentation these days due to delayed allocation: where data is written to disk is chosen just prior to flushing.
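The sparse-file behaviour being discussed is easy to observe from userspace; a minimal sketch (filesystem-agnostic, though exact allocated sizes depend on the filesystem; the 16 MiB figure is arbitrary):

```python
import os
import tempfile

# Create a file with a 16 MiB hole followed by one real byte. On
# filesystems with sparse-file support (btrfs, ext4, ...), the hole
# occupies no data blocks, so allocated size << apparent size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(16 * 1024 * 1024)  # skip 16 MiB without writing anything
    f.write(b"x")
    path = f.name

st = os.stat(path)
apparent = st.st_size           # logical length: 16 MiB + 1
allocated = st.st_blocks * 512  # bytes actually backed by storage
os.unlink(path)
```

Writes into the middle of such holes later are what drive the fragmentation mentioned above.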
@pete @lukedashjr I've had good experiences with btrfs for personal use. Not so much using it as a write-intensive Postgres (backup) store; it just can't handle the fragmentation well. (Yes, I know about tuning and the defrag tool.)
@pete btrfs got to the point where my IRC connections were timing out because the IRC client was blocking on writing log files...