What filesystem is currently best for a single NVMe drive with regard to read/write performance as well as stability/no file loss? ext4 seems very old, btrfs is the default on Fedora and openSUSE, ZFS seems to be quite good… what do people tend to use nowadays? What is an Arch user's go-to filesystem?

  • Dubious_Fart@lemmy.ml · ↑23 · 2 years ago

    ext4 being old, yet still the default file system on most distros, should be enough on its own to tell you that being old isn't bad.

    It means it's battle-tested, robust, stable, and safe. Otherwise it wouldn't still be in such widespread use.

    • DaPorkchop_@lemmy.ml · ↑1 · 2 years ago

      I would generally recommend XFS over ext4 for anything where a CoW filesystem isn't needed. In my experience it performs better than ext4 at most workloads, and it still supports some nifty features like reflink copies if you want them.
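      For reference, a reflink copy can be made with GNU cp: on XFS (formatted with reflink support) or btrfs, the clone shares data blocks until one copy is modified, and with `--reflink=auto` it quietly falls back to a plain copy on filesystems without that support (the filenames below are just examples):

```shell
# Clone a file by sharing its extents: instant, and no extra space
# is used until either copy is modified. --reflink=always fails on
# filesystems that can't share blocks.
cp --reflink=always big-image.qcow2 big-image-clone.qcow2

# Safe everywhere: reflink when possible, ordinary copy otherwise.
cp --reflink=auto big-image.qcow2 big-image-copy.qcow2
```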

  • whodoctor11@lemmy.world · ↑14 · 2 years ago

    I use btrfs because I like its features, and I would love to see native encryption on it. I would use ZFS if its license were GPL.

  • jg1i@lemmy.world · ↑8 · 2 years ago

    Most people should use ext4. Only use something else if you want to tinker and don't need long-term data storage.

  • exi@feddit.de · ↑6 · 2 years ago

    If you are planning to have any kind of database with regular random writes, stay away from btrfs. It’s roughly 4-5x slower than zfs and will slowly fragment itself to death.

    I’m migrating a server from btrfs to zfs right now for this very reason. I have multiple large MySQL and SQLite tables on it and they have accumulated >100k file fragments each and have become abysmally slow. There are lots of benchmarks out there that show that zfs does not have this issue and even when both filesystems are clean, database performance is significantly higher on zfs.

    If you don't want a CoW filesystem, then XFS on LVM RAID for databases, or ext4 on LVM for everything else, is probably fine.
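    Fragmentation like that is easy to verify with filefrag from e2fsprogs (the path below is just an example database file):

```shell
# Print how many extents a file occupies; a heavily CoW-fragmented
# database file can show tens of thousands of extents.
filefrag /var/lib/mysql/ibdata1

# -v lists each extent with its offset and length.
filefrag -v /var/lib/mysql/ibdata1
```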

    • Laser@feddit.de · ↑4 · 2 years ago

      Did you disable CoW for your database with btrfs? E.g. for PostgreSQL, the Arch Wiki states:

      If the database resides on a Btrfs file system, you should consider disabling Copy-on-Write for the directory before creating any database.
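      In practice that means setting the No_COW attribute on the still-empty data directory before the database initializes it; a sketch using PostgreSQL's default data path on Arch (adjust the path for your distro):

```shell
# Must be done while the directory is empty: +C only applies to
# files created after the attribute is set.
mkdir -p /var/lib/postgres/data
chattr +C /var/lib/postgres/data

# Verify: lsattr prints a 'C' flag for No_COW.
lsattr -d /var/lib/postgres/data
```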

      • exi@feddit.de · ↑3 · 2 years ago

        From the Arch Wiki:

        Disabling CoW in Btrfs also disables checksums. Btrfs will not be able to detect corrupted nodatacow files. When combined with RAID 1, power outages or other sources of corruption can cause the data to become out of sync.

        No thanks.

        • DaPorkchop_@lemmy.ml · ↑3 · 2 years ago

          That's no different from any "normal" filesystem with a traditional block-level RAID solution.

          • exi@feddit.de · ↑2 ↓1 · 2 years ago

            Not really. You can still layer dm-integrity under a normal RAID and get checksumming on a read-write volume at near-native performance, which is better and faster than using btrfs. (dm-verity only works for read-only volumes.)

            But in any case, I’d recommend just going with zfs because it has all the features and is plenty fast.

            • DaPorkchop_@lemmy.ml · ↑1 · 2 years ago

              ZFS lacks some features that btrfs has, such as creating CoW clones of individual files (rather than having to snapshot a whole subvolume).
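              To illustrate the difference (pool and dataset names are made up): on btrfs a single file can be cloned directly, while on ZFS the closest equivalent operates on the whole dataset:

```shell
# btrfs: clone one file in place, sharing its extents.
cp --reflink=always app.db app.db.backup

# ZFS: snapshot the entire dataset and clone it,
# even if only one file is of interest.
zfs snapshot tank/data@before-upgrade
zfs clone tank/data@before-upgrade tank/data-clone
```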

              Personally, I've been using btrfs on pretty much everything for about two years, ranging from multiple >100TB filesystems spanning 8 spinning-rust drives to individual flash drives, and have had very few issues (compared to my experiences with ext4 on mdadm). Snapshots/reflink copies have made many of my workflows much easier, adding/removing/replacing devices pretty much Just Works™, and the fact that everything is checksummed gives me a peace of mind I didn't know I needed. Sure, ZFS has pretty much the same feature set, but it's not in the mainline kernel and seems to lack some of btrfs' flexibility (from the research I've done in the past, e.g. adding/removing disks to an existing pool is still experimental).

              What I'm really excited for is bcachefs, which takes what I consider the best features of both btrfs and ZFS and then steps them up a notch (e.g. the ability to configure RAID settings and prefer specific drives on a per-file/per-directory level). As soon as it's stable enough to be mainlined, I'll definitely be migrating most of my btrfs filesystems to it.

  • ZephyrXero@lemmy.world · ↑3 · 2 years ago

    I usually just use EXT4, but perhaps you should check out F2FS. It's designed for solid-state storage, whereas most filesystems were created with traditional spinning hard discs in mind.

    • Krik@feddit.de · ↑3 · 2 years ago

      "At the end of the day though, after all of our storage tests conducted on Clear Linux, EXT4 came out to being just 2% faster than F2FS for this particular Intel Xeon Gold 5218 server paired with a Micron 9300 4TB NVMe solid-state drive." (source)

      I'll suggest XFS.

  • Secret300@lemmy.world · ↑3 · 2 years ago

    I like btrfs because of its transparent compression, but I'm pretty sure other filesystems like ZFS have that too.
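    Both do support it, for what it's worth; on btrfs it's a mount option, on ZFS a dataset property (device and dataset names below are examples):

```shell
# btrfs: compress newly written data with zstd (level 3).
mount -o compress=zstd:3 /dev/nvme0n1p2 /mnt

# ZFS: enable zstd compression for a dataset.
zfs set compression=zstd tank/home
```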

  • donut4ever@lemmy.world · ↑2 · 2 years ago

    I have two drives in my machine, an NVMe and a SATA. The NVMe holds my root partition and is formatted btrfs, because I love snapshots; they are just a must for me. The SATA holds my home partition and is on ext4. Ext4 is tried and true, and I don't want to risk losing my personal files.
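    A btrfs snapshot is a single command per subvolume, for example (paths are illustrative):

```shell
# Read-only snapshot of the root subvolume, e.g. before an update.
btrfs subvolume snapshot -r / "/.snapshots/root-$(date +%F)"

# List subvolumes (snapshots included) to confirm.
btrfs subvolume list /
```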

  • Felix@feddit.de · ↑1 · 2 years ago

    If you really care about high performance on an SSD, use f2fs. This filesystem was made specifically for SSDs. ext4 and XFS are also really solid and shouldn't be that much slower, but if you do care about squeezing out every last bit of performance, f2fs is definitely worth trying.

    It's a bit more experimental, yet I've been daily-driving it for maybe a year or so at this point and it has never caused trouble once, even with cutting-edge mount options.
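    For anyone curious, creating and mounting an f2fs filesystem might look like this (the device name is an example; the compression-related options are among the newer ones alluded to above):

```shell
# Format with extended attributes, checksums, and compression
# support enabled.
mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression /dev/nvme0n1p2

# Mount with zstd compression plus the newer atgc/gc_merge options.
mount -t f2fs -o compress_algorithm=zstd,atgc,gc_merge /dev/nvme0n1p2 /mnt
```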

  • samsy@feddit.de · ↑1 ↓1 · 2 years ago

    For what? A client on a laptop or PC? Why not f2fs? On a server, just trust good ol' ext4 with some flash-drive settings.
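    Those flash-drive settings usually amount to skipping access-time writes and enabling periodic TRIM, e.g.:

```shell
# Example /etc/fstab line: noatime avoids a metadata write on
# every file read.
#   /dev/nvme0n1p2  /  ext4  defaults,noatime  0 1

# Periodic TRIM, generally preferred over the 'discard' mount option.
systemctl enable fstrim.timer
```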

      • samsy@feddit.de · ↑1 · 2 years ago

        My current setup is Fedora, for the last 6 months. I started a live session, installed the f2fs tools, and then ran the installer with a combination of f2fs + encryption. It runs flawlessly, and faster than any setup before.