I’m trying to find a good method for making periodic, incremental backups. I assume the most minimal approach would be to have a cron job run rsync periodically, but I’m curious what other solutions exist.

I’m interested in both command-line and GUI solutions.

  • inex@feddit.de · 2 years ago

    Timeshift is a great tool for creating incremental backups. Basically, it’s a frontend for rsync, and it works great. If needed, you can also use it from the CLI.
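    For the CLI side, Timeshift exposes the same operations as the GUI; a rough sketch of typical commands (snapshot names and comments are just examples):

    ```shell
    # Create a snapshot with a comment.
    sudo timeshift --create --comments "before system upgrade"

    # List existing snapshots.
    sudo timeshift --list

    # Restore interactively from a chosen snapshot.
    sudo timeshift --restore
    ```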

  • mariom@lemmy.world · 2 years ago

    Is it just me, or does the backup topic come up every few days on !linux@lemmy.ml and !selfhosted@lemmy.world?

    To be on topic as well: I use the restic + autorestic combo. Pretty simple. I made a repo with a small script to generate the config for different machines, and that’s it. Backups are stored between machines and on B2.
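    For anyone curious, autorestic is driven by a single YAML file; a minimal sketch might look like the following (the backend names, bucket, and paths are invented for illustration — check the autorestic docs for the exact schema):

    ```yaml
    version: 2
    locations:
      home:
        from: /home/user
        to:
          - local
          - b2
    backends:
      local:
        type: local
        path: /mnt/backup/restic
      b2:
        type: b2
        path: 'myBucket:some/path'
    ```

    With a config like this, `autorestic backup -a` backs up every location to all of its configured backends.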

    • grue@lemmy.ml · 2 years ago

      It hasn’t succeeded in nagging me to properly back up my data yet, so I think it needs to be discussed even more.

      • TheAnonymouseJoker@lemmy.ml · 2 years ago

        I would argue you need to lose your data once before you consider it more important than the many useless things in your life. Most people are like that.

  • PlexSheep@feddit.de · 2 years ago

    I have a bash script that backs up all my stuff to my home server with Borg. My servers have cron jobs that run similar scripts.
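    A Borg-to-home-server script of that kind might look roughly like this; the repo path, passphrase handling, source directories, and retention policy are placeholders, not the commenter’s actual setup:

    ```shell
    #!/bin/bash
    # Hypothetical sketch of a Borg push to a remote repo over SSH.
    export BORG_REPO="ssh://backup@homeserver/./backups/laptop"
    export BORG_PASSPHRASE="$(cat ~/.config/borg/passphrase)"

    # Create a dated archive; Borg deduplicates chunks, so each
    # run only transfers and stores what changed.
    borg create --stats --compression zstd \
        ::'{hostname}-{now:%Y-%m-%d}' \
        ~/Documents ~/Pictures ~/projects

    # Thin out old archives.
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6
    ```

    Dropping something like this into a cron job (or a systemd timer) gives unattended, incremental, encrypted backups.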

  • okda@lemmy.ml · 2 years ago

    Check out Pika Backup. It’s a beautiful frontend for Borg. And Borg is the shit.

  • elscallr@lemmy.world · 2 years ago

    Exactly like you think. A cron job runs a periodic rsync of a handful of directories under /home. My OS is on a different drive that doesn’t get backed up. My configs are in an Ansible repository hosted on my home server and backed up the same way.

  • to_urcite_ty_kokos@lemmy.world · 2 years ago

    Git projects and system configs are on GitHub (see etckeeper); the rest is synced to my self-hosted Nextcloud instance using their desktop client. There, I have periodic backups using Borg for both the files and the Nextcloud database.
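    etckeeper, mentioned above, keeps /etc in a Git repository and auto-commits on package upgrades. The basic flow is roughly as follows (the remote name and URL are placeholders):

    ```shell
    # One-time setup: put /etc under version control.
    sudo etckeeper init
    sudo etckeeper commit "initial commit"

    # Optionally push somewhere offsite.
    cd /etc
    sudo git remote add origin git@github.com:user/etc-backup.git
    sudo git push -u origin master
    ```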

  • HughJanus@lemmy.ml · 2 years ago

    I don’t, really. I don’t have much data that is irreplaceable.

    The files that are irreplaceable get backed up manually to Proton Drive and to my NAS (via SMB).

  • HarriPotero@lemmy.world · 2 years ago

    I rotate between a few computers. Everything is synced between them with syncthing and they all have automatic btrfs snapshots. So I have several physical points to roll back from.
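    Read-only btrfs snapshots like those are also cheap to take by hand; a minimal sketch (the subvolume and snapshot paths are invented for the example):

    ```shell
    # Create a read-only, dated snapshot of the home subvolume.
    sudo btrfs subvolume snapshot -r /home "/.snapshots/home-$(date +%Y-%m-%d)"

    # List subvolumes/snapshots to find a point to roll back to.
    sudo btrfs subvolume list /.snapshots
    ```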

    For a worst-case scenario, everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs and encfs and then rsyncs any updates.

  • KitchenNo2246@lemmy.world · 2 years ago

    All my devices use Syncthing via Tailscale to get my data to my server.

    From there, my server backs up nightly to rsync.net via BorgBackup.

    I then have Zabbix monitoring my backups to make sure a daily is always uploaded.

  • InFerNo@lemmy.ml · 2 years ago

    I use rsync to an external drive, but before that I toyed a bit with pika backup.

    I don’t automate my backup because I physically connect my drive to perform the task.

  • useless@lemmy.ml · 2 years ago

    I use btrbk to send btrfs snapshots to a local NAS. Consistent backups with no downtime. The only annoyance (for me, at least) is that both the send and receive ends must use the same SELinux policy, or labels won’t match.
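    For reference, btrbk is configured through /etc/btrbk/btrbk.conf; a minimal sketch of a snapshot-and-send setup might look like this (the volume, subvolume, and target paths are placeholders — see the btrbk.conf(5) man page for the real schema):

    ```
    snapshot_preserve_min   2d
    snapshot_preserve       14d
    target_preserve_min     no
    target_preserve         20d 10w

    volume /mnt/btr_pool
      snapshot_dir btrbk_snapshots
      subvolume home
        target /mnt/nas/btrbk
    ```

    Running `btrbk run` from a cron job or systemd timer then takes local snapshots and incrementally sends them to the target.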