Which folders and files do I need to exclude from TimeShift?

Also, is there a way to exclude programs installed as .deb packages?

I'm doing this to reduce the backup size, as I have limited storage.

100GB - Windows 11
400GB - Storage
400GB - Mint
100GB - TimeShift
  • gpstarman@lemmy.today (OP) · 9 months ago

    If you only want backups of your files

    Actually, I want to back up my system. As I'm new to Linux, there is a chance that I will break it, and setting up my system from the ground up again because of some silly mistake would be tiresome. Also, I have a separate storage drive for personal files.

    BTRFS.

    I'm afraid about the compatibility of BTRFS (not that a system drive needs much compatibility). Does it have community support as good as ext4's?

    It’s not very friendly when it’s almost full

    ?

    • tron@midwest.social · 9 months ago

      Snapshotting is a feature of BTRFS file systems; Timeshift just manages those snapshots, so a BTRFS file system is required.

    • Skull giver@popplesburger.hilciferous.nl · 9 months ago

      Actually, I want to back up my system. As I'm new to Linux, there is a chance that I will break it.

      Then you will need to include most, if not all, system packages. You can exclude your home folder if you want.

      Note that there are big differences between how Timeshift works on btrfs and on anything else. BTRFS snapshots rely on something called "copy on write": the system can make a "copy" of a file that references the same data as the original, and from that moment on only the changes are tracked. That way, you can instantly create snapshots of entire drives and keep dozens of snapshots without risking running out of storage.

      I've got my system set up to make a snapshot every time software updates are installed, and the biggest snapshot I've seen was about 45GB (it was a snapshot from before upgrading Ubuntu 22.04 to 24.04 and included a few virtual machine images and ML models). My daily apt upgrades generate snapshots of about 100MB, but I can still roll back the moment I need to. I can even select an older snapshot in my Grub boot menu in case my bootloader dies.

      The rsync-based approach used on ext4 and other filesystems isn't that flexible. You'll get snapshots, but each file needs to be processed individually. This means you need to make sure to close down programs before making snapshots, or you'll risk corruption. The disk space savings are also more limited: rsync mode still has almost zero overhead for files that haven't changed between two snapshots (they're stored as hard links), but if you change a single bit in the middle of a 100MB file, rsync will create a fresh copy, whereas btrfs would only store an extra extent (usually 4KB, the smallest unit the filesystem lets you control).

      Restoring snapshots using rsync requires copying/linking files back and can't be done transparently. This means you can't boot a previous version of the system like on btrfs; you'll need to boot into recovery mode, restore the snapshot manually, and reboot.

      I'm afraid about the compatibility of BTRFS (not that a system drive needs much compatibility). Does it have community support as good as ext4's?

      Support is mature enough that it'll Just Work in most cases, with some edge cases: disk usage statistics aren't always accurate, because tools don't realise files can be partially deduplicated or compressed, let alone take subvolume quotas into account. In general, the more fancy features you enable, the more likely older software is to act a bit funny.

      I believe Fedora has picked it as the default for a few years now. Haven’t checked, though.

      ?

      Tools like df and other file system monitors only report very rudimentary disk usage statistics (even some ext4 features can throw them off). There are a bunch of ways in which btrfs can seem full but still have plenty of space, or seem to have gigabytes left while the volume is so unbalanced that that space isn't actually usable. There are also about three kinds of data store that can technically fill up (though you rarely run into that). Usually, you can claim back a chunk of storage after a year or two by rebalancing and defragmenting the file system, especially if you use advanced features like snapshotting. I believe there are tools to automatically run these operations as well.

      These issues mostly seem to come up when you've nearly filled the file system to the brim. Keeping 10% free seems to prevent most of them from ever popping up.

      There are a few other edge cases as well (like how you should disable compression on swap files and virtual machine images, and how you may want to disable copy on write and snapshots on them as well for performance reasons) but they’re more niche. There are a bunch of really cool features as well (like how you can send a snapshot over the network in the form of only the difference since the version before, making network disk backups tiny). You’re not that likely to run into any of them, but the project website has everything documented if you’re interested.