• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: June 5th, 2023


  • oranki@sopuli.xyz to Selfhosted@lemmy.world · Why docker · 7 points · 1 year ago

    Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

    Docker is not the only, or even the best, way IMO to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. I already do that for quite a few.

    The mess is only a mess if you don’t really understand what you’re doing; the same goes for traditional services.



  • There was a good blog post about the real cost of storage, but I can’t find it now.

    The gist was that to store 1TB of data somewhat reliably, you probably need at least:

    • mirrored main storage 2TB
    • frequent/local backup space, also on at least mirrored disks: 2TB, plus more if using a versioned backup system
    • remote / cold storage backup space about the same as the frequent backups

    That amounts to something like 6TB of disk for 1TB of actual data. In real life you’d probably use some other RAID level, at least for larger amounts, so it’s perhaps not as harsh, and compression can reduce the required backup space too.

    I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; with the ZFS snapshots that aren’t yet pruned, that’s maybe 200G of raw disk space. So 130G becomes roughly 510G in my setup.
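
    A rough sketch of that math in Python (the mirror factor and backup sizes are just the assumptions from the list above, not measurements):

    ```python
    # Back-of-the-envelope multiplier for raw disk vs. actual data,
    # using the assumptions listed above.
    def raw_disk_needed(data_tb: float, mirror_factor: int = 2) -> float:
        main = data_tb * mirror_factor           # mirrored main storage
        local_backup = data_tb * mirror_factor   # frequent/local backups, also mirrored
        remote_backup = local_backup             # cold / off-site copy, about the same size
        return main + local_backup + remote_backup

    print(raw_disk_needed(1.0))  # -> 6.0 TB of raw disk per 1 TB of data

    # My own numbers: 130G in Nextcloud, ~180G off-site borg repo,
    # ~200G of local mirrored disk including unpruned ZFS snapshots.
    print(130 + 180 + 200)       # -> 510 G
    ```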



  • I used to run everything with Pis, but then got an x86 USFF to improve Nextcloud performance.

    With the energy price madness last year in Europe, I moved most things to cloud VPSs.

    One Pi is still running Home Assistant, hooked to my heating/ventilation unit via RS485/modbus.

    I had a ZFS backup server with 2 HDDs hooked up over USB to a Pi 8GB. That is just way too unreliable for anything serious; I think I now have a lot of corrupted files in the backups. I’m looking into getting a Synology unit for that.

    For anything serious that requires file storage, I’d steer clear of USB or SD cards. After getting used to SATA performance, it’s hard to go back anyway. I’d really like to use the Pis, but family photo backups turning gray due to bitflips is unacceptable.

    They are a great entry point to self-hosting and the Linux world though!




  • In my limited experience, when Podman seems more complicated than Docker, it’s because the Docker daemon runs as root and can, by default, do things Podman can’t without being explicitly given permission to do so.

    99% of the stuff self-hosters run on regular rootful Docker can run with no issues using rootless Podman.

    Rootless Docker is an option, but my understanding is that most people don’t bother with it, whereas with Podman rootless is the default.

    Docker is good, Podman is good. It’s like comparing distros, different tools for roughly the same job.

    Pods are a really powerful feature though.
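
    As a minimal sketch of how interchangeable they are (assuming Podman is installed; the image, container name, volume and port are placeholders), the same kind of container people run with rootful Docker starts fine rootless:

    ```python
    import subprocess

    # Minimal sketch: start a web-server container as a regular user with
    # rootless Podman. Names, image and ports here are just examples.
    subprocess.run(
        [
            "podman", "run", "-d", "--rm",
            "--name", "demo-nginx",                    # hypothetical container name
            "-p", "8080:80",                           # high host port: rootless can't bind <1024 by default
            "-v", "demo-data:/usr/share/nginx/html",   # named volume keeps persistent data separate
            "docker.io/library/nginx:alpine",
        ],
        check=True,
    )
    ```

    Swap the binary name for docker and the same invocation works there too.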





  • For a bit of enhanced log file viewing, you could use something like lnav; I think it’s packaged for most distributions.

    Cockpit can be useful for journald, but personally I think GUI stuff is a bit clunky for logs.

    Grep, awk and sed are powerful tools, even with only basic knowledge of them. Vim in readonly mode is actually quite effective for single files too.

    For aggregating multiple servers’ logs, good ol’ rsyslog works well, but it’s not simple to set up. There are tutorials online.
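
    As an example of the kind of quick filtering that basic grep/awk knowledge gets you, here’s a rough sketch (the unit name is just a placeholder) that tallies error- and warning-ish lines in today’s journal for one service:

    ```python
    import collections
    import re
    import subprocess

    # Rough sketch: pull today's journal for one unit and count messages
    # that look like warnings/errors. "nginx.service" is a placeholder.
    out = subprocess.run(
        ["journalctl", "-u", "nginx.service", "--since", "today", "-o", "short"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = collections.Counter()
    for line in out.splitlines():
        match = re.search(r"\b(error|warn|fail)\w*", line, re.IGNORECASE)
        if match:
            counts[match.group(1).lower()] += 1

    print(counts)
    ```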


  • Oh, the times when getting GTA from a friend required 30+ 3½" floppy disks, IIRC. That, plus making 5 or 6 round trips to the friend’s house, because one of the disks almost always got corrupted during the zip process.

    And since no one had the disk space or know-how to store the zip archives on an HDD for the inevitable re-copying, we had to redo the whole pack from scratch each time.

    Edit: disk->HDD