Do you host all services just from your root account with docker, or do you separate the services between user accounts with rootless docker?

Do you use podman or docker?

It’s easier to just host everything from root with normal docker, but separating services into dedicated user accounts is probably way safer, at least as far as I know. Do you think it’s worth going the extra step, or do you just trust docker and your containers to not get exploited?

Last but not least do you use an automatic update service for your host system and your containers?

  • witten@lemmy.world · 1 year ago

    I use rootless Podman, because security. A container breakout exploit will only impact that one Unix user. Plus no Docker daemon to worry about.
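    A quick way to see that isolation in practice (the image and port here are just examples):

    ```sh
    # run a container as an unprivileged user – no daemon, no root
    podman run -d --rm --name web -p 8080:80 docker.io/library/nginx:1.25

    # compare the in-container user with the host user it maps to
    podman top web user huser
    ```

    Inside the container, processes may think they’re root, but on the host they run as the unprivileged Unix user, which is what limits the blast radius of a breakout.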

    I don’t separate services into separate users, although maybe I should. The main impediment with separation is that you give up the conveniences of container networking / container DNS and have to connect everything on the host instead. I don’t know if that’s even possible (conveniently) with a service like Traefik that’s supposed to introspect running containers. Also, with separation by Unix user, there’s not one convenient place to SSH in and run podman ps or docker ps to see all containers. Maybe not a big deal?
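    Lacking a single podman ps, one workaround might be a small loop over the service accounts (the usernames here are hypothetical):

    ```sh
    # list containers for each rootless service account
    for u in traefik media git; do
      echo "== $u =="
      sudo -iu "$u" podman ps --format '{{.Names}}\t{{.Status}}'
    done
    ```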

    Auto-update of containers: No, I don’t, because updates sometimes break things and I want to be there in case something goes wrong. The one exception is I auto-update the containers I develop myself as the last implicit deployment step of a CI pipeline.

    • oranki@sopuli.xyz · 1 year ago

      +1 for rootless Podman. Kubernetes YAMLs to define pods, which are started/controlled by systemd. SELinux for added security.
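      A minimal version of that setup might look like this; the pod name and image are placeholders:

      ```yaml
      # web.yaml – a plain Kubernetes pod spec that Podman can run directly
      apiVersion: v1
      kind: Pod
      metadata:
        name: web
      spec:
        containers:
          - name: nginx
            image: docker.io/library/nginx:1.25
            ports:
              - containerPort: 80
                hostPort: 8080
      ```

      Started manually with `podman kube play web.yaml`, or handed to systemd via the `podman-kube@.service` template unit so it comes back up on boot.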

      Also +1 for not using auto updates. Using the latest tag has bitten me more times than I can count; now I only use it for testing new stuff. All the important services have at least the major version pinned as the tag.
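      For example, pinning at least the major version in a compose file (the services here are illustrative):

      ```yaml
      services:
        db:
          image: docker.io/library/postgres:16   # tracks 16.x patch releases, never jumps majors
        cache:
          image: docker.io/library/redis:7.2     # pinned to a minor, still gets patch updates
      ```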

  • Arcidias@lemmy.world · 1 year ago

    I keep all my services in one docker-compose yml, and run it from a normal user account added to the docker group.
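    For reference, that non-root setup is just one command (the username is a placeholder; note that membership in the docker group is effectively root-equivalent on the host, since the daemon itself runs as root):

    ```sh
    sudo usermod -aG docker alice   # log out and back in for the group to apply
    ```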

    I am really conscious of what I expose to the internet though, since I already almost had a security incident.

    I used to run ssh on a non-standard port on my machine with password authentication enabled.

    Turns out I didn’t know the sonarr/radarr containers came with default users, and a brute-force attack managed to log in to one of them (or something like that anyway, it’s been a while). Fortunately they have a default shell of /sbin/nologin, so crisis averted there, but it definitely was a big lesson for me.

    Years later, the current setup has only plex, tautulli, and ombi open to the internet; to reach everything else I use tailscale. And of course, only key-based authentication.
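    The key-only SSH part boils down to a couple of sshd_config lines:

    ```
    # /etc/ssh/sshd_config
    PasswordAuthentication no
    PubkeyAuthentication yes
    ```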

    Oh and for updates, I run apt upgrade once in a while on the box (Ubuntu server 18.04 LTS) and for the containers, I use watchtower.
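    A typical Watchtower setup is just one extra service in the compose file; it watches the Docker socket and replaces containers when their images update:

    ```yaml
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
    ```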

  • supersheep@lemmy.world · 1 year ago

    Currently, I’m just using my root account with Docker and update everything manually. I have dockcheck-web installed to check whether any updates are available (https://github.com/Palleri/DCW). From the outside everything is only accessible using Wireguard, and connections have to go through a Caddy proxy in order to reach a container. Curious what other people’s setups are.
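    The Caddy-in-front pattern can be as small as this (the hostname and upstream are placeholders):

    ```
    # Caddyfile – only reachable over the WireGuard interface
    app.internal.example {
        reverse_proxy app:8080
    }
    ```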

  • Kuroshi@lemmy.ramble.moe · 1 year ago

    Kubernetes, but I’m getting a bit tired of dealing with it. I might try using microVMs for what I’m currently using Pods for, and hopefully make the whole system easier to maintain. The overhead of kubernetes is a heck of a lot more than I anticipated; I had to set up a whole second machine for what I used to be able to do on a single one.

  • Ducks@ducks.dev · 1 year ago

    k3s with rancher. I was using k8s before but redid everything. K3s is overkill for what I do and causes millions of headaches, but I enjoy learning through brute force.

    I use k8s at work, so it’s good experience to run my own k3s.

  • poVoq@slrpnk.net · 1 year ago

    Podman managed through Quadlet container files and Systemd. Rootless where easily possible but often that requires a bit more work. Auto updates only when it is unlikely to break.
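    A minimal Quadlet container file for that kind of setup (the path and image are examples):

    ```ini
    # ~/.config/containers/systemd/web.container
    [Container]
    Image=docker.io/library/nginx:1.25
    PublishPort=8080:80
    AutoUpdate=registry

    [Install]
    WantedBy=default.target
    ```

    After a `systemctl --user daemon-reload`, this shows up as a regular `web.service`, and `AutoUpdate=registry` opts it into `podman auto-update` for the selective updating described above.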

  • NewDataEngineer@lemmy.world · 1 year ago

    Rootless docker via Terraform. Can create all my containers with traefik and dashboard configs at the click of a button.
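    A sketch of what that might look like with the kreuzwerker/docker Terraform provider (the socket path, image, and label are assumptions):

    ```hcl
    provider "docker" {
      host = "unix:///run/user/1000/docker.sock"  # rootless Docker socket
    }

    resource "docker_container" "whoami" {
      name  = "whoami"
      image = "traefik/whoami"
      labels {
        label = "traefik.enable"
        value = "true"
      }
    }
    ```

    With labels managed in Terraform, Traefik can introspect and route to the container the same way it would with a compose file.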

  • ShittyKopper [old]@lemmy.w.on-t.work · 1 year ago

    Rootful Podman & podman-compose. Waiting on the version of Podman that supports passt to hit Debian Bookworm or backports to attempt rootless. Deployed with Ansible except a few manual parts like creating the Postgres databases themselves.
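    An Ansible task for that kind of deployment might use the containers.podman collection (the names here are placeholders):

    ```yaml
    - name: Run app container via rootful Podman
      containers.podman.podman_container:
        name: app
        image: docker.io/library/nginx:1.25
        state: started
        ports:
          - "8080:80"
    ```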

    No auto updates or notifications so far, as there seems to be a couple incompatibility issues left with Watchtower & Podman. Although since I switched CrowdSec to monitor journald instead of the Podman socket I don’t really have a reason to keep the daemon running, and I think that’s for the best.

  • easeKItMAn@lemmy.world · 1 year ago

    I’m using network overlays for individual containers and separation.
    Secondly, fail2ban is installed on the host to secure docker services. Ban the FORWARD chains specific to docker instead of the INPUT chains (see “Configure Fail2Ban for a Docker Container” on seifer.guru). Use 2FA for services if available.
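    A jail that bans on Docker’s forwarding path rather than INPUT might look like this (the jail, filter, and log path are placeholders; DOCKER-USER is the chain Docker reserves for user-defined rules on forwarded traffic):

    ```ini
    # /etc/fail2ban/jail.d/myapp.local
    [myapp]
    enabled   = true
    filter    = myapp
    logpath   = /var/log/myapp/access.log
    banaction = iptables-allports[chain="DOCKER-USER"]
    ```

    Rules in INPUT never see this traffic, because packets destined for published container ports are forwarded rather than delivered to the host.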

    Rootless docker has limitations when it comes to port exposing, storage drivers, network overlays etc.

    The host auto-updates security patches but is only rebooted manually.
    Docker containers are updated manually too. I build all containers from file and don’t pull them, because most are modified (plugins, minimized sizes, dedicated user rights, etc.)

  • SheeEttin@lemmy.world · 1 year ago

    I run docker on almalinux on Proxmox. Nothing is exposed to the Internet. Yes, I do automatic updates for everything, but reboots are manual.

  • Tupcakes@lemmy.world · 1 year ago

    Nomad, consul, and gluster. Not as easy as a simple docker compose, but definitely not as annoying as kubernetes.
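    For anyone curious, a minimal Nomad job is pleasantly short compared to the kubernetes equivalent (the names and image are examples):

    ```hcl
    job "web" {
      datacenters = ["dc1"]

      group "web" {
        task "nginx" {
          driver = "docker"
          config {
            image = "nginx:1.25"
          }
        }
      }
    }
    ```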