I know, rite??
Probably because nobody uses RSS. Or websites.
Because the initial startup push is a time-limited effort. Once the company is more established and the risk is lower, why should a founder get to continue reaping outsize rewards off the backs of others’ labor… indefinitely? Surely there comes a point when the initial risk and effort have been fully repaid and the founder has been made whole.
You can do calendar and contacts separately from email. Try Radicale. I’ve been using it for years.
Another container-based alternative in that space is Mailu.
Look, I appreciate you pushing on the UX aspects of the fediverse here. But let me ask you something. What’s your email address? Is it Lost_My_Mind? No? Oh, because it’s got an @whatever.com on the end? Why is that? Why don’t we have one global, centralized namespace for email usernames such that there’s only a single Lost_My_Mind in the whole world?
Dude, do you even email?
This guy Overton windows.
People are also expected to understand the concept of manually picking a brand of toothpaste. My point is that if we can’t even expect a little consumer choice (the same consumer choice we have in the real world), then we deserve all the monopolization and centralization we get.
Also, selecting a Mastodon server isn’t like some scary technical choice. It’s like a vibe check and a signup form.
Dude, if you’re having heart palpitations, go to fucking urgent care. That shit can be lethal. Atrial fibrillation? Atrial flutter? They can cause blood clots which can cause stroke. Urgent care will know what to do, even if that’s just calling a cardiologist elsewhere to look at your EKG or even stuffing you in an ambulance and driving you to an ER.
Don’t want to take medical advice from a rando on the internet? (You shouldn’t!) Then call your goddamned nurse line. They will sort you out and tell you exactly where to go.
Good luck.
I use Ansible to meet this need. Whenever I want to deploy to one or more remote hosts, I run Ansible locally and it connects via SSH to the remote host(s). There, it can run Docker Compose, configure services, lay down files on the host, restart things, etc.
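As a rough sketch of that pattern (the host group, paths, and file names here are all made up, and this assumes the community.docker collection is installed):

```yaml
# deploy.yml — minimal sketch; "homelab", /opt/myapp, and the compose file
# are placeholders for your own inventory and layout
- hosts: homelab
  become: true
  tasks:
    - name: Copy the compose file to the remote host
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /opt/myapp/docker-compose.yml

    - name: Bring the stack up (pulls images as needed)
      community.docker.docker_compose_v2:
        project_src: /opt/myapp
        state: present
```

Then it’s just `ansible-playbook -i inventory deploy.yml` locally and Ansible does all the SSH-ing for you.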
The author’s site links to another site that accepts payment data. And because the author’s site is plain http, a MITM attacker could rewrite those payment links from lulu.com to site-that-actually-steals-your-credit-card.com.
That’s one huge thing https provides over http… assurance of unadulterated content, including links to sites that actually deal in sensitive data.
That’s unfortunate about NPM and Proxy Protocol, because plain ol’ nginx does support it.
I hear you about Traefik… I originally came from nginx-proxy (not to be confused with NPM), and its configuration was pretty clunky, especially with containers, which is how I ended up moving to Traefik… which is not without its own challenges.
Anyway, I hope you find a solution that works for your stack.
I struggled with this same problem for a long time before finding a solution. I really didn’t want to give up and run my reverse proxy (Traefik in my case) on the host, because then I’d lose out on all the automatic container discovery and routing. But I really needed true client IPs to get passed through for downstream service consumption.
So what I ended up doing was installing only HAProxy on the host and configuring it to pass all traffic through to my containerized reverse proxy via Proxy Protocol (which carries the original client IP!) rather than terminating HTTPS itself. Then I configured my reverse proxy to expect (and trust) Proxy Protocol traffic from the host. This allows the reverse proxy to receive original client IPs while still terminating HTTPS. And then it can pass everything to downstream containerized services as needed.
I tried several of the other options mentioned in this thread and never got them working. Proxy Protocol was the only thing that ever did. The main downside is there is another moving part (HAProxy) added to the mix, and it does need to be on the host. But in my case, that’s a small price to pay for working client IPs.
More at: https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address
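If it helps anyone, the host-side HAProxy piece can be as small as something like this. This is a sketch rather than my exact config; 127.0.0.1:8443 stands in for wherever your containerized proxy publishes its HTTPS port:

```
# /etc/haproxy/haproxy.cfg — pure TCP passthrough plus a Proxy Protocol header
defaults
    mode tcp
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend https_in
    bind :443
    default_backend reverse_proxy

backend reverse_proxy
    # send-proxy-v2 prepends the Proxy Protocol v2 header with the real client IP
    server rp 127.0.0.1:8443 send-proxy-v2
```

On the Traefik side you then mark the entrypoint as trusting Proxy Protocol from the host, e.g. the static-config option `entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32` (adjust to however the traffic actually reaches the container).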
I can’t comment on that, but actual Docker Compose (as distinct from Podman Compose) works great with Podman.
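The trick, in case it saves someone a search (assuming a rootless setup; socket paths can vary by distro), is to enable Podman’s Docker-compatible API socket and point Compose at it:

```sh
# Enable Podman's Docker-compatible API socket for your user
systemctl --user enable --now podman.socket

# Point Docker Compose at the Podman socket instead of Docker's
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Plain Docker Compose now talks to Podman
docker compose up -d
```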
Maybe…? I’m not familiar with that router software, but it looks plausible to me…
Since this is on a home network, have you also forwarded port 80 from your router to your machine running certbot?
This is one of the reasons I use the DNS challenge instead… Then you don’t have to route all these Let’s Encrypt challenges into your internal network.
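For illustration (the plugin and credentials file depend entirely on your DNS provider; --dns-cloudflare is just one example):

```sh
# DNS-01 challenge: certbot proves domain ownership by creating a TXT record
# via the provider's API, so nothing needs to hit port 80 on your network
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d sub.example.com
```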
Nope! Borg always requires Borg on the remote side. It’s Borg’s biggest strength and weakness versus competing backup systems IMO. Strength, because it can do pretty smart stuff with its own code running on both sides. Weakness, because it means it doesn’t work natively with cloud object storage like S3. It’s a tradeoff like anything else.
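You can see the SSH-only design right in how remote repos are addressed (borg 1.x syntax; the user, host, and paths below are made up):

```sh
# Both ends need borg installed; the remote side runs "borg serve" over SSH
borg init --encryption=repokey ssh://backupuser@nas.example/./backups/myhost
borg create ssh://backupuser@nas.example/./backups/myhost::{now} /home /etc
```

Notice there’s no s3:// equivalent — hence the object-storage gap.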
I went down this very same twisty road a while back with rootless Podman. I tried several of the solutions you mentioned. None of them worked. The actual working solution I finally settled on was using Proxy Protocol to pass the original client IP from the host into a container. In my particular case, I’m running a very basic HAProxy config on the host that’s talking Proxy Protocol to Traefik running in a container. And it works great; actual client IPs show up in the logs as expected.
In your particular case, you could probably run HAProxy on the host and have that talk Proxy Protocol to Caddy running in a container.
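I haven’t run that exact HAProxy-to-Caddy pairing myself, but recent Caddy versions (2.6+, I believe) ship a proxy_protocol listener wrapper, so the Caddy side would look something like this in the Caddyfile global options (the allow CIDR is a placeholder for wherever the HAProxy traffic enters the container network):

```
{
	servers {
		listener_wrappers {
			# Must come before the tls wrapper so the Proxy Protocol
			# header is consumed before TLS termination
			proxy_protocol {
				allow 10.89.0.0/16
			}
			tls
		}
	}
}
```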