Get a USB-C DAS (enclosure) for your disks; those use their own power supply. Since it's USB-C, performance will be very good and stable, and you'll be happy with it.
Well… if you're running a modern version of Proxmox then you're already running LXC containers, so why not move to Incus, which is made by the same people?
Proxmox (…) They start off with stock Debian and work up from there which is the way many distros work.
Proxmox has been using Ubuntu’s kernel for a while now.
Now, if Proxmox becomes toxic
Proxmox is already toxic: it requires a paid license for the stable version and updates. Furthermore, the Proxmox guys have been found to withhold important security updates from non-stable (non-paying) users for weeks.
My little company has a lot of VMware customers and I am rather busy moving them over. I picked Proxmox (Hyper-V? No thanks) about 18 months ago when the Broadcom thing came about and did my own home system first and then rather a lot of testing.
If you're expecting the same type of reliability you've had from VMware, you're going to have a very hard time on Proxmox soon. I hope not, but I also know how Proxmox works.
I ran Proxmox from 2009 until very recently, professionally, in datacenters, with multiple clusters of around 10-15 nodes each, which means I've been around for all of Proxmox's wins and fails. I saw the rise and fall of OpenVZ, the subsequent and painful move to LXC, and the SLES/RHEL compatibility issues.
While Proxmox works most of the time and their paid support is decent, I would never recommend it to anyone since Incus became a thing. The Proxmox PVE kernel has a lot of quirks. For starters, it is built upon Ubuntu's kernel – which is already a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations – and on top of that it is typically an older version, mangled and twisted by the extra-feature garbage added on top.
I got burned countless times by Proxmox's kernel: broken drivers, waiting months for fixes already available upstream or for them to fix their own bugs. As practical examples: at some point OpenVPN was broken under Proxmox's kernel, Realtek networking has probably been broken for more time than it has worked, and ZFS support was introduced only to bring kernel panics. Upgrading Proxmox is always a shot in the dark; half of the time you get a half-broken system that is able to boot and pass a few tests but will randomly fail a few days later.
Proxmox's startup is slow, slower than any other solution – it even includes management daemons that are there just to ensure that other daemons are running. Most of the built-in daemons are so poorly written and tightly tied together that they don't even start properly with the system on the first try.
Why keep dragging along all of the Proxmox overhead and potential issues when you can run a clean shop with Incus, which is actually made by the same people who make LXC?
You may not want to depend on those cloud services, and if you need something that isn't static, that doesn't cut it.
Why only email? Why not also a website? :)
“self-hosting both private stuff, like a NAS and also some other is public like websites and whatnot”
Some people do it, and to be fair, a website is way simpler and less prone to issues than mail.
If you did you would know I wasn’t looking for advice. You also knew that exposing stuff publicly was a prerequisite.
Your billion dollar corporations aren’t running dedicated hardware
You said it, some banks are billion dollar corporations :)
Proxmox will not switch to Incus, they like their epic pile of hacks. However, you can switch to Debian + Incus and avoid that garbage altogether.
That's a good setup with multiple IPs, but you still have a single firewall that might be compromised somehow if someone gets access to the “public” machine. :)
You're mostly on scenario 2.B, same as me. That's the most flexible yet secure design.
Wow, hold your horses, Edward Snowden!… But at the end of the day Qubes is just a Xen hypervisor with a cool UI.
What you’re describing is scenario 2.
Sorry, I misread your first comment. I was thinking you said “VPS”. :)
because you want to learn them or just think they’re neat, then please do! I suspect a lot of people with these types of home setups are doing it mostly for that reason
That’s an interesting take.
Are you sure? A big bank usually does… It’s very common to see groups of physical machines + public cloud services that are more strictly controlled than others and serve different purposes. One group might be public apps, another internal apps and another HVDs (virtual desktops) for the employees.
Kinda Scenario 1 is the standard way: firewall at the perimeter with separately isolated networks for DMZ, LAN & Wifi
What you're describing is close to scenario 1, but not purely scenario 1. It is a mix of public and private traffic on a single IP address and a single firewall, which a lot of people use because they can't have two separate public IP addresses running side by side on their connection.
The advantage of that setup is that it greatly reduces the attack surface by NOT exposing your home network's public IP to whatever you're hosting and by not relying on the same firewall for both. Even if your entire hosting stack gets hacked, there's no way the hacker can get into your home network because they're two separate networks.
Scenario 1 describes having two public IPs: a switch right after the ISP ONT, with one cable going to the home firewall/router and another to the server (or to another router/firewall). Much more isolated. It isn't a simple DMZ; it's literally the same as having two different internet connections, one for each thing.
If you're using a VPS from Amazon, Digital Ocean or whatever, you're by definition not self-hosting. You're still dependent on some cloud company, so it's not self-hosting in a pure sense… (misread comment).
~~Is that still… self-hosting? In that case you would be hosting in a cloud company so… ~~
misread comment.
I'm curious: are there documented attacks that could've been prevented by this?
From my understanding, CPU pinning shouldn't be used that much; the host scheduler is aware that your VM threads are linked and will schedule child threads together. If you pin cores to VMs, you block the host scheduler from making smart choices about scheduling. This is mostly only an issue if your CPU is under constraint, i.e. it's being asked to perform more work than it can handle at once. Pinning is not dedication either: the host scheduler will still schedule non-VM work onto your pinned cores.
I'm under the impression that CPU pinning is an old approach from a time when CPU schedulers were less sophisticated and did not handle VM threads in a smart manner. That is not the case anymore, and pinning might even have a negative performance impact.
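For illustration, here's a minimal Python sketch of the Linux mechanism that hypervisor pinning options ultimately boil down to (the sched_setaffinity syscall). The PID and core numbers are made up for the example; Proxmox/libvirt set the same kind of mask on their vCPU threads through their own config options rather than this API.

```python
# Minimal sketch (Linux only) of the affinity mechanism behind "CPU pinning".
# The PID and core numbers are hypothetical; a hypervisor applies the same
# kind of mask to its vCPU threads via its own configuration.
import os

pid = 12345  # hypothetical PID of a vCPU thread (e.g. one QEMU worker thread)

# Before pinning, the scheduler may place the thread on any online core.
print("allowed cores before:", os.sched_getaffinity(pid))

# "Pinning" only shrinks this thread's allowed-core mask to cores 2 and 3.
# It does NOT reserve those cores: the scheduler can still put other,
# non-VM work on them, which is exactly the point made above.
os.sched_setaffinity(pid, {2, 3})
print("allowed cores after:", os.sched_getaffinity(pid))
```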
If there’s an exploit found that makes that setup inherently vulnerable then a lot of people would be way more screwed than I would.
Fair enough ahah
Well, this solves nothing. I don't really know what's going on with Thunderbird, but it is looking like a piece of crap; the latest UI changes made it worse, only a few months after the previous revision that was actually much more visually pleasing. Is it that hard to look at what others do instead of adding random boxes everywhere?
Anyways, the worst part is that right now Thunderbird wastes more RAM than RoundCube running inside a browser with the Calendars and Contacts plugins. Makes no sense.