Just because it has a CVE number doesn’t mean it’s exploitable. Of the 800 CVEs, which ones are in the KEV catalogue? What are the attack vectors? What mitigations are available?
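That KEV triage can even be scripted. A minimal sketch, with made-up filenames and a faked tiny feed so the pipeline is visible end to end; in practice you'd fetch the real CISA JSON feed (e.g. `curl -sO https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json`):

```shell
# Fake a tiny KEV feed and a CVE list (stand-ins for the real data)
cat > kev.json <<'EOF'
{"vulnerabilities":[{"cveID":"CVE-2024-1086"},{"cveID":"CVE-2021-44228"}]}
EOF
printf 'CVE-2024-1086\nCVE-2023-0001\n' > our_cves.txt

# Pull the CVE IDs out of the KEV JSON (crude, jq-free)
grep -o 'CVE-[0-9]\{4\}-[0-9]*' kev.json | sort -u > kev_ids.txt

# Intersect: which of our CVEs are known to be exploited in the wild?
grep -Fxf kev_ids.txt our_cves.txt
# prints: CVE-2024-1086
```

The ones that fall out of that intersection are the ones worth triaging first; the rest get the usual "update when convenient" treatment.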
The idea that it is somehow possible to determine that for each and every bug is a fantasy held by people who don’t like updating to the latest version.
The fact that you think it’s not possible means that you’re not familiar with CVSS scores, which every CVE includes and which are widely used in regulated fields.
And if you think that always updating to the latest version keeps you safe then you’ve forgotten about the recent xz backdoor.
I am familiar with CVSS and its upsides and downsides. I am talking about the amount of resources required to determine that kind of information for every single bug, resources that far exceed the resources required to fix the bug.
New bugs are introduced in backports as well; think of the Debian OpenSSL issue where generated keys were predictable for years because of a distro-specific patch. The idea that any version, whether the one you have been using, the latest one, or a backported one, will not gain new exploits or new known bugs is not something that holds up in practice.
I don’t know where you got the idea that I’m arguing that old versions don’t get new vulnerabilities. I’m saying that just because a CVE exists it does not necessarily make a system immediately vulnerable, because many CVEs rely on theoretical scenarios or specific attack vectors that are not exploitable in a hardened system or that have limited impact.
And I am saying that the information you are referring to is unknown for any given CVE unless it is unlocked by some investment of effort, which usually far exceeds the effort to actually fix it. We already don’t have enough resources to fix all the bugs, much less assess the impact of every one.
Assessing the impact, on the other hand, is an activity that is only really useful for two things:
- a risk / impact assessment of an update to decide if you want to update or not
- determining if you were theoretically vulnerable in the past
You could add prioritizing fixes to that list, but then, as mentioned, impact assessments are usually more work than the actual fixes, and spending more effort on prioritizing than on fixing makes no sense.
If I had a dollar for every BS CVE submitted by security hopefuls trying to pad their resumes…
Great reason to push more code out of the kernel and into user land
Is it HURD’n’ time?
I dunno, Stallman, it’s been 30 years, you got something for us?
I’d just like to interject for a moment. What you’re referring to as Linux, is in fact, GNU/Linux, or as I’ve recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
I think we should just resurrect Plan 9 instead.
Plan 9 is also monolithic, according to Wikipedia. For BSD it depends.
I mean, you’re right but I still want to see a modernized plan 9, I just think it would be neat.
that would be Inferno
Latest release was 9 years ago, not exactly what I’m looking for. 9front is probably closer to what I want than Inferno.
Ah shit MIT license
Is that bad?
It means anyone, including Microsoft or Apple, can use the code contribution or take the entire software, make some modifications, and sell it as proprietary. Any optimisations or features made by the community can be proprietarised.
Interesting, but why implement yet another windowing system?
L4. HURD never panned out, and L4 is where the microkernel research settled: memory protection, scheduling, and IPC in the kernel, the rest outside, plus some important insights as to the APIs to do that with. In particular, the IPC mechanism is opaque: the kernel doesn’t actually read the messages, which was the main innovation over Mach.
Literally billions of devices run OKL4, and seL4 systems are also in mass production. Think baseband processors, automotive, that kind of stuff.
The kernel being watertight doesn’t mean that your system is, though; you generally don’t need kernel privileges to exfiltrate data or generally mess around, root suffices.
If you want to see this happening – I guess port AMDGPU to an L4?
seL4 is the world’s only hypervisor with a sound worst-case execution-time (WCET) analysis, and as such the only one that can give you actual real-time guarantees, no matter what others may be claiming. (If someone else tells you they can make such guarantees, ask them to make them in public so Gernot can call out their bullshit.)
That bit on their FAQ is amusing.
eBPF is looking great.
So what you are saying is “Mach was right”?
Everybody knows it was. Even Linus said a microkernel architecture was better. He just wanted something working “now” for his hobby project, and microkernel research was still ongoing then.
Best way I found is running this command:
rm -rf /
Then do a reboot just to be sure.
Good luck compromising my system after that.
FYI This is a joke Don’t actually run this command :)
sudo apt-get remove systemd (don’t actually run this)
I ran it and followed the documentation to install Void Linux and now it runs so much smoother!
Jokes on you I use a mac
It won’t work without
--no-preserve-root
good thing that command won’t do anything anymore
That’s a crazy “if”
“if” gcc had a Ken Thompson hack how do you secure checks notes anything
I’m genuinely worried sometimes that a Ken hack has been introduced. I don’t know by who, but possibly some government agency. Then again, we also have a Minix system built into the CPU doing god knows what and we just accept that.
We do?
Ooh that’s not creepy at all. Also, damn you BSD license.
That is actually a perfectly reasonable assumption to make in the absence of the resources to determine the opposite, which would probably be many times the resources needed to actually fix the bug.
There are lots of things the kernel controls that can have non-security-related bugs, e.g. a controller with the wrong mapping: https://github.com/torvalds/linux/commit/9131f8cc2b4eaf7c08d402243429e0bfba9aa0d6
It’s a wild assumption to claim “All bugs in the Linux kernel are security issues” without any backing. Whoever is making that claim needs to provide evidence, since the default position for any program is that there are bugs that are not security issues.
defend one out-there assumption with another, i guess.
who can tell if SideWinder force feedback (11684) is a security bug or just one that affects people using old joysticks. better treat it with all the seriousness of xz just to be sure!
Article for the sake of having an article.
Step one: stop listening to anything from Ziff-Davis.
I mean, this isn’t any different for Windows or macOS. The difference is the culture around the kernel.
With Linux there are easily orders of magnitude more eyeballs on it than the others combined. And fixes are something anyone with a desire to do so can apply. You don’t have to wait for a fix to be packaged and delivered.
Security is not a binary variable, but managed in terms of risk. Update your stuff, don’t expose it to the open Internet if it doesn’t need it, and so on. If it’s a server, it should probably have unattended upgrades.
If it’s a server, it should probably have unattended upgrades.
Interesting opinion, I’ve always heard that unattended upgrades were a terrible option for servers because it might randomly break your system or reboot when an important service is running.
There are two schools of thought here: the “never risk anything that could potentially break something” school and the “make stuff robust enough that it will deal with broken states” school. Usually the former doesn’t work so well once something actually breaks.
That only applies to unstable distros. Stable distros, like Debian, maintain their own versions of packages.
Debian in particular only includes security patches and targeted fixes in their packages - no new features at all.* This means the risk of breakage and incompatibility is very low, basically nil.
*except for certain packages which aren’t viable to maintain that way, like Firefox or other browsers.
Both my Debian 12 servers run with unattended upgrades. I’ve never had anything break from the changes in packages, I think. I tend to use docker and on one even lxc containers (proxmox), but the lxc containers also have unattended upgrades running.
Do you just update your stuff manually, or do you not update at all? I’m subscribed to the Debian security mailing list, and they frequently find something that means people should upgrade, recently something in glibc.
Debian especially is focused on being very stable, so updating should never break anything that wasn’t broken before. Sometimes docker containers don’t like to restart so they refuse, but then I did something stupid.
I used to check the cockpit web interface every once in a while, but I’ve tried to enable unattended updates today. It doesn’t actually seem to work, but I planned on switching to Nix anyway.
I don’t use Cockpit, I just followed the Debian wiki guide to enabling unattended upgrades. As far as I remember, you have to apt install something and change a few lines in the config file.
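For reference, a minimal sketch of that setup from memory of the Debian wiki (check the wiki for the authoritative steps; the config values shown are examples, not requirements):

```shell
# Install the tool
sudo apt install unattended-upgrades

# Enable the periodic runs (writes /etc/apt/apt.conf.d/20auto-upgrades)
sudo dpkg-reconfigure -plow unattended-upgrades

# Policy (which origins, mail on errors, auto-reboot) lives in
# /etc/apt/apt.conf.d/50unattended-upgrades, e.g.:
#   Unattended-Upgrade::Mail "root";
#   Unattended-Upgrade::Automatic-Reboot "false";
```

By default only the security origin is pulled in on Debian, which matches the "stable distros only ship fixes" point above.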
It’s also good to have SMTP set up, so your server will notify you when something happens; you can configure what exactly.
Not having automated updates can quickly lead to not doing updates at all. Same goes for backups.
Whenever possible, one should automate tedious stuff.
Thanks for the reminder to check my backups
pacman -Syu
Rhetorical question?
Install all the patches immediately.
Crontab dnf update -y and trust that if anything breaks, uptime monitoring / someone will let me know sooner or later.
Don’t use cron for that. Use the package manager’s auto-update utility. Plus, if you use the proper tools, you can set it to security updates only.
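Concretely, on a dnf-based system that utility is dnf-automatic; a sketch from memory (verify the option names against your distro’s docs):

```shell
# Install the auto-update tool instead of cron'ing raw dnf
sudo dnf install dnf-automatic

# In /etc/dnf/automatic.conf set, e.g.:
#   upgrade_type = security   # only pull security updates
#   apply_updates = yes       # actually install, not just download

# Let systemd handle the scheduling
sudo systemctl enable --now dnf-automatic.timer
```

Unlike a cron job, this logs through the journal and won’t pile up overlapping runs.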
Air gap.
Honestly, it is a valid option for critical systems. It is a bad idea to connect water treatment plants to the internet, for example.
Some air gaps better than others
My airgap stinks.
Brush your teeth
the other one
( ))💨
The penguin pic is so cute