Friendly Ace Lobster 💜

  • 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • Fedora is not enterprise grade. That would be RHEL. And “enterprise grade” mostly just means stable (some would say stale) packages anyway if you don’t pay for support.

    Installing nvidia drivers on Fedora Workstation is as easy as enabling RPM Fusion non-free and then installing a few packages (see the commands at the end of this comment). The issue here comes from OP running an OCI-based immutable system, which makes layering stuff on top a bit more difficult.

    OP’s already running something Fedora-based, might as well stay where they feel comfortable and just add a few drivers and gaming tweaks on top.

    Nothing against openSUSE though. I’m currently running Aeon because their approach to immutability is more modular than Fedora’s.
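
    To make that concrete, the regular (non-immutable) Fedora Workstation path looks like this. A minimal sketch adapted from RPM Fusion’s own howto; double-check the package names against their current docs:

      # enable the free and non-free RPM Fusion repos for your release
      sudo dnf install \
        https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
        https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

      # the nvidia kernel module, rebuilt automatically on kernel updates
      sudo dnf install akmod-nvidia

      # optional: CUDA / nvenc / nvdec support
      sudo dnf install xorg-x11-drv-nvidia-cuda

    On the immutable variants you’d layer the same packages with rpm-ostree install instead, plus a reboot, which is where the extra friction comes from.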


  • If you find yourself wanting to game on your distro again, layering nvidia drivers on top of immutable Fedora is doable. If you want a more hands-off approach, you can use Bazzite (https://bazzite.gg/), which has an nvidia-compatible version and is just a Kinoite-based OCI image with gaming-oriented tweaks and extra apps.

    You can even just rebase to it if you’re already using Kinoite (and rebase back to Kinoite if you don’t like it), no need to reinstall your system. The download page has a one-command example of how to do that.
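
    A sketch of what that looks like; the exact image name and tag come from their download page (bazzite-nvidia:stable below is from memory, so verify it there first):

      # from Kinoite, point the whole OS at Bazzite's nvidia build
      rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite-nvidia:stable
      systemctl reboot

      # changed your mind? rebase back to stock Kinoite
      # (replace 38 with your Fedora release)
      rpm-ostree rebase fedora:fedora/38/x86_64/kinoite
      systemctl reboot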


  • I’ll confess that I only tried GPT-3.5 (and the Mistral one, but it was actually consistently worse), given that there’s no way in the world I’m actually giving OpenAI any money.

    Having said that, I don’t think it fundamentally changes the way it works. Basically, I think it’s fine as some sort of interactive man/Stack Overflow parser. It can reduce the friction of having to read the man pages yourself, but I do think it could do a lot better for new-user onboarding, which you seem to suggest in the comments is one of its useful aspects.

    Basically, it should drop the whole “intelligent expert” thing and just tell you straight away where it got the info from (and actually link the bloody man pages. At the end of the day, the goal is still for you to be able to maintain your own effing system). It should also learn to tell you when it actually doesn’t know, instead of inventing some plausible answer out of nowhere (but I guess that’s a consequence of how those models work, being optimized for plausibility rather than correctness).

    As for the quality of the answers: it’s usually good enough to save you from googling simple one-liners. For scripts, it actually shat the bed every single time I tried it. In some instances it gave me 3 ways to do slightly different things all in the same loop, in others straight-up conflicting code blocks. Maybe that part is better in GPT-4, I don’t know.

    It also gives you outdated answers without specifying the version of the packages it targets, which can be really problematic.

    Basically, where I’m going with this is that if you’re coding, or maintaining any server at all, you really should learn how to track the state of your infra (including package versions) and read man pages anyway (a few example commands at the end of this comment). If you’re just a user, nowadays you don’t really have to get your hands dirty in the terminal.

    At the end of the day, it can be useful as some sort of interactive meta search engine that you have to double check.

    I’m really not getting into the whole “automated garbage that’s filling up the internet, including bug reports and pull requests” debate. But I do think that, all things considered, those models are a net negative for the web.
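
    (For the “track the state of your infra” bit above, on Fedora/RHEL-likes the basics are just:)

      # exact installed version of one package
      rpm -q openssl

      # full inventory with versions, diffable between machines or over time
      rpm -qa | sort > packages-$(date +%F).txt

      # what was installed/updated/removed on this box, and when
      dnf history list

      # and the actual source of truth for any tool
      man dnf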


  • It’s the long-term-support version of openSUSE that is binary-compatible with the enterprise version provided by SUSE (SLES). Kind of the same relation as RHEL and CentOS had (before the Stream controversy).

    In layman’s terms: it’s a stable desktop and server Linux distribution. But it’s in a weird spot right now, as openSUSE has stated that this will be the last major version following this release format. The next main openSUSE distro will be something based on modular immutable images.

    Edit: Apparently there will also be a non-immutable version of Leap 16


  • This is one of the worst cases of “tech dude tries to solve social science with math” I’ve ever read. The paper is not just bad as a whole, it deliberately disregards 200 years of research in at least 3 different academic fields and instead quotes Borat.
    It then goes on to gleefully describe how the authors built a giant machine to reproduce their own (dangerous) biases about the universality of emotion-voicing, with just ChatGPT and a zero-shot classifier. Would you look at that? Yay science, I guess?


  • The proposal explicitly goes against “more fingerprinting”, which is maybe the one area where they’re being honest. So I do think it’s not about more data collection, at least not directly. The token is generated locally on the user’s machine, and it’s supposedly the only thing that needs to be shared. So websites do potentially get some info (in effect: that you passed the test used to verify your client), but I don’t think that’s the major point.
    What you’re describing is the status quo today. Websites try to run invasive scripts to get as much info about you as they can, and if you try to derail that, they deem that you aren’t human and throw you a captcha.
    Right now though, you can absolutely configure your browser to lie at every step about who you are (see the curl example at the end of this comment).
    I think the proposal has much less to do with direct data collection (there are better ways to do that) than with control over the content-delivery chain.
    If Google gets its way, it would effectively shift control over how you access the web from you to them. That enables all the stuff people have been talking about in the comments: the end of edge-case browsers and operating systems, the prevention of ad blocking (and with it, indeed, the extension of data collection), the consolidation of Chrome’s dominant position, etc.
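
    To make the “lying client” point concrete: everything a browser tells a server is just bytes your machine chose to send. A trivial sketch (example.com and the integrity header name are invented for illustration):

      # claim to be any browser, attach any token-shaped string you like;
      # today the server has no way to tell, which is what the proposal wants to change
      curl https://example.com/ \
        -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" \
        -H "X-Env-Integrity: any-string-i-like"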


  • As others have pointed out, it goes way beyond ad blocking. It’s a complete reversal of the trust model, and is basically DRM for your OS:
    Right now, websites rightfully assume that clients can’t be trusted. Any security measure happens on the server side, with the rationale that the user has control over the client and you as a dev control the server. If your security is worth two cents, you secure server-side. This change proposes to extend vendor power by defining a set of rules about what vendors deem acceptable as a client app, and enforcing those rules through a token system. It gives way too much power to the vendor, who gets to dictate what you can do on your machine.
    We actually have live experience of how that can go down, with SafetyNet on Android. Instead of doubling down on the biggest security issue there (OEMs that refuse to support their software for more than 1 or 2 years after release, which, quite frankly, should be universally considered unacceptable), Google decided that OEMs should be granted way more trust than the user. Therefore modifying your own OS in any way, even if it’s rife with security flaws to begin with and you’re just trying to fix that, breaks SafetyNet. And if you break SafetyNet, “critical apps” like banking apps stop working altogether.
    The worst part is that there are ways to circumvent SafetyNet breakage, because in the end, if DRM taught us anything, it’s that if you control the client and know your way around, with enough work you can do pretty much anything you want with it. So bad actors are certainly not kept at bay; you just unjustly annoy people with legitimate use cases, or people just experimenting with their hardware, because in the end you consider your users to be at best dumb security flaws, at worst huge cash machines, and often both at the same time.