• 3 Posts
  • 148 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • I really like the project and have been happily running it on my home lab for quite a while. But their pricing for enterprise use isn’t exactly cheap either: 510€/socket/year is way more than the previous VMware deal we’re running on. Apparently Broadcom has changed their pricing to per-core, which is just lunacy (it would practically add up to millions per month for our environment), so it’s interesting to see what’s going to happen when our licenses expire.


  • Since you can connect to the internet, you can also reach your router (or at least a router). And when running ping, even if you had overlapping IP addresses you should still get responses from the network.

    So, two things come to mind: either your laptop is running with a different netmask than the other devices, which causes problems, or you’re connected to something other than the local network you think you are. Changes on the DHCP server or misconfigured network settings on the laptop could cause the first issue. The second might be because you’re connected to your phone’s AP, a guest network on one of your own devices, or a neighbor’s wifi by accident (multiple networks with the same SSID around, or something like that).

    Another option might be problems with mesh networking (an issue with ARP tables or something), which could cause symptoms like this. That scenario should get fixed by reconnecting to the network, but I’ve seen firmware bugs which cause errors like this. Have you tried restarting the mesh devices?

    Is it possible that your laptop has enabled very restrictive firewall rules for whatever reason? Check that.

    And then there’s of course the long route. Start by verifying that you actually have the IP address you assume you have (the address itself, subnet, gateway). Then verify that you can connect to your router (open the management portal, ping, ssh, all the things). Assuming you can, check the router interface and verify that your laptop shows up there as a DHCP client/connected device (or whatever term that software uses). Then start pinging other devices on your network, ping your laptop from those devices, and verify that they too have the addresses you assume (netmask/gateway included).

    And so on, one piece at a time. Check only a single thing at a time, so you get a full picture of what’s working and what’s not. From there you can eventually isolate the problem and fix it.
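    For example, on a Linux laptop the first few steps of that could look roughly like this (just a sketch; the addresses are placeholders, use whatever your network actually has):

    ```
    ip addr show             # what address and netmask the laptop actually got
    ip route show            # what the default gateway is
    ping -c 3 192.168.1.1    # replace with your router's address
    ping -c 3 192.168.1.50   # replace with another device on the same network
    ```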




  • That’s better, but you still need a single wire to loop it around, which is normally not accessible. And at least over here the term ‘multimeter’ specifically means one without a clamp, so you’d need to wire the multimeter in series with the load, and that can be very dangerous if you don’t know what you’re doing.

    Also, cheap ones are often not properly insulated nor rated for mains power (regardless of your voltage), so, again, if you don’t know what you are doing DO NOT measure current from a wall outlet with a multimeter.



  • “Enough battery life” is a pretty broad requirement. What are you running from it?

    Most of the ‘big brands’ (Eaton, APC…) work just fine with Linux/open source, but especially low-end consumer models, even from the big players, might not, and not all of them have any kind of port for data transfer at all.
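    Just as a sketch of what the open source side usually looks like: with NUT (Network UPS Tools) a USB-connected unit from one of those brands typically only needs something like this (the name and description here are made up for the example):

    ```
    # /etc/nut/ups.conf
    [homeups]
        driver = usbhid-ups
        port = auto
        desc = "rack UPS"
    ```

    After that, `upsc homeups` should show battery and load status.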

    Personally I’d say that if you’re looking for something smaller than 1000VA, just get a brand-new one. Bigger than that might be worth buying used and just replacing the batteries, but that varies a lot. I’ve got a few dirt-cheap units around which apparently fried their charging circuit when the original battery died, so they’re e-waste now; on the other hand I have a cheap(ish) 1500VA FSP which is running on its 3rd or 4th set of batteries, so there’s no definitive answer on what to get.


  • With Linux the scale alone makes it pretty difficult to maintain any kind of fork. A handful of individuals just can’t compete with a global effort, and it’s pretty well understood that the power Linux has comes from those globally spread devs working towards a common goal. So, should the Linux Foundation cease to exist tomorrow, I’d bet that something similar would rise to take its place.

    For the respect/authority side, I don’t really know. Linux is important enough for governments too, so maybe some entity run by the United Nations or something similar could do it?


  • I’ve worked with both kinds of companies. My current one doesn’t really care about the bus factor, but right now, for me personally, that’s just a bonus, as after every project it becomes even more difficult to onboard someone into my position. And then I’ve worked with companies who actively hire people to improve the bus factor. When done correctly that’s a really, really good thing. And when it’s done badly it just grinds everything down to almost a halt, as people spend their time in nonsensical meetings and writing documentation no one really cares about.

    Balancing that equation is not an easy task, and people who are good at it deserve every penny they’re paid for it. And, again just speaking for myself: if I get run over by a bus tomorrow it’s not my problem anymore, and since the company doesn’t really care about that, I won’t either.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Ubuntu spotted in the latest Mark Rober video

    > Nothing is perfect but “fundamentally broken” is bullshit.

    Compared to how things used to work when Ubuntu came to life, it really is fundamentally broken. I’m not the oldest beard around, but I have personally updated both Debian and Ubuntu from an obsolete release to a current one with very few hiccups along the way. Apt/dpkg is just so good that you could literally bring a decade-old installation up to date with almost no effort. The updates ran whenever I chose them to and didn’t break production servers when unattended upgrades were enabled. This is very much not the case with Ubuntu today.
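    On Debian that process is roughly just this, repeated one release at a time (a simplified sketch, codenames only as an example; the release notes usually list a few extra steps):

    ```
    # point the package sources at the next release, e.g. buster -> bullseye
    sudo sed -i 's/buster/bullseye/g' /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get dist-upgrade
    ```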

    > Hatred for a piece of tech simply because other people said it’s bad, therefore it must be.

    I realize that this isn’t directly because of my comment, but there’s plenty of evidence even in this chain that the problems go way deeper than a few individuals ranting over the net that snap is bad. As I already said, it’s objectively worse than the alternatives we’ve had since the 90’s. And the way Canonical bundles snap with apt breaks the very long tradition where you could rely on the fact that, when running a stable distribution, ‘apt-get dist-upgrade’ wouldn’t break your system. And even if it did, you could always fix it manually and get the thing back up to speed. This isn’t just an old guy ranting about how things were better in the past, as you can still get that very reliable experience today, just not with snapd.

    > Auto updating is not inherently bad.

    I’m not complaining about auto updates. They are very useful and nice to have, even for advanced users. The problem is that even if the snap notification says ‘software updates now’, it often really doesn’t. Restarting the software, and in some cases even running a manual update, still brings up the notification that the very same software I updated a second ago needs to restart again to update. Rinse and repeat, while losing your current session over and over again.

    Also, there’s absolutely no indication of whether anything is actually being done. The notification just nags that I need to stop what I’m doing RIGHT NOW and let the system do whatever it wants instead of the tools I’ve chosen to work for me. I neither want nor need the forced interruptions to my workflow, but when I do have a spare minute to stop working, I expect the update process to actually trigger at that very second and not after some random delay, and I also want a progress bar or something to indicate when things are complete and I can resume doing whatever I had in mind.

    > it just can’t be a problem to postpone snap updates with a simple command.

    But it is. The “<your software> is updating now” message just interrupts pretty much everything I’ve been doing, and at that point there’s no way to stop it. And after the update process has finally finished I pretty much need to reboot to regain control of my system. This is a problem which applies to everybody, regardless of their technical skills.
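    For reference, the ‘simple command’ presumably means something like these on reasonably new snapd versions (and my point is that they don’t help once the notification has already fired):

    ```
    sudo snap refresh --hold=72h firefox    # postpone updates for one snap
    sudo snap refresh --hold                # hold updates for all snaps indefinitely
    sudo snap set system refresh.hold="2025-01-01T00:00:00Z"   # older, timed hold
    ```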

    My computer is a tool, and when I need to actively fight that tool to not interrupt whatever I’m doing it rubs me in a very wrong way. No matter if it’s just browsing the web, writing code for the next best thing ever or watching YouTube, I expect the system to be stable for as long as I want it to be. Then there’s a separate time slot when the system can update and maybe break itself in the process, but I control when that time slot is.

    There’s not a single case I’ve encountered where snap actually solved a problem I had, and there are plenty of times when it was either annoying or straight up caused more problems. Systemd at least has some advantages over SysVinit, but snap doesn’t even have that.

    As mentioned, I’m not the oldest Linux guy around, but I’ve been running Linux for 20+ years, and for ~15 of those it has kept butter on my bread, and snapcraft is easily the most annoying thing I’ve encountered over that period.


  • > You act as if Snap was bad in any way. Proprietary backend does not equal bad.

    I don’t give a rat’s ass if the things I use are proprietary or not. FOSS is obviously nice to have, but if something else does the job better I’m all for it, and I have paid for several pieces of software. But Ubuntu and Snap (which are running on the thing I’m writing this with) are just objectively bad. Software updates are even more aggressive than on Windows today, and even when I try to work with the “<this software> updates in X days, restart now to update” notifications, it just doesn’t do what it says it would/should. And once the package is finally updated the nagging notification returns in a day or two.

    Additionally, snap and/or Ubuntu has bricked at least two of my installations in the last few years, Canonical’s solutions have broken apt/dpkg in a very fundamental way, and it has most definitely caused more issues with my Linux stuff over the years than anything else, systemd included.

    Trying to twist that into an elitist FOSS point of view (of which there are plenty, obviously) is misleading and just straight up false. Snapcraft and its implementation are broken on so many levels, and it has pushed me away from Ubuntu (and derivatives). Way back when Ubuntu started to gain traction it was a really welcome distribution and I was a happy user for at least a decade, but as things are now it’s either Debian (mostly for servers) or Mint (on desktops) for me. Whenever I have the choice I won’t even consider Ubuntu as an option, both commercially at work and for my personal things.


  • I did quickly check the files in update.zip, and it looks like they’re tarballs embedded in a shell script, plus image files containing pretty much the whole operating system of the thing.

    You can extract those even without a VM, do whatever you want with the files and package them back up. So you can override version checks and you can inject init.d scripts, binaries and pretty much anything onto the device, including changing passwords in /etc/shadow and so on.
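    As a rough sketch, assuming the usual self-extracting layout (the file names here are made up, and you need to check the actual markers and offsets from the script itself before trusting any of this):

    ```
    unzip update.zip -d update/
    # see what's actually embedded where (filesystems, tarballs, offsets)
    binwalk update/install.sh update/rootfs.img
    # once you know which line the embedded tarball starts on, carve it out:
    mkdir -p extracted
    tail -n +123 update/install.sh | tar xzvf - -C extracted/
    # edit files under extracted/, then repackage in the same format and order
    ```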

    I don’t know how the thing actually operates, but if it isn’t absolutely necessary I’d leave the bootloader (appears to be U-Boot) and kernel untouched, as messing those up might end up with a bricked device; then the easy options are gone and you’ll need to gain access via other means, like interfacing directly with the storage on the device (which most likely means opening the thing up and wiring something like an Arduino or a serial cable to it).

    But beyond that, once you override the version checks, it should be possible to upload the same version number over and over again until you have what you need. After that you just need suitable binaries for the hardware/kernel, likely some libraries from the same package and an init script, and you should be good to go.

    The other way you could approach this is to look at the web server configuration in the image and see if there are any vulnerabilities (like Apache running as root with an insecure script on top of it that lets you inject system files via HTTP), which might be the safest route, at least for a start.

    I’m not really experienced with things like this, but I know a thing or two about Linux, so do your homework before attempting anything, good luck, and have fun while tinkering!


  • The statement is correct: rsync by itself doesn’t use ssh if you run it as a daemon, and if you trigger rsync over ssh then it doesn’t use the daemon but instead starts rsync with the UID of the ssh user.

    But you can run rsyncd bound only to localhost and connect to it over an ssh tunnel. That way you get the benefits of the rsync daemon and still have an encrypted connection via ssh.
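    A minimal sketch of that setup (the module name and paths are made up for the example):

    ```
    # /etc/rsyncd.conf on the server, daemon bound to localhost only
    address = 127.0.0.1
    [backup]
        path = /srv/backup
        read only = false

    # on the client: tunnel the daemon port over ssh, then talk to it locally
    ssh -N -L 8730:127.0.0.1:873 user@server &
    rsync -av rsync://localhost:8730/backup/ /local/copy/
    ```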






  • All of those are still standing on Firefox’s shoulders, and the actual rendering engine of a browser isn’t really a trivial thing to build. Sure, they’re not going away, and likely Firefox will be around for quite a while too, but the world wide web as we currently know it is changing, and Google and Microsoft are a couple of the bigger players pushing the change.

    If you’re old enough you’ll remember the banners ‘Best viewed with <this browser> at <that resolution>’, and that’s not too far off from the future we’ll have if the big players get their wish. Things like Google’s suite, whatever Meta is offering, and pretty much “the internet” as your average Joe understands it want to implement technology where it’s not possible to block ads or modify the content you’re shown in any other way. It’s not far-fetched that your online banking and other services with real-life impact start to put boundaries in place where they require a certain level of ‘security’ from your browser, and you can bet that anything which allows content modification, like an adblocker, won’t qualify for the new standards.

    In many places it’s already illegal to modify or tamper with DRM-protected content in any way (does anyone remember libdvdcss?), and the plan is to bring similar (more or less) restrictions to the whole world wide web. That would mean we’d have things like the fediverse, which allow browsers like Firefox, and ‘the rest’, like banking, flight/ticket/hotel/whatever booking sites and big news outlets, which only allow the ‘secure’ version of a browser. And that of course has very little to do with actual security; they just want control over your device and over what content is fed to you, regardless of whether you like it or not.


  • I have no idea about cozy.io, but just to offer another option, I’ve been running Seafile for years and it’s a pretty solid piece of software. And while it does have other features than just file storage/sharing, it’s mostly about files and nothing else. The Android client isn’t the best one around, but it gets the job done (background tasks, at least on mine, tend to freeze now and then); on the desktop it just works.


  • I have absolutely zero insight into how the foundation and their financing works, but in general it tends to be easier to green-light a one-time expense than a recurring monthly payment. So it might be just that: a year’s salary up front to get the gears running again, and some time to fit the ‘infinite’ running cost into plans/forecasts/everything.