  • Darktable developers pride themselves on their non-destructive processing pipeline and use it as an excuse for how quirky and inflexible their UX is. I believe they are highly competent on the deeply technical bits that ultimately very few people see or understand. Personally, I can use it to an extent, if I unlearn what other software has taught me over decades of UX conventions.



  • You can always give a third-party client a shot (possibly one acting as a bridge to other/better protocols, e.g. slidge.im for XMPP, or its buggier Matrix equivalent), but you need to keep in mind that they will all require you to authenticate (and remain authenticated) using a smartphone, and that usage of third-party clients is forbidden by WA’s terms and conditions (which may lead to your account being blocked/deleted).





  • The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

    That’s a valid concern, but I wouldn’t call it a problem. There are practically two types of clients/servers: the ones that are maintained, which work absolutely fine together, and the rest, the unmaintained/abandoned part of the ecosystem.

    And with the protocol being so stable and backwards/forwards compatible in large part, those unmaintained clients will still just work, just not with the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience stays cohesive; see the sketch at the end of this comment).

    Which client would you recommend?

    Depends on which platform you are on and the type of usage. You should be able to pick one from those advertised on https://joinjabber.org , which should keep you away from the fringe/unmaintained stuff. Personally I use gajim and monocles.
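
    To make the feature-advertising bit concrete, here is a minimal sketch of asking a server what it advertises via Service Discovery (XEP-0030). It assumes the slixmpp Python library; the JIDs and password are placeholders, and your own client/server may expose this differently.

    ```python
    # Rough sketch: query a server's advertised features over XMPP Service
    # Discovery (XEP-0030) using slixmpp. Account details are placeholders.
    import slixmpp


    class DiscoClient(slixmpp.ClientXMPP):
        def __init__(self, jid, password, target):
            super().__init__(jid, password)
            self.target = target
            self.register_plugin('xep_0030')  # Service Discovery
            self.add_event_handler('session_start', self.start)

        async def start(self, event):
            # Ask the target entity (here, a server) for its disco#info.
            info = await self.plugin['xep_0030'].get_info(jid=self.target)
            for feature in info['disco_info']['features']:
                print(feature)
            self.disconnect()


    if __name__ == '__main__':
        client = DiscoClient('me@example.org', 'secret', 'example.org')
        client.connect()
        client.process(forever=False)
    ```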


  • They both qualify as “open, federated messaging protocols”, with XMPP being the oldest (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.

    Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. Which incidentally is super complex, not well documented (the code is the documentation), and practically not compatible with the other (semi-official) implementations. This is a red flag because it also happens that this organization was built on venture capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and corporations implementing servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you can find XMPP today running the backbone of the modern internet, dispatching notifications to all Android devices, being the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).

    Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale are still orders of magnitude higher than XMPP’s. This has discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves and further contributes to the de-facto centralization and single point of control that federated protocols are meant to prevent.




  • It’s part of the reason why I think decentralized services could be the future. Lemmy or Mastodon can have a lot of small servers with reasonable costs spread across many admins, instead of one centralized service that costs a significant amount to run.

    Ohh, absolutely, or rather, it is the past. I mean, the internet was built that way, as a resilient federation of networks and protocols. Lemmy could be seen as us just rediscovering email after the tech giants almost succeeded in killing it. We should approach all the services we use by asking ourselves basic sustainability questions:

    • is that thing open source?

    • self-hostable?

    • does it federate/interoperate with equivalent services?

    • can I pull my data out of it/relocate to another provider on a whim?

    • if not, is this a trustworthy and ethical business?

    • is it profitable?

    • are there open financial records available showing where/for what the money is going?

    • is it at risk of being acquired?

    • is it subject to foreign/unlawful interference?

    Etc, etc.


  • Until I can give a laptop with Linux to my neighbour without also needing to provide support, it’s not there yet.

    I mean, isn’t your neighbour already getting Windows support from his son or nephew anyway? Let’s not pretend that there exists a magical and perfect OS for those who don’t want to learn one. Some learning is required, whichever the OS, and it would be hard to convince me that a current preinstalled Linux is more difficult to handle than a current preinstalled Windows.

    What Windows has going for it is that it’s the devil most people know/have been exposed to (thanks to Microsoft’s schemes and monopolistic practices); there is nothing inherently better or easier about it (and arguably quite the opposite).








  • Well, that is boldly assuming:

    • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a Web server, … but a Docker image doesn’t know, and indeed doesn’t care, about what’s already running on your host, so it brings its own copies along, wasting storage and memory (a quick way to check this on your own host is sketched after this list)

    • that the sum of those individual components works as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process

    • that those images are configured according to your actual end-users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want them or not

    • that those images are properly tuned for your hardware, which would require the packager to somehow know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

    And this is all even before assuming that Docker’s abstractions are free (which they are not).
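
    To make the first point measurable, here is a rough sketch (assuming the docker Python SDK and a local Docker daemon; the process names are just illustrative guesses) that tallies how many running containers ship their own copy of the same backing services:

    ```python
    # Rough sketch: count duplicated backing services across running containers
    # using the docker Python SDK (`pip install docker`). The process names are
    # illustrative; adjust them to whatever your stack actually runs.
    import docker

    DUPLICATED = ("postgres", "mariadbd", "mysqld", "redis-server", "php-fpm", "nginx")

    client = docker.from_env()
    tally = {name: [] for name in DUPLICATED}

    for container in client.containers.list():  # running containers only
        try:
            top = container.top()  # same data as `docker top <container>`
        except docker.errors.APIError:
            continue
        # Flatten every column of every process row into one searchable string.
        processes = " ".join(" ".join(row) for row in top.get("Processes") or [])
        for name in DUPLICATED:
            if name in processes:
                tally[name].append(container.name)

    for name, owners in tally.items():
        if len(owners) > 1:
            print(f"{name}: {len(owners)} separate instances -> {', '.join(owners)}")
    ```

    If several containers show up under the same entry, that’s exactly the redundancy (and the memory/storage it costs) that a single pooled instance would avoid.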