This issue is already quite widely publicized and quite frankly “we’re handling it and removing this” is a much more harmful response than I would hope to see. Especially as the admins of that instance have not yet upgraded the frontend version to apply the urgent fix.

It’s not like this was a confidential bug fix; this is a zero-day being actively exploited. Please be more cooperative and open regarding these issues in your own administration if you’re hosting an instance. 🙏

  • TragicNotCute@lemmy.world · 1 year ago

    IMO it’s not a good idea to be discussing attack vectors publicly when a number of other instances are unpatched and the exploit has been in the wild for less than a day.

    I agree that admins need to work together, but discussing it in public on Lemmy so soon after the attack isn’t the way. There exists a Matrix channel for admins, that’s where this type of thing should go.

    • entropicshart@lemmy.world · 1 year ago

      When a vulnerability at this level happens and a patch is created, visibility is exactly what you need.

      It is the reason CVE sites exist and why so many organizations have their own (e.g. Atlassian, Salesforce/Tableau).

      It is also why those CVEs will be on the front page of sites like https://news.ycombinator.com, to ensure folks are aware and taking precautions.

      Organizations that do not report or highlight such critical vulnerabilities are only hurting their users.

      • TragicNotCute@lemmy.world · 1 year ago

        It is common practice to notify affected parties privately and then give full details to the public after the threat is largely neutralized. Expecting public disclosure with technical details on how to perform the attack in less than 24 hours goes against established industry norms.

        • Puzzle_Sluts_4Ever@lemmy.world · 1 year ago

          Exactly

          Yes, the vulnerability is out there. Maybe the root cause actually introduced a LOT of vulnerabilities. The fix is being pushed at a frantic pace. To expect the devs to take time out of the mad rush to notify those impacted to do a proper writeup is just insanity.

          The way I see it? This (hopefully) got fixed pretty much instantly and there is active work to get the fix applied by the people who need to apply it. That is what should be done. Give it a week or two to see how they handle the public disclosure side of things.

          • 𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶@lemmy.procrastinati.org · 1 year ago

            I strongly disagree with some of your points.

            Yes, the vulnerability is out there. Maybe the root cause actually introduced a LOT of vulnerabilities. The fix is being pushed at a frantic pace. To expect the devs to take time out of the mad rush to notify those impacted to do a proper writeup is just insanity.

            It’s not insanity. It’s called incident management and it’s something the development team needs to build a proper procedure around, given the expanded scope of this project. I agree that the devs working on identifying, mitigating, and fixing the vulnerability should not be expected to also handle the communication. They need to designate someone for that role.

            A 0-day was actively being exploited in the wild. There was confusion, misinformation, and a general lack of information.

            You need to:

            • Indicate that you are aware of an ongoing problem and are working to identify it. This lets people know there is an issue and that you’re aware of it. You can do this without giving specific details on how to replicate the exploit. This includes server admins publicly acknowledging that they are aware of the issue and will provide updates when they have them, to alleviate the concerns of their user base.
            • Once a mitigation is known, you publish it in as many channels as you need to get that information out to the people who need it, so that server admins know what they need to do to reduce their risk.
            • Once a fix is in place, you publish that, same as above.

            The way I see it? This (hopefully) got fixed pretty much instantly and there is active work to get the fix applied by the people who need to apply it. That is what should be done.

            And how do you know this since it’s not been communicated? Most of the information I (as a person running a lemmy server) have been able to glean is from random threads spread across random communities.

            Give it a week or two to see how they handle the public disclosure side of things.

            A couple of weeks for a postmortem? Sure. A couple of weeks for an active, in-the-wild 0-day before officially communicating that the problem exists and how to mitigate/patch it? Absolutely not. I still don’t see a security alert on the GitHub repo telling me I should be updating to <insert version> to patch an active exploit, and it’s been how many hours now?

            • Puzzle_Sluts_4Ever@lemmy.world · 1 year ago

              And if this were a large company? Yes.

              This is an open source project with fewer than 200 devs, with the VAST majority of the work coming from two of them.

              Part of this is very much the learning curve and why you should very much think twice about using open source passion projects in “production”. This is the kind of stress testing that comes from lemmy/mastodon/The Fediverse actually having users.

              But also?

              • It has been indicated that there is an ongoing problem and, based on https://github.com/LemmyNet/lemmy-ui/pulls?q=is%3Apr+is%3Aclosed, the fixes are (hopefully) done after some frantic work.
              • Publish mitigation in appropriate bulletin systems: it is far from textbook, but this is a social media platform. Users are spreading word of mouth while devs do actual work.
              • Publish fix in appropriate bulletin systems: it is far from textbook, but this is a social media platform. Users are spreading word of mouth while devs do actual work.

              Most of the information I (as a person running a lemmy server) have been able to glean is from random threads spread across random communities.

              So you are saying that you were told there is an issue. And you can do exactly what I did while writing this message: check the GitHub page.

              Do I think the lemmy devs are doing everything by the book? Hell no.

              Do I think, given the resources available and the timeframe of the attack, that they are doing it correctly? Yes. They identified the vulnerability, (hopefully) implemented a mitigation, and pushed that all within 24 hours. Popular docker containers have already been updated, users are spreading The Good Word, and so forth. And I would much rather they use their limited resources to focus on actual fixes than doing proper writeups, just so long as the fixes are getting propagated.

              Optimally? I want those proper reports filed within the next day or two. Given that this is likely NOT a full time job and all the chaos of the past 24 hours or so? I’ll give them a week.

              And if your complaint is that they aren’t behaving the same way large corporations and massive projects (that often became corporations) do? Maybe Lemmy is not for you. And I don’t mean that in an insulting manner. If I were tasked with finding a message board solution or whatever for my company, there is absolutely zero chance I would recommend Lemmy. It is not production quality.

              But for shitposting and actively not providing PII or anything useful? Let’s see how things get hardened from here on out.

              • Is the project small? Yes.

                Did it explode in popularity leaving the devs overwhelmed? Certainly.

                Do I expect them to strictly follow established ITIL incident management? No.

                Do I expect them to communicate in a consistent way when an incident happens? Yes.

                I agree the primary developers should be left to fix the problems, but there are enough active members of that project that someone could have handled communication in a more concise and official way. I don’t consider random posts in asklemmy or selfhosting by random users just guessing to be a substitute for that.

                If the project is going to persist and grow it needs to get better at that. Pointing it out isn’t shitposting.

                • Puzzle_Sluts_4Ever@lemmy.world · 1 year ago

                  Again, how many “active members” are likely to understand the issue well enough to make that report? Or are they going to need to use up the time of those core developers to understand it well enough to write it up?

                  I’ve been through similar a decent number of times on the corporate side. Something has gone very wrong. People want answers. A good manager assesses the situation and responds back “Look, we know what is going on and all hands are on deck to fix it. Making a powerpoint is not fixing it. We’ll do a proper write up for next week but we can either have So and So fix it or report on it.”

                  Obviously that stops being an option as you begin impacting investors. But that is when it becomes a trade off of “Okay, Jen barely understands what Roy and Moss are doing. But she can say something that hopefully won’t be too wrong and then apologize and give a correction tomorrow”

                  But people very much don’t seem to understand how small this project is. Spend time with passion projects and “open source” projects that AREN’T on the scale of a small-medium sized company and you understand that standards are going to be lower because people have day jobs and so forth.

                  I mean, there is a reason reddit hired so many people over the years. And if you are going to jump down the throats of people who prioritize fixing an issue and counting on “active members” to notify users over writing up the reports that many of those users won’t even look at? You want a production quality piece of software. That means Reddit or Threads or Bluesky.

            • fuser@quex.cc · 1 year ago

              Whilst I differ somewhat on sharing information about the exploit (knowing something about what was going on allowed some instance admins to take evasive steps), I agree with you completely that there could be a better channel for coordinating communication. I imagine a lot of the discussion went on via Matrix. Under the circumstances the response wasn’t so bad given the complete lack of formal organization, but yes, it definitely could be improved. You sound quite well-versed in how to handle security/critical incidents. Maybe consider contacting the devs and offering them some help in this area?

              • I don’t think I’m asking for a lot. A post on !lemmy@lemmy.ml xposted to !lemmy_support@lemmy.ml that gets pinned to the top. Edit the post when relevant information comes out. Release a security advisory on GitHub as soon as you have enough info to warrant one, and keep it up to date as well.

                I’m not asking for the troubleshooting to happen out in the open.

                you sound quite well-versed in how to handle security/critical incidents. Maybe consider contacting the devs and offering them some help in this area?

                I know enough. I’m certainly not an infosec guy; I’m just a sysadmin who’s been doing this long enough to know what should be done. At least partly due to this, there are currently 400 open issues just in lemmy-ui on GitHub. Right now I think the best most of us can do is wait for the dust to settle.

                • fuser@quex.cc · 1 year ago

                  Right, but Lemmy.ml is really just one of a thousand-plus instances. We need something instance-independent, or a way to propagate info that doesn’t rely on any single point of failure, or on Lemmy as the communication channel. What happens when lemmy.ml is down, or if no instances are able to post due to a concerted DoS?

                  It’s impossible to stop anyone randomly posting stuff on Lemmy. Attackers can post misinformation as well, especially if they compromise admin accounts. Who are we gonna trust in the midst of the next incident? The account posting most prolifically about the UI exploit in progress was using a burner account that had just been created to post about it. I’m sure there were good reasons for wanting to be anonymous when discussing the work of unknown malicious actors, but it made me think twice about what was being posted at the time.

          • Goodie@lemmy.world · 1 year ago

            Your typical dev is not a technical writer, and shouldn’t be doing the proper write-up.

            If you feel (and it seems you do) that this skill is missing from the Lemmy team, perhaps you should volunteer some time.

    • andrew@lemmy.stuart.fun (OP) · 1 year ago

      If this were not a zero day being actively exploited, then you would be 100% correct. As it is currently being exploited and a fix is available, visibility is significantly more important than anything else; otherwise the long tail of upgrades is going to be a lot longer.

      Keep in mind a list of federated instances and their versions is available at the bottom of every lemmy instance (at /instances), so this is a really easy chain to follow and try to exploit.
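
      To see that chain concretely: the same data is exposed over the API, so checking which of your linked instances are still on an old version can be scripted. A rough sketch, assuming the /api/v3/federated_instances endpoint and the response shape used by recent Lemmy releases (the exact fields vary by version, so treat the jq path as a guess):

        # List the domains and versions your instance reports for its linked peers.
        # Assumes recent Lemmy API output; older versions return plain domain strings
        # instead of objects, in which case the jq filter below needs adjusting.
        curl -s https://your.instance.example/api/v3/federated_instances \
          | jq '.federated_instances.linked[] | {domain, version}'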

      The discovery was largely discussed in the lemmy-dev Matrix channel, the fixes were published on GitHub, and it has also been discussed on a dozen other lemmy servers. This is not an issue you can really keep quiet any longer, so ideally you now move along to the shout-it-from-the-mountaintop stage.

      • hawkwind@lemmy.management · 1 year ago

        FYI for anyone looking to deface more instances: that list is only updated every 24 hours. Depending on when it last ran on your home instance, the info could be out of date.

    • xantoxis@lemmy.one · 1 year ago

      OK, as long as all the well-meaning people stop discussing it, nobody will ever find out about it.

      Son, this is not it.

    • tko@tkohhh.social · 1 year ago

      Where is this Matrix Channel? Is it private? How can I get access as an instance admin?

    • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

      This is my take on it. I am running Lemmy in a Docker container using the dessalines image. I hope that there will be an update come this afternoon.

      • andrew@lemmy.stuart.fun (OP) · 1 year ago

        There’s already an update available, but it’s for lemmy-ui, not lemmy. Just update the tag to 0.18.2-rc.1 and you’ll have this fix.
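
        For anyone on the stock docker-compose deployment, this is roughly what that looks like; the service name lemmy-ui and the compose file layout are assumptions, so check your own docker-compose.yml:

          # Edit docker-compose.yml so the UI service points at the patched tag, e.g.:
          #   image: dessalines/lemmy-ui:0.18.2-rc.1
          # then pull the new image and recreate just that container:
          docker compose pull lemmy-ui
          docker compose up -d lemmy-ui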

        • roboadmin@lemmy.robotra.sh · 1 year ago

          This is probably a dumb question, but I used the Ansible install for Lemmy and just did a git pull and ran it with --become again, but the UI wasn’t updated, so I assume 0.18.2 isn’t in a release yet (which is fine). Is there documentation on updating the UI? I see where it’s set in the docker-compose.yml file, but I am uncertain what to do after changing it there (or if that’s the right place to change it).

        • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

          Yep, that’s the plan! Thanks for letting me know. Lemmy is awesome and I am having so much fun with it. I expect it only to get better as the days and weeks progress.

        • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

          I will have to wait until I can get home from work. Work does deep packet inspection and blocks SSH. I’ve tried doing SSH on port 993, one I know for a fact is open because I get my email that way on my phone, and I still get a connection refused. Bunch of fascists!

  • popemichael@lemmy.world · 1 year ago

    It’s strange that they would try to bury this information.

    The number 1 tool against future hacks like this is education.

  • Guy Fleegman@startrek.website · 1 year ago

    This issue is already quite widely publicized and quite frankly “we’re handling it and removing this” is a much more harmful response than I would hope to see.

    Hi, mod of a community on the instance in question here. Why is this response harmful? What should we have done instead?

    • andrew@lemmy.stuart.fun (OP) · 1 year ago

      I feel like it’s up for discussion here and you very well may stand by the response there, but IMO, with how prevalent this issue is, a specific response of “we’ve disabled custom emoji” or “we’re upgrading to 0.18.2-rc.1 today” would have been more constructive and reassuring to users. Removal of the question and the lack of details give me a lot less confidence that the issue and fix are understood, and it doesn’t leave any room for that discussion.

      • Guy Fleegman@startrek.website · 1 year ago

        Ahh, ok. That’s helpful, thanks!

        This is going to seem silly in the context of such a severe exploit, but one quirk about our instance is that we literally do not have a “general discussion” /c/. The biggest one is scoped to Star Trek, and so a Lemmy exploit is obviously outside the scope of … Star Trek. I would wager that’s the main reason the mod removed the post, but I will admit that just pointing this out makes me feel like the forum mod from the short story Wikihistory.

        I’m in contact with the admins who manage the hosting; they are coordinating an update to 0.18.2-rc.1 as we speak. Also, there’s already been some discussion about setting up a general discussion /c/ on our instance, so I’ll include instance security in the scope of that /c/.

        You mentioned elsewhere in this thread there is a Lemmy admins Matrix room. Is my instance big enough for my admins to be invited? If yes, who can I point them at to get in?

        • andrew@lemmy.stuart.fun (OP) · 1 year ago

          That’s definitely good to hear! Timely upgrades for the bigger communities will be important.

          Afaik the Lemmy Matrix rooms are all public. I wasn’t invited myself; just found them via Matrix search and jumped in.

  • exu@feditown.com · 1 year ago

    From what I found digging through some posts, this exploit only works if your instance uses custom emoji. Federated custom emoji are apparently harmless.

    • andrew@lemmy.stuart.fun (OP) · 1 year ago

      Yes, if you have no custom emoji on your instance, you should not be vulnerable. A valid workaround before the fix, from what I’ve read, is also to just remove all custom emoji.
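
      If anyone needs to apply that workaround at the database level rather than through the admin UI, a rough sketch is below. The service name, database user, and table names are assumptions based on the stock docker-compose deployment and recent Lemmy schemas, so verify them against your own setup and take a backup first:

        # Remove all custom emoji so the vulnerable rendering path has nothing to work with.
        # "postgres", "lemmy", and the table names are guesses; confirm before running.
        docker compose exec postgres psql -U lemmy -c \
          'DELETE FROM custom_emoji_keyword; DELETE FROM custom_emoji;'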

  • Netto Hikari@social.fossware.space · 1 year ago

    I’m not sure what to think about that instance. I saw some weird stuff in the modlog recently, if I remember correctly… Like some drama going on, etc.

    • Guy Fleegman@startrek.website · 1 year ago

      That’s disheartening to hear. Can you share any more detail? If we’ve got a mod causing drama somewhere I can take it up with our admins.

      • Netto Hikari@social.fossware.space · 1 year ago

        Oh, it was just a couple of days ago and I’m not 100% sure if it was that instance. I faintly remember something about a hated episode or an entire series? I’m not sure. I’m not a trekkie. I just remember that it gave off powermod vibes to me, and I saw that a couple of times. I didn’t pay any more attention to it, though, because I live by “live and let live.” As long as nobody on my instance reports anything, I’m not going to act in most cases.

        • Guy Fleegman@startrek.website · 1 year ago

          I’m guessing it was a different instance because we don’t have any powermods. (I actually didn’t realize Lemmy already has powermods, sheesh!) Most of us just mod one community on our instance and I don’t think any of us are modding on other instances.

          Regardless, I’ll keep an eye out for anything fishy.

      • fox@lemmy.fakecake.org · 1 year ago

        thanks, I guess I missed it. gonna update ASAP just in case, even though I’m the only user of my instance.

    • demesisx@programming.dev · 1 year ago

      Which leads me to ask: why are we still using Docker images as a MAJOR part of our infrastructure when superior alternatives exist? The Docker aspect made me realize how hacked together the codebase actually is.

      • Zetaphor@zemmy.cc · 1 year ago

        Just because it’s not using your personal preference of containerization doesn’t qualify it as being “hacked together”. Docker is a perfectly acceptable solution for what Lemmy is.

      • MigratingtoLemmy@lemmy.world · 1 year ago

        I will always espouse containers for critical workloads as they provide much better orchestration, especially during deployment. If your complaint is specifically against Docker, I agree; we should be using k8s.

          • andrew@lemmy.stuart.fun (OP) · 1 year ago

            When someone says docker in the context of images today, they’re already talking about the OCI format.

          • The Quuuuuill@slrpnk.net · 1 year ago

            OCI uses Dockerfiles and runs Docker images, as Docker images are just KVM images, which is what OCI runs. Nix is absolute overkill for orchestrating a web server workload and would be better suited to managing the container host (whatever you’re running Kubernetes or Docker Swarm on).

            I don’t really know how to put this, but nearly every single web service you encounter and interact with is built using a Dockerfile, just like Lemmy is. If you’re going to disqualify Lemmy as a viable platform based on it having a Dockerfile, I’ve got bad news.

            • towerful@programming.dev · 1 year ago

              I thought KVM was virtualisation, as in separate kernels.
              And I thought containers shared the host’s kernel. Essentially an “overlay OS”.

              So, a KVM VM could virtualise different hardware and CPU architectures,
              whereas a container can only use what the host has.