It was to talk about “team restructuring”

  • planetaryprotection@midwest.social · 1 year ago

    Randomly got a message from one of my reports asking what this “Mandatory Team Meeting” was on his calendar. I hadn’t been invited, but it was our whole company shutting down ¯\_(ツ)_/¯

    • English Mobster@lemmy.world · 1 year ago

      Hey, that happened to me, too!

      I got scheduled for a mandatory meeting with 1 hour notice. During lunch.

      I asked my boss what it was. He didn’t know either. I joked that it was us being shut down.

      Sure enough, 1 hour later it was announced that our whole studio was being shut down by corporate. My coworkers and I were all suddenly jobless, and my boss and I ended up writing each other LinkedIn recommendations and helping each other find new jobs.

    • robotrash@lemmy.robotra.sh · 1 year ago

      Random team meeting on the first Friday after I got hired. “Telltale has lost its funding and everyone is being let go.” Fun week.

  • 1984@lemmy.today · 1 year ago

    Companies are often insane. I’m working at one where this one guy is building a super complicated architecture because he doesn’t know AWS. So instead of just using a message queue on AWS, he is building Java programs and tons of software and containers to try to send messages in a reliable way. It costs the company huge money, but they don’t care, since he is some old-timer who has been there for like 10 years and everyone lets him do what he wants.
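For scale, the “just use a message queue on AWS” path really is only a few lines. A minimal sketch (the queue URL is hypothetical, and actually running it against AWS assumes boto3 is installed and credentials are configured):

```python
import json

def send_reliably(sqs_client, queue_url: str, payload: dict) -> str:
    """Send one message to an SQS queue; SQS itself handles durable storage."""
    response = sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(payload),
    )
    return response["MessageId"]

# Usage against real AWS (hypothetical queue, credentials assumed):
#   import boto3
#   sqs = boto3.client("sqs", region_name="us-east-1")
#   send_reliably(sqs, "https://sqs.us-east-1.amazonaws.com/123456789012/demo-queue",
#                 {"event": "hello"})
```

Retries, dead-letter queues, and at-least-once delivery are all configuration on the queue itself, which is the part the custom Java stack has to reinvent.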

      • 1984@lemmy.today · 1 year ago

        It’s a different form of lock-in, since it’s just his creation. When he leaves, all of this will be very hard to maintain, and the company will probably rebuild it all on AWS.

        I have been bringing this up but they say that it’s too late to change direction now (they are afraid to upset the guy).

        But I’m looking on the bright side. I get to learn a lot of stuff I otherwise wouldn’t if this were a single managed AWS service. I’m bringing in Terraform, and instead of just putting a message queue there, I need to spin up entire architectures to run his EC2 instances with all the apps and everything required to make things work.

        Takes months… So for me it’s fun. I don’t have to pay for it. But companies are crazy. :)

    • Zushii@feddit.de · 1 year ago

      I personally always try to engineer away from cloud services. They cost you ridiculous amounts of money, and all you need afterwards is documentation. Then it can be easier and faster than AWS or GCP.

        • wim@lemmy.sdf.org · 1 year ago

          Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.

          Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.

          I work for a financial services company, and we are paying 7-digit monthly AWS bills for an amount of work that could realistically be done with one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort trying to untangle ourselves from SQS/SNS and other AWS-specific technologies.

          Clouds like to tell you:

          • Using the cloud is cheaper than running your own server
          • Using cloud services requires less manpower / labour to maintain and manage
          • It’s easier to get up and running and scale up later using cloud services

          The last item is true, but the first two are only true if you are running a small service. Scaling up on a cloud is not cost effective, and maintaining a complicated cloud architecture can be FAR more complicated than managing a similar centralized architecture.

          • shiftymccool@lemm.ee · 1 year ago

            I worked in operations for a large company that had their own 50,000 sq ft data center with 2,000 physical servers, uncountable virtual servers, backup tape robots, etc. Their cooling bill alone would like to disagree with your assessment about scaling. I was unpacking new servers regularly, because when you own your own servers, not only do you have to buy them, you also have to house them (so much rented space), run them, fix them, cool them, and replace them.

            Don’t get me wrong, I’ve also seen the AWS bill for another large company I worked for and that was staggering. But, we were a smaller tech team and didn’t require a separate ops group specifically to maintain the physical servers.

            • wim@lemmy.sdf.org · 1 year ago

              If you really need the scale of 2000 physical machines, you’re at a scale and complexity level where it’s going to be expensive no matter what.

              And I think if you need that kind of resources, DIY will still come out cheaper.

          • 1984@lemmy.today · 1 year ago

            You are paying AWS to not have one big server, so you get high availability and dynamic load balancing as instances come and go.

            I agree it’s not cheaper than being on-prem. But the solutions are much higher quality.

            Today at work, they decided to upgrade from an ancient Ubuntu version to a more recent one. Since they don’t use AWS properly, they treat servers as pets. So to upgrade Ubuntu, they actually upgraded it in place on the instance instead of creating a new one. This led to GRUB failing, and now they are troubleshooting how to mount disks, etc.

            All of this could easily be avoided by using the cloud properly.
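The “cattle, not pets” flow being described boils down to replacing instances instead of mutating them. A rough sketch of what that looks like with boto3-style EC2 calls (the AMI and instance IDs are hypothetical, and health checking is omitted):

```python
def replace_instance(ec2_client, old_instance_id: str, new_ami_id: str) -> str:
    """Launch a replacement from a freshly built image, then retire the old box."""
    # Launch the new instance from an AMI that already has the upgraded OS baked in,
    # instead of running a risky in-place dist-upgrade on a live server.
    result = ec2_client.run_instances(ImageId=new_ami_id, MinCount=1, MaxCount=1)
    new_id = result["Instances"][0]["InstanceId"]
    # Once the new instance passes health checks (not shown), drop the old one.
    ec2_client.terminate_instances(InstanceIds=[old_instance_id])
    return new_id
```

If the new image boots broken, nothing was lost: the old instance is still running until the swap completes, which is exactly the failure mode the in-place GRUB breakage above doesn’t have.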

            • ElectricCattleman@lemmy.world · 1 year ago

              That could be avoided by using on prem properly, too. People are very capable of making bad infrastructure whether on prem or cloud.

            • wim@lemmy.sdf.org · 1 year ago

              I used to work on an on-premises object storage system where we required double digits of “nines” availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.

              I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.
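The “2 or 3 machines” claim is easy to sanity-check with the standard redundancy model, under the (optimistic) assumption that failures are independent:

```python
def combined_availability(single: float, replicas: int) -> float:
    """Availability of a redundant system that is down only when
    every replica is down simultaneously (independent failures assumed)."""
    return 1 - (1 - single) ** replicas

# Two machines that are each only 99% available already give roughly
# four nines together; three give roughly six nines.
print(combined_availability(0.99, 2))  # ~0.9999
print(combined_availability(0.99, 3))  # ~0.999999
```

Correlated failures (shared power, shared network, shared bad deploy) break the independence assumption, which is why real systems spread replicas across failure domains.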

              • merc@sh.itjust.works · 1 year ago

                where we required double digits of “nines” availability

                Do you mean 99% or 99.99999999%? Because 99.99999999% is absurd. Even Google doesn’t go near that for internal targets. That’s about 3 milliseconds of downtime per year. If a network hiccup causes 30 seconds of downtime, you’ve blown through roughly ten thousand years of error budget. If you’re talking durability, that’s another matter, but availability?

                For ten-nines availability to make any sense, any dependent system would also have to have ten nines availability, and any calling system would have to have close to ten nines availability or it’s not worth ten nines on the called system.

                If the traffic ever goes over TCP/IP, even if it never touches the public internet and only crosses Ethernet wires, ten nines sounds like overkill. Maybe if it stays within a single mainframe, but you’d have to carefully audit that mainframe to ensure that every component involved also hits roughly ten nines.

                If you mean 2 nines availability, that’s not high availability at all. That’s nearly 4 days of downtime a year. That’s enough that you don’t necessarily need a standby system, you just need to be able to repair the main one within a few hours if it goes down.
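The arithmetic behind those downtime budgets is simple enough to spell out:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def downtime_per_year_seconds(nines: int) -> float:
    """Allowed downtime per year for an availability of N nines (e.g. 3 -> 99.9%)."""
    unavailability = 10.0 ** -nines
    return SECONDS_PER_YEAR * unavailability

print(downtime_per_year_seconds(2))   # ~315,360 s, nearly 4 days
print(downtime_per_year_seconds(10))  # ~0.003 s, a few milliseconds
```

Each extra nine shrinks the budget by a factor of ten, which is why targets past four or five nines get exponentially expensive.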

                • wim@lemmy.sdf.org · 1 year ago

                  Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.

                  But I stand by the gist of my argument - you can achieve a lot with a live/live system, or a 3 node system with a master election, or…

                  High availability doesn’t have to equate high cost or complexity, if you can take it into account when designing the system.

    • SilverCode@lemm.ee · 1 year ago

      What the company likes about the old-timer is that, because he has been there for 10 years, he will likely be there for the next 10 years to support the complicated system he is creating now. If a younger team member creates something using a modern approach, there is a risk they will leave in a year’s time and no one will know how the system works.

      • JackbyDev@programming.dev · 1 year ago

        No one knows how to use a well documented, publicly available service? No, I’d argue that no one knows how to use a private, internal only, custom solution.

        • Ajen@sh.itjust.works · 1 year ago

          That’s because you’re an engineer (I assume). The people signing off on these kinds of projects don’t know enough themselves, so they go to someone they trust (the old-timers) to help them make the decision. The old-timers don’t keep up with new tech, so we keep reinventing the wheel.

          • BoofStroke@lemm.ee · 1 year ago

            “Keeping up with new tech” is often just reinventing the wheel. If it isn’t broken and can still be maintained, why break it just because you like the flavor of the week?

  • Noughmad@programming.dev · 1 year ago

    “Team restructuring” is so much fun, you never know what you’re going to get.

    Your boss’s boss now reports to a slightly different VP? Everyone is getting fired? No way to know which it’s going to be, until the end of the meeting.

  • 6daemonbag@lemmy.dbzer0.com · 1 year ago

    That happened to me. I noticed a vague Monday-morning meeting when I logged on. I checked with my team to see if they knew what it was about, and no one did. Our supervisor was MIA on Slack. Just before it started, we got a group text from him that essentially said, “What the fuck. I’m so sorry, guys. I’m not allowed to speak or I’m immediately fired.”

    I checked the invite list and, sure enough… VP of department, VP of HR, my supervisor, and my small team. I instantly knew we were all fired.

    Joined the meeting a few minutes early and it was just my teammates all wondering out loud what’s going on. They’re all pretty young. Couldn’t help but blurt out, “nice knowing yall…”

    Supervisor texts me with “please don’t, we’ll grab a drink right after this”

    The cool executives log on, and blah blah blah, your team is getting shuttered, thanks, bye.

    We did get drinks at 9:30 in the morning.

    • 6daemonbag@lemmy.dbzer0.com · 1 year ago

      Oh, and my supervisor quit a month later, right after he got his end-of-year bonus. I don’t blame him. Good dude. He helped a lot of the team secure other jobs in the industry within 3 months.