Is it just me, or are system requirements for vendor applications getting out of hand? In the past 5 years I’ve watched minimum specs go from 2 or 4 vCPUs with 8 or 16 GB of RAM up to a minimum of 24 vCPUs and 84 GB of RAM!
What the actual hell?
We run a VERY efficient shop where I work. Our VM infrastructure is constantly monitored for services or VMs that are using more resources than they need. We have 100+ VMs running across 4 nodes, each with 2 TB of RAM and 32 cores. If we find an application that is over-consuming CPU or RAM, we tune it so it’s as efficient as it can be. However, for vendor solutions where they provide a VM image to deploy, or install a custom software suite on the VM, the requirements and the performance have been getting absolutely out of hand.
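To give a sense of what that monitoring looks like, here's a minimal sketch of the idea: flag VMs whose peak usage sits well below their allocation so they become right-sizing candidates. The thresholds, field names, and sample stats are assumptions for illustration, not our actual tooling.

```python
# Illustrative right-sizing check (assumed data format and thresholds,
# not any real monitoring product's API).

def rightsizing_candidates(vms, cpu_threshold=0.25, ram_threshold=0.5):
    """Return names of VMs whose peak CPU and RAM usage are both below
    the given fractions of what they've been allocated."""
    flagged = []
    for vm in vms:
        cpu_util = vm["peak_cpu_pct"] / 100.0
        ram_util = vm["peak_ram_gb"] / vm["alloc_ram_gb"]
        if cpu_util < cpu_threshold and ram_util < ram_threshold:
            flagged.append(vm["name"])
    return flagged

stats = [
    {"name": "app01", "peak_cpu_pct": 12, "alloc_ram_gb": 16, "peak_ram_gb": 3},
    {"name": "db01",  "peak_cpu_pct": 80, "alloc_ram_gb": 64, "peak_ram_gb": 50},
]
print(rightsizing_candidates(stats))  # app01 is a shrink candidate
```

In practice you'd pull these numbers from your hypervisor's stats API, but the decision logic is this simple.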
I just received a request to deploy a new VM that is going to be used for managing and provisioning switch ports on some new networking gear. The vendor has provided a document with their minimum requirements for this.
- 24 vCPUs
- 84 GB of RAM
- 600 GB HDD with a minimum I/O speed of 200 MB/s
<rant> I’m sorry… but this is absurd. For what?!? Enabling and disabling ports on networking gear, and gathering metrics from them? This is nuts!
I’ve worked as a System Administrator for a long time. One thing I’ve learned is that the measure of a company’s product is not only how well it functions and does what it advertises, but also how well it’s built. That includes system resource usage and requirements.
When I see system requirements like the ones I was just given, it really makes me question the quality of the development team and of the product. For what it’s supposed to do, the minimum specs just don’t make sense. It’s like they ran into a performance bottleneck somewhere along the line, and instead of diagnosing the issue and fixing the code to be more efficient, they pulled a Jeremy Clarkson and added “More power!”. Because throwing more CPUs and RAM at a performance issue always fixes it, right? Let’s just pass the problem along to our customers and make them burn more of their infrastructure resources to cover for it. Jeez! </rant>
Just to be clear, I’m not making a blanket statement about all developers. There are plenty of developers and teams that put quite a bit of effort into refining their product and making it efficient. It just seems increasingly commonplace for these “basic” applications from very large vendors to have absurd system requirements.
Is anyone else experiencing this? Any similar stories to share?
I agree with you; I see this happening across multiple sectors of tech. I think it’s a combination of factors: the cheapness of memory, languages becoming more and more robust at managing themselves, compilers doing a lot of the “optimization” for software devs, and probably more. Either way, unless these “light transistors” and all that new tech really take off and get past their fragility, these companies are going to have to git gud, so to speak, and actually make efficient programs again as our current tech reaches its limits. At some point we won’t be able to squeeze more nm out of CPUs, and we’ll have to think about what our programs use again. Anyway, that’s my 2 cents. I’m a complete noob compared to you career-wise, but I’ve been in love with computers my whole life.
There’s some data reporting tool that I had to install
Minimum:
- 8 cores / 16 vCPUs
- 128 GB RAM
- 500 GB–1 TB of free disk space
The installer fails if any of these aren’t met.
Storing time series data in RAM so you can instantly generate pretty graphs of various metrics is all the rage right now. The longer you want to keep the data, the bigger the RAM requirements. You might be wondering: why not store the data on SSD? The answer is that the bosses love pretty charts that load INSTANTLY. The fact that the metrics database uses 200 GB of RAM is not their concern.
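A rough back-of-envelope shows how fast that adds up. All the numbers here (series count, sample size, scrape interval) are assumptions for illustration; real TSDBs compress samples heavily, but in-memory retention still scales the same way.

```python
# Back-of-envelope sizing for an in-memory time series store.
# bytes_per_sample is an assumed uncompressed figure, not a vendor spec.

def tsdb_ram_gb(num_series, interval_s, retention_days, bytes_per_sample=16):
    """Estimate RAM (GiB) needed to keep every sample in memory."""
    samples_per_series = retention_days * 86400 // interval_s
    total_bytes = num_series * samples_per_series * bytes_per_sample
    return total_bytes / 1024**3

# 100k series scraped every 10 s, kept for 90 days:
print(round(tsdb_ram_gb(100_000, 10, 90), 1))  # on the order of a terabyte
```

So a metrics box eating hundreds of GB of RAM isn't sloppy math, it's a design choice: retention in RAM instead of on disk.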
That does explain it.
I don’t like it, but it does explain the behaviour, for sure.
I just received a request to deploy a new VM that is going to be used for managing and provisioning switch ports on some new networking gear. The vendor has provided a document with their minimum requirements for this. 24 vCPUs, 84 GB of RAM, 600 GB HDD with a minimum I/O speed of 200 MB/s
Let me guess: Cisco DNA Center?
“Not my hardware” -Vendor
If they’re actually using the resources, then they’re necessary. But if you’re allocating resources to VMs and they’re going unused, just… allocate less? It’s very rare that that’s caused any issues. Just remember to bump them back up if you see problems, before calling support.
If they complain it doesn’t match their spec, you can always allocate what they want but set a lower priority for the VM. Obviously, performance problems are then on you if there is an issue, but that will at least appease the low-level support person who will knee-jerk and blame the lack of required resources when you call for support on something unrelated.
Really wish people would run those things through our departments first before buying expensive enterprise grade shitware.
That way we’d at least get a chance to negotiate the specs. You want us to give you that much compute? Justify why you need it and we might approve.
What I’ve learned from this is that I need to write a VM for managing and provisioning ports on networking gear, since apparently the best solution that currently exists is… that.
There is one: NetBox and Ansible.
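For the curious, NetBox exposes a REST API, so toggling a port comes down to a single PATCH against `/api/dcim/interfaces/<id>/`. Here's a hedged sketch; the URL, token, and interface ID are placeholders, not values from anyone's setup.

```python
# Sketch: build the NetBox REST call that enables/disables a switch port.
# Base URL, token, and interface ID below are made-up placeholders.
import json

def build_port_toggle(base_url, iface_id, enabled, token):
    """Return the URL, headers, and JSON body for the PATCH request."""
    return {
        "url": f"{base_url}/api/dcim/interfaces/{iface_id}/",
        "headers": {
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"enabled": enabled}),
    }

req = build_port_toggle("https://netbox.example.com", 42, False, "secret")
# Sending it is one call, e.g.:
#   requests.patch(req["url"], headers=req["headers"], data=req["body"])
print(req["url"])
```

Ansible's NetBox collection wraps the same API if you'd rather drive it from playbooks. Either way, nothing about this workload explains 24 vCPUs.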
I blame Python.
That, and the gazillion EdgeWebView processes.
Lazy programming, and an out: “Oh, it’s not working well? That’s just ’cause it’s not fast enough, and we don’t optimize our code. Add more CPUs!”
@packetloss we are upgrading our ERP system, and the new version requires 7 Windows Server VMs for the same number of clients. Insane.
We are going through a replacement of our BSS/BMM system, going from 2 systems to 9 so far… maybe more in the future. All to handle the same number of customers.