6. GPUs were created because they can do more math than a CPU, if the workload fits into their constraints. They're basically just a bunch of smaller cores (with fewer instructions) bunched into groups, each group managed by one control unit for several cores, to let you do more shit in less space with less power.
I work on networking for distributed rendering at a major cloud provider, so I'm very familiar with GPU architecture and use cases :)
Saying they do more math is a bit tricky. The CPU handles all kinds of very complicated, irregular math and accomplishes tasks we still have a hard time offloading to GPUs.
I agree with the rest of your statement as a good explanation of why GPUs can do faster, more efficient batch processing of workloads that fit the SIMD setup used in most modern GPUs (ignoring general-purpose GPU compute and fancier options). See the sketch below for what that shape of work looks like.
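To make "fits the SIMD setup" concrete, here's a minimal CUDA-style sketch (the kernel name and layout are just illustrative, not from any particular codebase): every thread runs the same instruction stream on a different element, which is exactly the shape of work those grouped cores under one control unit are built for.

```cuda
// Minimal sketch: an embarrassingly parallel vector add.
// Each thread handles exactly one element; threads in a group execute the
// same instruction in lockstep, which is why this maps well to SIMD/SIMT hardware.
__global__ void vec_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // guard the last, partially filled block
        out[i] = a[i] + b[i];
    }
}
```

Anything branch-heavy or serially dependent breaks that lockstep pattern, which is roughly where the "hard to offload" tasks mentioned above live.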
CPUs can definitely do more variety.
My point was just that the whole purpose of GPUs is that they do a better job at the embarrassingly parallel operations they're made for. Obviously architectures evolve over time and the exact details change, but if you were attempting to stifle growth, something that adds significant capability to users' machines (and, for the most part, without compromising the capability of the actual CPU) doesn't seem like it would help you.
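As a rough illustration of that "adds capability without taking anything away from the CPU" point, here's a hedged host-side sketch (assuming the vec_add kernel above, with illustrative sizes and no error handling): the CPU just queues the work and is free to keep doing its own, more branch-heavy tasks while the GPU grinds through the parallel part.

```cuda
// Sketch of the offload model (buffers assumed to already live on the device).
void run_on_gpu(const float* d_a, const float* d_b, float* d_out, int n) {
    int threads = 256;
    int blocks = (n + threads - 1) / threads;          // enough blocks to cover n elements
    vec_add<<<blocks, threads>>>(d_a, d_b, d_out, n);  // launch is asynchronous on the host
    // ... the CPU can keep doing other (serial, branchy) work here ...
    cudaDeviceSynchronize();                           // block only when the result is needed
}
```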
Ah, yes - exactly! The article is also completely unrelated to OP's title - really weird post all around.