• 737@lemmy.blahaj.zone
    6 months ago

    It’s really not. x86 (CISC) CPUs could be just as efficient as ARM (RISC) CPUs, since instruction sets (despite popular consensus) don’t really influence performance or efficiency. It’s just that the x86 CPU oligopoly had little interest in producing power-efficient CPUs, while ARM chip manufacturers were mostly making chips for phones and embedded devices, which made them focus on power efficiency instead of relentlessly maximizing performance. I expect the next few generations of Intel and AMD x86-based laptop CPUs to approach the power efficiency Apple and Qualcomm have to offer.

    • bamboo@lemm.ee
      6 months ago

      All else being equal, a complex decoding pipeline does reduce the efficiency of a processor. It’s likely not the most important factor, but once the larger efficiency problems are addressed, it will eventually become an issue.
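A toy sketch of why decode complexity differs between the two encoding styles: with fixed-length instructions every boundary is known up front, while a variable-length encoding forces a serial scan (or speculative length predecode, which costs area and power). The opcodes and lengths below are invented for illustration; this is not real x86 or ARM machine code.

```python
def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    # Fixed-length ISA: every instruction start is known immediately,
    # so a wide decoder can split the stream into lanes in parallel.
    return list(range(0, len(code), width))

# Hypothetical variable-length scheme: the first byte encodes the length.
LENGTHS = {0x01: 1, 0x02: 2, 0x03: 3, 0x0F: 6}

def variable_boundaries(code: bytes) -> list[int]:
    # Variable-length ISA: instruction N+1's start depends on
    # instruction N's length, so boundaries are found serially.
    offsets, i = [], 0
    while i < len(code):
        offsets.append(i)
        i += LENGTHS[code[i]]
    return offsets

print(fixed_boundaries(bytes(12)))  # [0, 4, 8]
print(variable_boundaries(bytes([0x02, 0xAA, 0x01, 0x03, 0xBB, 0xCC])))  # [0, 2, 3]
```

Real x86 decoders mitigate this with predecode bits and marker caches, but that hardware is exactly the extra cost being discussed.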

      • 737@lemmy.blahaj.zone
        6 months ago

        Yeah, but you could improve the less-than-ideal encoding with a relatively simple update; no need to throw out all the tools, the great compatibility, and the working binaries that Intel and AMD already have.

        It’s also not the ISA’s fault.

        • bamboo@lemm.ee
          6 months ago

          Well, not exactly. You have to remove instructions at some point; that’s what Intel’s x86-S proposal is supposed to do. You lose some backwards compatibility, but the removed instructions are chosen to have the least impact on most users.

          • 737@lemmy.blahaj.zone
            6 months ago

            Would this actually improve efficiency, though, or just reduce manufacturing and development costs?