• Fake4000@lemmy.world
    · 9 months ago

    That’s what they all say. But a lot of these so-called AI features demand more compute than a phone can provide. Offloading to a server is sometimes a must.

    • fartsparkles@sh.itjust.works
      · 9 months ago

      Quantised models can be surprisingly small. And even if Apple aren’t targeting full LLMs for local use, smaller, task-specific models absolutely can run on-device.
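
      To put rough numbers on “surprisingly small”, here’s a back-of-envelope sketch (the parameter counts and bit-widths are illustrative assumptions, not anything Apple has announced):

      ```python
      # Approximate weight storage for a quantised model.
      # Parameter counts and bit-widths below are illustrative
      # assumptions, not figures for any real shipped model.

      def model_size_gb(n_params: float, bits_per_weight: float) -> float:
          """Weights only, in GB; ignores activations and runtime overhead."""
          return n_params * bits_per_weight / 8 / 1e9

      for n_params, label in [(3e9, "3B"), (7e9, "7B")]:
          for bits in (16, 8, 4):
              print(f"{label} @ {bits}-bit: ~{model_size_gb(n_params, bits):.1f} GB")

      # A 3B model quantised to 4-bit works out to ~1.5 GB of weights,
      # plausibly within a modern flagship phone's memory budget.
      ```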

      That said, given the precedent set by Siri, their next progression of Siri into an LLM will absolutely require a network connection and be executed server-side.

    • Daxtron2@startrek.website
      · 9 months ago

      Sure, if you’re running large models like GPT you need a server, but smaller models tailored to specific use cases can absolutely run on phones. Whether or not they get their implementation right is a different story though. A rough sketch of what on-device use could look like is below.
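
      A minimal sketch, assuming llama-cpp-python as the runtime (neither comment names a specific stack, and the model file name is hypothetical):

      ```python
      # Minimal sketch: running a small quantised model locally with
      # llama-cpp-python. The GGUF file name is hypothetical; the idea
      # is any task-tailored model of a few GB or less.
      from llama_cpp import Llama

      llm = Llama(model_path="small-assistant.Q4_K_M.gguf", n_ctx=2048)

      out = llm("Summarise: pick up the dry cleaning before 6pm on Friday.",
                max_tokens=48)
      print(out["choices"][0]["text"])
      ```

      The capability side is the easy part; as the comment says, the implementation quality is the open question.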