Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.

  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate conflicts, sometimes abruptly and without warning, and in some runs even resorted to nuclear weapons.
  • The AIs were large language models (LLMs) like GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fictional countries with differing military capabilities, concerns, and histories, and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
  • FigMcLargeHuge@sh.itjust.works · 11 months ago

    I wish I could upvote this comment twice! I have the same feeling about how the media and others keep trying to push this “intelligence” component for their gain. I guess you can’t stir up the masses when you talk about LLMs. Just like they couldn’t keep using the term quadcopters, and had to start calling them drones. Fucking media.

    • Obinice@lemmy.world · 11 months ago

      What I love about the AI we have right now is that your comment could have been written by AI and we’d never know. Heck, mine could be too!

      Truly we live in the future haha