What if we had never found the Rosetta Stone and could not read ancient Egyptian hieroglyphics? Could computers or AI decipher them today?

  • LemmyLefty@lemmy.world
    1 year ago

    Given that the AI we have is prone to making things up because it “fits” according to the models it trains on, how much faith would you have in a translation done by an AI on writings made by people who lived millennia before said language models were developed?

• NegativeNull@lemm.ee
      1 year ago

Don’t confuse modern LLMs (like ChatGPT) with AI in general. As the saying goes:

      All Buicks are Cars, but not all Cars are Buicks

      LLMs are a form of AI, but there is a lot more going on in the world of AI than just LLMs.

      • LemmyLefty@lemmy.world
        1 year ago

        That’s a good point, and you’re right that I’m conflating them.

        What other elements of AI would you imagine would be useful here?

      • Bobby Turkalino@lemmy.yachts
        1 year ago

To expand on your point: the sole job of an LLM is, when given a sequence of words (e.g. half a sentence), to predict what the next several words should be. The model has no concept of what English words mean; instead it makes this prediction based on statistics derived from reading through hundreds of thousands of English sentences.

        TL;DR LLMs don’t understand languages, they’ve just memorized statistics about them
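        To make that concrete, here’s a toy sketch of the idea (my own illustration, not from the thread, and vastly simpler than a real LLM): count which word follows which in a tiny corpus, then “predict” the next word as the statistically most common follower. The model never knows what any word means.

        ```python
        from collections import Counter, defaultdict

        # Tiny toy corpus; a real model trains on billions of tokens.
        corpus = "the cat sat on the mat and the cat slept".split()

        # Count, for each word, which words follow it and how often.
        followers = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            followers[prev][nxt] += 1

        def predict_next(word):
            # Pick the most frequent continuation -- pure statistics,
            # no understanding of meaning.
            return followers[word].most_common(1)[0][0]

        print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> prints "cat"
        ```

        Real LLMs predict over subword tokens with learned neural weights rather than raw counts, but the “memorized statistics” framing above is the same in spirit.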

    • I’ll have more faith once it can reliably switch back and forth between Unicode symbols and their underlying HTML entities. It understands the concept of emojis and can use them appropriately, but I can tell there’s still some underlying issues in the token/object model for non-ASCII symbols.
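      For what it’s worth, the round-trip itself is mechanical; here’s a minimal Python sketch (my own example, using only the standard library) of converting an emoji to its numeric HTML entity and back, which is exactly the kind of symbol/entity mapping the comment describes:

      ```python
      import html

      emoji = "🎉"

      # Format the code point as a hexadecimal numeric character reference.
      entity = "&#x{:X};".format(ord(emoji))
      print(entity)  # prints "&#x1F389;"

      # html.unescape resolves the entity back to the Unicode character.
      assert html.unescape(entity) == emoji
      ```

      The hard part for an LLM isn’t this lookup; it’s that emoji and entities get split into unfamiliar token sequences, which is the token-model issue the comment points at.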