• XLE@piefed.social · 1 day ago

    And “hallucination” is itself an inaccurate humanization; the actual meaning is closer to “a statistical relationship that we AI folks don’t like.”

    “Hallucinations” even include accurate data.

    It is a trash marketing buzzword.

      • athairmor@lemmy.world · 23 hours ago

        Nuclear energy companies aren’t trying to make people think that their reactors reproduce.

        AI companies are trying to make people think that their software is intelligent.

        The context matters.

          • [deleted]@piefed.world · 1 day ago

            Hallucination requires perception. LLMs are just statistical models; they do not perceive anything.

            It was a cute name early on, but now it is used to deflect when the output is just plain wrong.

          • XLE@piefed.social · 1 day ago

            In AI, a “hallucination” is just as much “there” as a non-“hallucination.” It’s a way for scientists to stomp their feet and insist that wrong output is the computer’s fault rather than a natural consequence of how LLMs work.

      • Bronzebeard@lemmy.zip · 1 day ago

        I don’t think anyone is confusing radiation propagation with being alive, though.

        The issue is that these things “communicate” with us, so granting them even more leeway to seem like they’re thinking (they’re not) only further muddies how people perceive them.