• 0 Posts
  • 33 Comments
Joined 11 months ago
Cake day: July 2nd, 2024

  • No, at least not in the sense that “hallucination” is used in the context of LLMs. It is specifically used to differentiate between the two cases you jumbled together: outputting correct information (as represented in the training data) vs. outputting “made-up” information.

    A language model doesn’t “try” anything; it does what it is trained to do - predict the next token. But that is not hallucination, that is the training objective (see the first sketch below).

    Also, though not widely used, there are other types of LLMs, e.g. diffusion-based ones, which actually do not use a next-token prediction objective and instead iteratively predict parts of the text in multiple places at once (LLaDA is one such example; see the second sketch below). And, of course, these models also hallucinate a bunch if you let them.

    Redefining a term to suit some straw man AI boogeyman hate only makes it harder to properly discuss these issues.
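
    To make the distinction concrete, here is a minimal sketch of the next-token objective, assuming a PyTorch-style model that maps token ids to per-position vocabulary logits (the function name and shapes are illustrative, not any specific model’s training code):

    ```python
    import torch
    import torch.nn.functional as F

    def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # logits: (batch, seq_len, vocab_size) model outputs
        # tokens: (batch, seq_len) token ids of the training text
        # Position t is trained to predict the token at position t+1.
        pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
        target = tokens[:, 1:].reshape(-1)
        # Plain cross-entropy over the vocabulary: the model is rewarded for
        # assigning probability to whatever token the corpus contains next.
        # Nothing in this loss distinguishes "correct" from "made-up" text.
        return F.cross_entropy(pred, target)
    ```

    Nothing here “tries” to be truthful; the objective only scores agreement with the training text.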
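
    And for the diffusion-based variant, a hypothetical sketch of one denoising step (names like MASK_ID and denoise_step are assumptions for illustration, not LLaDA’s actual code): predict all currently masked positions at once, keep the most confident predictions, re-mask the rest, and repeat:

    ```python
    import torch

    MASK_ID = 0  # assumed id of the special mask token

    def denoise_step(model, tokens, keep_fraction=0.25):
        masked = tokens == MASK_ID
        if not masked.any():
            return tokens  # fully denoised already
        logits = model(tokens)  # (batch, seq_len, vocab_size)
        pred = logits.argmax(dim=-1)
        # Confidence of the prediction at each masked position.
        conf = logits.softmax(dim=-1).max(dim=-1).values
        conf = torch.where(masked, conf, torch.full_like(conf, -1.0))
        # Unmask only the most confident fraction; the rest stay masked
        # and get predicted again in the next step.
        k = max(1, int(keep_fraction * masked.sum()))
        idx = conf.flatten().topk(k).indices
        out = tokens.clone().flatten()
        out[idx] = pred.flatten()[idx]
        return out.view_as(tokens)
    ```

    Different objective, same point: the model is still only scored on matching the training text, so it hallucinates all the same.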

  • vintageballs@feddit.org to Linux@lemmy.ml · AMD vs Nvidia · 4 months ago

    Where are you getting these numbers? I have a 3080 and used a 1080 Ti before, and though my last direct comparison was a while (a few years) ago, I saw more like a 3-5% difference in FPS in the games I tested - at most 10% in RS2: Vietnam, but that ultimately turned out to be a CPU bottleneck. I would assume (and reading reviews on Reddit seems to confirm this) that the drivers have mostly gotten better since then.

  • vintageballs@feddit.org to Linux@lemmy.ml · AMD vs Nvidia · 4 months ago

    As someone who has been using Nvidia on Linux nearly exclusively for many years, I am interested in which aspects you think their drivers suck in. I have had literally no problems with them in the past two years: performance is incredible, Wayland just works, …