Title is a bit dramatic, but yes, Claude 3 claims to be better than GPT-4 in most ways.

  • june@lemmy.world · 10 months ago

    I just spent some time on Claude 3, and I see how it can be considered ‘better’ than GPT-4. However, I quickly found that it tends to lie about itself in subtle ways. When I called it out on an error, it would say things like ‘I’ll strive to be better’. I pointed out that its model doesn’t grow or change based on the conversations it has, so it’s impossible for it to strive to do anything beyond, maybe, that one chat. It then went on to show me that it couldn’t even adjust within that chat, doing the same thing 5 more times in 5 different ways.

    I recognize the template it used for the apologies (acknowledge, apologize, state intent to do better in the future), which is appropriate for people or beings capable of learning, but it is neither. I went from having a good conversation with it about a poem I wrote to being weirdly grossed out by it. GPT does a good job of not pretending to be human, and I appreciate that.

    • catloaf@lemm.ee · 10 months ago

      The cynic in me says that’s perfectly human behavior, though

      • june@lemmy.world · 10 months ago

        Yea, that’s what I’m saying, and I don’t like it. I don’t want my LLM acting human; I want it acting like an LLM. My interactions with Claude 3 were very uncanny valley, and that bugged me a lot.

        • 9bananas@lemmy.world · 10 months ago

          so you’re basically saying it talked itself squarely into uncanny valley?

          i honestly didn’t consider that would be an issue for LLMs, but in hindsight…yeah, that’s gonna be a problem…

          • june@lemmy.world · 10 months ago

            Yea, that’s exactly what it did. It was bizarre to realize, actually, because I didn’t expect to feel that way about plain text. But here I am.

  • Hackerman_uwu@lemmy.world · 10 months ago

    Hey, have you guys heard about ChatGPT 7? It makes ChatGPT 6 look like ChatGPT 5!

    Whoever thought the AI awakening would be this fucking banal?

  • IvanOverdrive@lemm.ee · 10 months ago

    It does amazingly well with Winograd schemas:

    When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog?

    In the sentence “When the dead rabbit was seen by the dog, it hopped,” the pronoun “it” refers to the dog, not the rabbit.

    This is because the rabbit is described as “dead,” meaning it cannot hop or move. The verb “hopped” implies movement, which can only logically apply to the living subject in the sentence, which is the dog.

    So the intended meaning is that when the dog saw the dead rabbit, the dog itself hopped, likely in surprise or reaction to seeing the dead animal.

    When the iceberg was struck by the ship, it sunk. What does “it” refer to: the iceberg or the ship?

    In the sentence “When the iceberg was struck by the ship, it sunk,” the pronoun “it” refers to the ship, not the iceberg.

    The reason is that icebergs, being large masses of ice floating in water, do not actually “sink” when struck. They are already floating and may crack or break apart when struck, but the word “sunk” implies something going down below the water’s surface.

    Ships, on the other hand, are vessels designed to float on water. If a ship strikes an iceberg with enough force, it can cause damage that leads to the ship taking on water and eventually sinking below the surface.

    So in this context, with the verb “sunk” being used, it is more logical that “it” is referring to the ship sinking after striking the iceberg, rather than describing what happened to the stationary iceberg itself.
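    For anyone who wants to run the same probe against other models: the two questions above follow a single template, so they're easy to generate programmatically. A minimal sketch in Python (the helper name and schema tuples are made up for illustration, not from any benchmark library):

```python
# Sketch: building Winograd-style pronoun-resolution prompts like the ones
# above, so the same test can be posed to any chat model.

def winograd_prompt(sentence: str, option_a: str, option_b: str) -> str:
    """Format a sentence with an ambiguous pronoun into a question."""
    return (f"{sentence} What does \u201cit\u201d refer to: "
            f"the {option_a} or the {option_b}?")

# The two examples from this comment, as (sentence, candidate, candidate):
schemas = [
    ("When the dead rabbit was seen by the dog, it hopped.", "rabbit", "dog"),
    ("When the iceberg was struck by the ship, it sunk.", "iceberg", "ship"),
]

for sentence, a, b in schemas:
    print(winograd_prompt(sentence, a, b))
```

    Adding more (sentence, candidate, candidate) tuples makes it easy to compare how different models resolve the pronoun.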

    • asdfasdfasdf@lemmy.world · 10 months ago

      You’re right! I tried against ChatGPT 3.5:

      “It” in this context likely refers to the dead rabbit, as it is the subject of the sentence and the one described as hopping.

      It got the ship one right though.

      • IvanOverdrive@lemm.ee · 10 months ago

        I found that it helps to ask ChatGPT 4 to act as a Vulcan from Star Trek; it does better with logic puzzles that way. It doesn’t work with 3.5, though.
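        For what it’s worth, the “act as a Vulcan” trick is just a system message in the standard chat-message format. A minimal sketch (the system text is paraphrased, not the exact wording used):

```python
# Sketch: the "act as a Vulcan" persona as a system message in the standard
# chat format. The system-prompt wording here is an illustrative paraphrase.

def vulcan_messages(puzzle: str) -> list[dict]:
    """Wrap a logic puzzle in a Vulcan-persona system prompt."""
    return [
        {"role": "system",
         "content": "You are a Vulcan from Star Trek. Answer with strict, "
                    "dispassionate logic and check each inference."},
        {"role": "user", "content": puzzle},
    ]

msgs = vulcan_messages("When the dead rabbit was seen by the dog, it hopped. "
                       "What does 'it' refer to?")
```

        The same messages list can then be passed to whichever chat API you are testing.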

  • kromem@lemmy.world · 10 months ago

    The Sonnet model is decent, but something weird is going on with their Opus model, as it just sucks terribly.

    Mistral-large is probably the best large model for practical purposes at this point.

    • bugsmith@programming.dev · 10 months ago

      Mistral-large is probably the best large model for practical purposes at this point.

      What makes you say that? I have not performed my own comparison, but everything I have seen and read suggests that GPT-4 is currently king.

      • kromem@lemmy.world · 10 months ago

        It depends on the task, but in general a lot of the models have fallen into a dark pattern of Goodhart’s Law, targeting the benchmarks but suffering at other things.

        So as an example: GPT-4 used to correctly model variations of the wolf, goat, and cabbage problem when given token-similarity hacks (i.e. using emojis instead of nouns to break pattern similarity with the standard form of the question), but with the most recent updates it fails even with the hack, whereas mistral-large is the only one that doesn’t need the hack at all.
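        The emoji hack is just a string substitution over the riddle before sending it to the model. A minimal sketch (the emoji choices and riddle wording are illustrative):

```python
# Sketch of the token-similarity hack: swap the nouns in a wolf/goat/cabbage
# prompt for emojis so the wording no longer pattern-matches the standard
# form of the riddle. The emoji mapping is an arbitrary choice.

SWAPS = {"wolf": "🐺", "goat": "🐐", "cabbage": "🥬"}

def emojify(prompt: str) -> str:
    """Replace each noun in the prompt with its emoji stand-in."""
    for noun, emoji in SWAPS.items():
        prompt = prompt.replace(noun, emoji)
    return prompt

riddle = ("A farmer must ferry a wolf, a goat, and a cabbage across a river, "
          "one at a time. The wolf eats the goat and the goat eats the "
          "cabbage if left alone together. How does the farmer do it?")
print(emojify(riddle))
```

        The substituted version can then be sent to each model to see which ones still solve the variation rather than reciting the memorized answer.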

        • bugsmith@programming.dev · 10 months ago

          Interesting. That’s not something I’ve heard about until now, but something I’ll surely look into.

  • spez_@lemmy.world · 10 months ago

    Is it open source? If not, then it’s just as worthless as OpenAI.

  • drawerair@lemmy.world · 9 months ago

    It beat Claude 3 on math and reasoning when analyzing images.

    What beat Claude 3?

    No free version of Opus, so I can’t try it.

    It’s 👍 that the AI competition is sizzling.

      • nikt@lemmy.ca · 10 months ago

        Yes, it’s named after Claude Shannon, but I’ve never heard him described as “the founder of AI”. He’s the father of information theory, which is only indirectly connected to AI.

        • kromem@lemmy.world · 10 months ago

          From the linked Wikipedia page:

          “Theseus”, created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares.[71] The maze configuration was flexible and it could be modified arbitrarily by rearranging movable partitions.[71] The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior.[71] Shannon’s mouse appears to have been the first artificial learning device of its kind.[71]

    • Apt_Q258@lemmy.world · 10 months ago

      Absolutely not. It’s pronounced /klod/. The « OW » diphthong sound doesn’t exist in French.

      “Cloud” is generally pronounced as in English, /klaʊd/, or maybe /klud/ by non-English speakers.

      There is no possible confusion in French between these two words.

      • BeatTakeshi@lemmy.world · 10 months ago

        Trust me… “je les ai téléchargés depuis le Claude” (“I downloaded them from the Claude”) is exactly how most French speakers will pronounce it. Not all, but most. First-hand experience.

      • BeatTakeshi@lemmy.world · 10 months ago

        Sounds more like “claode”, which is a fraction away from “Claude”, and more often than not the “ao” sounds like “au”.