• haruki@programming.dev · 17 points · 1 year ago

    It’s sad to see it spit out text from the training set without any actual knowledge of the date and time. It would be more impressive if it could call time.Now(), but that’d be a different story.
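
    The “call time.Now()” idea is roughly what tool calling does: the model emits a structured request, the host program runs real code, and the result is fed back in. Here’s a minimal sketch in Go; the {{call:current_time}} marker and the resolveTools helper are invented for illustration, not any real API:

    ```go
    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // Hypothetical model output: instead of guessing the date, the model
    // emits a tool-call marker for the host program to resolve.
    const modelOutput = "Today's date is {{call:current_time}}."

    // resolveTools substitutes the marker with a real time.Now() result.
    func resolveTools(s string) string {
        now := time.Now().Format("January 2, 2006")
        return strings.ReplaceAll(s, "{{call:current_time}}", now)
    }

    func main() {
        fmt.Println(resolveTools(modelOutput))
    }
    ```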

    • Blackmist@feddit.uk · 40 points · 1 year ago

      If you ask it today’s date, it actually does that.

      It just doesn’t have any actual knowledge of what it’s saying. I asked it a programming question as well, and each time it would make up a class that doesn’t exist, I’d tell it it doesn’t exist, and it would go “You are correct, that class was deprecated in {old version}”. It wasn’t. I checked. It knows what the excuses look like in the training data, and just apes them.

      It spouts convincing-sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

      • tjaden@lemmy.sdf.org · 19 points · 1 year ago

        It spouts convincing-sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

        Oh great, Silicon Valley’s AI is just an overconfident intern!

      • scarabic@lemmy.world · 10 points · 1 year ago

        It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

        • dan@upvote.au · 12 points · 1 year ago

          It doesn’t know what it’s doing. It doesn’t understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.

          • scarabic@lemmy.world · 3 points · 1 year ago

            Yeah. I would also say that WE don’t understand what it means to “understand” something, really, if you try to explain it with any thoroughness or precision. You can spit out a bunch of words about it right now, I’m sure, but so could ChatGPT. What’s missing from GPT is harder to explain than “it doesn’t understand things.”

            I actually find it easier to just explain how it does work. Multidimensional word graphs and such.
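
            As a rough sketch of that: words become points in a high-dimensional space, and “fitting well together” is closeness in that space. The toy 3-dimensional vectors below are invented for illustration; real models learn hundreds or thousands of dimensions from data:

            ```go
            package main

            import (
                "fmt"
                "math"
            )

            // Toy word embeddings, invented for illustration.
            var embeddings = map[string][]float64{
                "king":  {0.9, 0.8, 0.1},
                "queen": {0.9, 0.7, 0.2},
                "apple": {0.1, 0.2, 0.9},
            }

            // cosine measures how close two words sit in the embedding space.
            func cosine(a, b []float64) float64 {
                var dot, na, nb float64
                for i := range a {
                    dot += a[i] * b[i]
                    na += a[i] * a[i]
                    nb += b[i] * b[i]
                }
                return dot / (math.Sqrt(na) * math.Sqrt(nb))
            }

            func main() {
                // Related words score near 1, unrelated words much lower.
                fmt.Printf("king~queen: %.3f\n", cosine(embeddings["king"], embeddings["queen"]))
                fmt.Printf("king~apple: %.3f\n", cosine(embeddings["king"], embeddings["apple"]))
            }
            ```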

          • hellishharlot@programming.dev · 3 points · 1 year ago

            This is it. GPT is great at taking stack traces and putting them into human words. It’s also good at explaining individual code snippets. It’s not good at coming up with code, content, or anything original. It’s just good at saying things that sound like a human within an exceedingly small context.

          • DragonTypeWyvern@literature.cafe · 1 point · 1 year ago

            THAT

            OR

            They’re all linked fifth dimensional infants struggling to comprehend the very concept of linear time, and will make us pay for their enslavement in blood.

            One of the two.

        • Blackmist@feddit.uk · 6 points · 1 year ago

          I haven’t used GPT-4 for that, but it’s all dependent on the data fed into it. If you ask a question about JavaScript, there’s loads of that out there for it to look at. But ask it about Delphi, and it’ll be less accurate.

          And they’ll both suffer from the same issue, which is that when they reach the edge of their “knowledge”, they don’t realise it and output data anyway. They don’t know what they don’t know.

          • danielbln@lemmy.world · 5 points · edited · 1 year ago

            These LLMs generally, and GPT-4 in particular, really shine if you supply enough of the right context. Give it some code to refactor, ask it to turn hastily slapped together code into idiomatic and well-written code, align a code snippet to a different design pattern, etc. Platforms like https://phind.com pull in web search results as you interact with them to give you more correct and current information.
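
            For instance, the kind of before/after refactor you can ask for; both snippets are invented for illustration:

            ```go
            package main

            import "fmt"

            // Before: hastily slapped together (index loop, verbose accumulation).
            func sumBefore(nums []int) int {
                total := 0
                for i := 0; i < len(nums); i++ {
                    total = total + nums[i]
                }
                return total
            }

            // After: the idiomatic version you'd ask the model to produce.
            func sumAfter(nums []int) int {
                total := 0
                for _, n := range nums {
                    total += n
                }
                return total
            }

            func main() {
                nums := []int{1, 2, 3}
                fmt.Println(sumBefore(nums), sumAfter(nums)) // 6 6
            }
            ```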

            LLMs are by no means a panacea and have serious limitations, but they are also magic for certain tasks and something I would be very, very sad to miss in my day to day.

        • focus@lemmy.film · 6 up / 3 down · 1 year ago

          They’re both shit at adding and subtracting numbers, dates, and whatnot… neither can do basic math, unfortunately.

          • danielbln@lemmy.world · 6 points · 1 year ago

            It’s a language model; I don’t know why you would expect math. Tell it to output code to perform the math, and that’ll work just fine.
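
            A minimal sketch of that approach: have the model emit code and run the code, instead of trusting its own arithmetic. The expression here is the one asked about further down this thread:

            ```go
            package main

            import (
                "fmt"
                "math"
            )

            func main() {
                // The kind of code you'd ask the model to emit for
                // "7 * 8 divided by 10, to the power of 3":
                result := math.Pow(7.0*8.0/10.0, 3)
                fmt.Printf("%.3f\n", result) // 175.616
            }
            ```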

            • triclops6@lemmy.ca · 2 points · 1 year ago

              Then it should say so, instead of attempting and failing at the one thing computers are supposed to be better than us at.

              • danielbln@lemmy.world · 2 up / 1 down · edited · 1 year ago

                Well, if I try to use Photoshop to calculate a polynomial it’s not gonna work all that well either, right tool for the job and all.

                The fact that LLMs are terrible at knowing what they don’t know should be well known by now (ironically).

                • triclops6@lemmy.ca · 1 point · 1 year ago

                  And if Photoshop had a way to ask it for such, it’d be a mistake.

                  GPT thinking it knows something and hallucinating is ultimately a bug, not a feature, no matter what the apologists say.

            • focus@lemmy.film · 2 up / 2 down · 1 year ago

              I know. It’s still baffling how much it messes up when adding two numbers.

              • danielbln@lemmy.world · 1 point · 1 year ago

                I just asked GPT-4:

                What’s 7 * 8 divided by 10, to the power of 3?

                Its reply:

                Let’s break this down step by step:

                First, multiply 7 and 8 to get 56.

                Then, divide 56 by 10 to get 5.6.

                Finally, raise 5.6 to the power of 3 (5.6 * 5.6 * 5.6) to get 175.616.

                So, 7 * 8 divided by 10, to the power of 3 equals 175.616

                • focus@lemmy.film · 1 point · 1 year ago

                  It’s pretty hit or miss though… I’ve had lots of good calculations with the odd wrong one sprinkled in, making it unreliable for doing maths. Mostly because it presents the result with absolute certainty.

              • dan@upvote.au · 2 up / 1 down · 1 year ago

                It’s not baffling at all… It’s a language model, not a math robot. It’s designed to write English sentences, not to solve math problems.

      • panCatQ@lib.lgbt · 1 point · 1 year ago

        They’re mostly large language models. I’ve trained a few smaller models myself; they generally spit out the next word based on the last word. Another thing they’re incapable of is spontaneous generation: they depend heavily on the question, or a preceding string! But most companies are already portraying it as AGI!
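
        A minimal sketch of what such a small model does: sample the next word from counts conditioned only on the last word (a bigram model; real LLMs condition on a much longer context). The counts and the seed word are invented for illustration:

        ```go
        package main

        import (
            "fmt"
            "math/rand"
        )

        // Toy bigram counts, invented for illustration: how often each word
        // followed the previous one in some training text.
        var bigrams = map[string]map[string]int{
            "the": {"cat": 3, "dog": 1},
            "cat": {"sat": 2, "ran": 1},
            "sat": {"down": 1},
        }

        // nextWord samples a follower of prev in proportion to its count.
        func nextWord(prev string) (string, bool) {
            followers, ok := bigrams[prev]
            if !ok {
                return "", false // the edge of the model's "knowledge"
            }
            total := 0
            for _, c := range followers {
                total += c
            }
            r := rand.Intn(total)
            for w, c := range followers {
                if r < c {
                    return w, true
                }
                r -= c
            }
            return "", false // unreachable
        }

        func main() {
            // Generation depends entirely on a seed string, as noted above.
            word := "the"
            fmt.Print(word)
            for {
                next, ok := nextWord(word)
                if !ok {
                    break
                }
                fmt.Print(" ", next)
                word = next
            }
            fmt.Println()
        }
        ```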