• dick_stitches@lemm.ee · 22 points · 9 months ago

    In 30 years, we’re going to look back at this headline like we look back at articles about the internet or smart phones being fads.

    • darkphotonstudio@beehaw.org · 25 points · 9 months ago

      They are discussing a very specific approach and a paper that lays out the issues with pursuing this one specific type of generative AI. It’s not about AI in general. The headline is a bit click-baity.

        • dick_stitches@lemm.ee · 11 points · 9 months ago

        I think most people underestimate how big of a deal it’s going to be when this tech is pervasive in things like search engines or digital assistants. There are many times when I can’t figure out the right combination of words to put into a search engine to find the results I want. ChatGPT is already my go-to when I want to figure out a movie or song from some random combination of foggy memories. Imagine after 10 more years of cpu/gpu innovations, and chat applications that have actually been designed for information retrieval, how much that is going to transform how we interact with data and information.

        Full disclosure, I didn’t watch the video. I just can’t imagine that that headline isn’t going to look silly in 30 years.

          • FreeFacts@sopuli.xyz · 18 points · 9 months ago

          Imagine after 10 more years of cpu/gpu innovations, and chat applications that have actually been designed for information retrieval, how much that is going to transform how we interact with data and information.

          LLMs are going to change how we interact with data and information, but not in the way you think. AI-generated spam will ruin the whole concept of internet search completely. The only information we’ll be able to trust will be human-curated.

        • eleitl@lemmy.ml · 8 points · 9 months ago

          There are diminishing returns in semiconductor photolithography. Moore scaling is long over; absolute die real estate (see wafer-scale integration with Cerebras), datacenter costs, and power envelopes are all sending a clear message. Quantization is already in use, and you can go further from digital multipliers to analog or to spiking networks, but transformers and co. gain little there.

          Also, the kind of economy that can carry generative AI as a business model is not a given, long term.
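          The quantization mentioned above can be sketched in a few lines. This is a minimal, illustrative example of symmetric post-training weight quantization (float32 to int8), not the scheme of any particular library; the variable names and the per-tensor scale are assumptions for illustration.

          ```python
          import numpy as np

          # Illustrative float32 "weights"; in practice these come from a trained model.
          rng = np.random.default_rng(0)
          weights = rng.normal(0.0, 0.1, size=1024).astype(np.float32)

          # Symmetric linear quantization: map the max-magnitude weight to +/-127.
          scale = np.abs(weights).max() / 127.0
          q = np.round(weights / scale).astype(np.int8)

          # Dequantize to measure how much precision the int8 representation loses.
          deq = q.astype(np.float32) * scale
          max_err = np.abs(weights - deq).max()
          print(f"max abs error: {max_err:.6f} (bounded by scale/2 = {scale / 2:.6f})")
          ```

          The rounding error per weight is bounded by half the scale step, which is why int8 inference can work at all — but it saves memory and multiplier width rather than changing the scaling curve of the underlying architecture.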

  • flora_explora@beehaw.org · 5 points · 9 months ago

    Great video, thanks! Regarding the over-representation of certain concepts and things, I have been disappointed with generative AI from day one. If you want it to draw you something obscure, it fails miserably and tries to fall back on stuff it knows. There are also all the discriminatory biases generative AI has about different people because of lacking data sets. It is very obvious that it cannot “outperform” its own data input (like the curve shown in the video) but that it will rather stagnate.

  • h3ndrik@feddit.de · 5 points · 9 months ago · edited

    I think that’s a good question. And a nice video. The findings in the paper seem to arrive at that conclusion, and we might need to find a better approach. Mind that (as he pointed out) it doesn’t rule out growth in AI; it just hints at probable stagnation with the current methods. I’m already fascinated by the current tech and the new possibilities. But AI is really hyped right now, and I too think we should take the claims of the big AI companies with a grain of salt. I’m sure the scientists at OpenAI are already concerned with exactly this as they do research for the next generations of ChatGPT. It’s a bit of a bummer that lots of the research gets done behind closed doors, so we’re going to have to wait a bit longer to find out.