• aiccount@monyet.cc · 8 months ago

    “A solution in search of a problem” is a phrase used way too much, and almost always in the wrong way. Even the article admits the technology has been solving problems for over a year; it just complains that it isn’t solving the biggest problems possible yet. It is remarkable how hard it is for people to extrapolate from the trajectory. Had the author been writing in the early 90s, they would have been talking about how pointless computers are, and how they are just “a solution in search of a problem”.

    • t3rmit3@beehaw.org · 8 months ago

      I am not a huge fan of generative AI, but even I can see its potential (both for good and for harm). Today I found out about Suno in another thread on here and tried it out. As a mid-millennial (born 1988) who grew up with CD players and still thinks MiniDiscs and Zip disks are the coolest cartridge formats, aesthetically, that thing absolutely blows my mind.

      We are, like, 5 years into generative AI as a widely available technology, and I can use it to generate entire songs on the fly from just a couple of sentences, complete with singing. I can use it to create logos and web graphics on my laptop in a matter of seconds as I build a webpage. I can use it to help me build said webpage, also running locally on my laptop.

      And it’s still accelerating. Ten years from now, this stuff could be generating entire movies on demand, running on a home media box.

      • aiccount@monyet.cc · 8 months ago

        Yeah, I absolutely agree. About a month ago, I would have said that Suno was clearly leading in AI music generation, but since then, Udio has definitely taken the lead. I can’t imagine where things will be by the end of the year, let alone the end of the decade. This is why it’s so crazy to me when people look at generative AI and act like it’s no big deal, just a passing fad or whatever. They have no idea that there is a tsunami crashing down on us all, and they always seem to be the ones who bill themselves as the weather experts who have it all figured out. Nobody knows the implications of this, but it definitely isn’t an inconsequential tech.

        • t3rmit3@beehaw.org · 8 months ago

          I have a deep love of change, just intrinsically. I have medical issues which have meant that since I was a kid I’ve been acutely aware of my significantly shorter prospective lifespan, and I think that really drives the desire in me to witness major changes and historical events, sort of like truly internalizing that I (literally) can’t afford to wait for slow change.

          That doesn’t mean I want to see changes that cause suffering, like wars; it means I want to see incredible changes that have the potential to better people’s lives, like electric vehicles, space exploration, socialist revolution, advancements in healthcare, etc. I am hopeful that the wide-ranging availability of AI, beyond just corporations, means it has the potential to be one of those changes (I’m also wary that it may end up just being subsumed by Capitalism into enriching the already-wealthy even further).

          I still feel the desire that many tech folks do, to buy a plot of land in the middle of nowhere, raise llamas, serve artisanal coffee to the parents of the kids who come to play with the llamas, and never look at a computer again. But I still want the world to be out there advancing and getting better, even if I don’t engage with every new advancement directly myself.

    • AggressivelyPassive@feddit.de · 8 months ago

      The problem I see is mainly the current divergence between hype and reality, and the lack of a clear path forward.

      Currently, AI is almost completely unable to work unsupervised. It fucks up constantly, like a junior employee who sometimes shows up on acid. That’s cool and all, but it has relatively little practical use. I also don’t see how this will improve over time. With computers or smartphones, you could see the potential relatively early on, and the progression was steady and could be somewhat reliably extrapolated. With AI, that’s not possible. We have no idea if the current architectures will hit a wall tomorrow and stop improving. It could become an asymptotic process, where we need massive increases for marginal gains.

      Those two things combined mean we currently only have toys, and we don’t know if they will turn into tools anytime soon.

      • aiccount@monyet.cc · 8 months ago

        Yeah, it’s a trajectory thing. Most people see the one-shot responses of something like ChatGPT’s current web interface on OpenAI’s website and think that’s where we are at. It isn’t, though; the cutting edge of what is currently openly available is agent frameworks like CrewAI or AutoGen, with agents powered by models like Claude Opus or Llama 3, and maybe the latest GPT-4 update.

        When you use agents, you don’t have to baby every response; the agents can run code, test it, check the latest information on the internet, and more. You can give a complex instruction, let it run, and come back to a finished product.
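
        Something like this minimal CrewAI sketch, say (the roles, goals, and task wording here are illustrative assumptions, not a prescribed setup):

        ```python
        # Minimal CrewAI sketch: a researcher agent hands findings to a writer agent.
        # Assumes `crewai` is installed and an LLM API key is set in the environment.
        from crewai import Agent, Task, Crew

        researcher = Agent(
            role="Researcher",
            goal="Find current information on LLM agent frameworks and cite sources",
            backstory="A careful analyst who checks every claim against a named source.",
        )
        writer = Agent(
            role="Writer",
            goal="Turn research notes into a short, sourced summary",
            backstory="A concise technical writer who keeps citations intact.",
        )

        research = Task(
            description="Collect recent facts about LLM agent frameworks, one source per fact.",
            expected_output="A bulleted list of facts, each with a source URL.",
            agent=researcher,
        )
        summarize = Task(
            description="Summarize the research into one paragraph, keeping the sources.",
            expected_output="One paragraph with inline source references.",
            agent=writer,
        )

        # The crew runs the tasks in order, feeding each result into the next task.
        crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
        print(crew.kickoff())
        ```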

        I say it is a trajectory thing because when you compare what was cutting-edge just one year ago, basically one-shot GPT-3.5, to an agent network running today’s latest models, the difference is stark, and when you go a couple of years before that, to GPT-2, it is way beyond stark. When you go a step further and realise that lots of custom hardware is being built (basically LLM ASICs; ASICs have traditionally meant something like a 10,000x speedup over general-purpose GPUs), you can see that instant agent-based responses will soon be the norm.

        All this compounds when you consider that we have not hit a plateau: better datasets and more compute are still producing better models. Not to mention that other architectures, like the state-space model Mamba, are achieving remarkable results with very little compute so far. We have no idea how powerful things like Mamba would be if they were given the datasets and training that the current popular models get.

        • AggressivelyPassive@feddit.de · 8 months ago

          Even agents suffer from the same problem stated above: you can’t trust them.

          Compare it to a traditional SQL database. If the DB says that it saved a row, or that there are 40 rows in the table, then that’s true. Databases do have bugs, obviously, but in general you can trust them.

          AI agents don’t have that level of reliability. They’ll happily tell you that the empty database has all the 509 entries you expect it to have. Sure, you can improve reliability, but you won’t get anywhere near the DB example.
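
          To make that contrast concrete, a quick sketch (the table and the claimed count are made up for illustration):

          ```python
          # The database's count is authoritative; a model's count is a guess until checked.
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)")

          # Ground truth: the engine reports exactly what it stored.
          true_count = conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
          print(true_count)  # 0, guaranteed

          # A model asked the same question has no such guarantee; it may confidently
          # claim 509. Any claim it makes about the data must be checked with a query.
          claimed_count = 509  # stand-in for an unverified model answer
          if claimed_count != true_count:
              print(f"model claim ({claimed_count}) contradicts the database ({true_count})")
          ```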

          And I think that’s what makes it so hard to extrapolate progress. AI fails miserably at absolutely basic tasks and doesn’t even notice that it failed. Success seems more chance than science. That’s the opposite of how every technology before it worked: simple problems first, and once those are solved, you push towards the next challenge. AI, in contrast, is remarkably good at some highly complex tasks, but then fails at basic reasoning a minute later.

          • aiccount@monyet.cc · 8 months ago

            I think having it give direct quotes and specific sources would help your experience quite a bit. I absolutely agree that if you just use the simplest forms of current LLMs and the “hello world” agent setups, there are hallucination issues and such, but a lot of this stops being an issue once you get deeper into it. It’s just a matter of time until the tooling most people can easily use has this stuff baked in; none of it is impossible. I pretty much always have my agents tell me exactly where they get their information from. The exception is when I have them writing code, because there the proof is in the results.
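
            For what it’s worth, the “always cite” part can be as blunt as a system prompt; here is a hedged sketch using an OpenAI-style client (the model name and prompt wording are placeholder assumptions):

            ```python
            # Hedged sketch: require a direct quote and a specific source for every claim.
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            SYSTEM = (
                "For every factual claim, give a direct quote and a specific source "
                "(title, author, URL or page). If you cannot name a source, say "
                "'no source found' instead of guessing."
            )

            response = client.chat.completions.create(
                model="gpt-4",  # placeholder model name
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": "When do baby owls start vocalizing?"},
                ],
            )
            print(response.choices[0].message.content)
            ```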

            • AggressivelyPassive@feddit.de · 8 months ago

              And what is the result? Either you have to check whether the sources really say what the agent claims they do, or you don’t check them, in which case the whole thing is useless, since it might come up with garbage anyway.

              I think you’re arguing on a different level than I am. I’m not interested in mitigations or workarounds. That’s fine for a specific use case, but I’m talking about the usage in principle. You inherently cannot trust an AI. It does hallucinate. And unless we get the “shroominess” down to an extremely low level, we can’t trust the system with anything important. It will always be just a small tool that needs professional supervision.

              • aiccount@monyet.cc · 8 months ago

                This is an issue with many humans I’ve hired, though. Maybe they try to cut corners and do a shitty job, but I occasionally check; if they are bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of stuff, AI can be interacted with in a very similar way.

                This is so similar to many people’s complaints about self-driving cars. Sure, accidents will still happen; they are not perfect, but neither are human drivers. If we hold AI to some standard that is way beyond people, then no, it’s not there. But if we say it just needs to be better than people, then it is there for many applications, and more importantly, it is rapidly improving. Even if it were only as good as people at something, it would still be way cheaper and faster. For some things, it’s worth it even if it isn’t as good as people yet.

                I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify the sources before I see them. Oftentimes I provide the books or papers that I want it to source from specifically. Even if I am going to check all the sources myself afterwards, it is still way more efficient than doing the whole thing myself. With the setups I use, it literally never makes up sources anymore. I remember that kind of thing happening back when AI didn’t have internet access and there weren’t really agents yet. I realize some people are still back there, but in the future (that many of us are in) it’s basically solved. There are still logic mistakes and such; that stuff can’t be 100% depended on. But if you have a team of agents going back and forth to find an answer, then pass it to another team of agents to independently verify the answer, and cycle back if a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens sometimes with people.
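
                A minimal sketch of that generate-then-verify cycle (the prompts, the APPROVED convention, and the model name are assumptions for illustration):

                ```python
                # Hedged sketch of a draft/review/retry loop between two model roles.
                from openai import OpenAI

                client = OpenAI()

                def ask(system: str, user: str) -> str:
                    """One chat-completion call with a role-defining system prompt."""
                    resp = client.chat.completions.create(
                        model="gpt-4",  # placeholder model name
                        messages=[{"role": "system", "content": system},
                                  {"role": "user", "content": user}],
                    )
                    return resp.choices[0].message.content

                def answer_with_review(question: str, max_rounds: int = 3) -> str:
                    feedback = ""
                    for _ in range(max_rounds):
                        prompt = question if not feedback else f"{question}\nFix these flaws: {feedback}"
                        draft = ask("Answer with a cited source for every claim.", prompt)
                        review = ask("You are an independent checker. Reply APPROVED if every "
                                     "claim has a verifiable source, otherwise list the flaws.",
                                     draft)
                        if review.strip().startswith("APPROVED"):
                            break
                        feedback = review  # cycle back with the reviewer's objections
                    return draft

                print(answer_with_review("When do baby owls start vocalizing?"))
                ```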

                I don’t have the link on hand, but there have been studies showing GPT-3.5 working in agentic cycles performing as well as or better than GPT-4 out of the box. The article I saw that in was saying that, basically, people are already using what GPT-5 will most likely be, just by running teams of agents on the latest models.

    • CanadaPlus@lemmy.sdf.org · 8 months ago

      Extrapolation is, like, one notch above guessing, though. It’s not wrong, exactly, but I’m not convinced failing to do it is an error in every context.

      Mostly, you’re right: this article makes its argument by openly ignoring all the applications the technology has found. But anything where “hallucination” would be a problem might need a fundamentally different technology.

      • aiccount@monyet.cc · 8 months ago

        I think without anything akin to extrapolation, we just have to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not-too-distant future. Many people haven’t even considered what a world might be like where pretty much all the jobs people are doing now are easily automated. It is almost like instead of considering this, they are just clinging to some idea that the 100-meter wave hanging above us couldn’t possibly crash down.

        • CanadaPlus@lemmy.sdf.org · 8 months ago

          Since coming to Lemmy, I have had more conversations with people unreasonably doubting that anything will change, that’s true. Those people are guessing at best.

          There’s other data we have on this one. GPT-5 is coming, so near-term extrapolation is reasonable. Beyond that, exponential increases in compute have only linearly increased performance, and running out of internet to train on is increasingly a threat, so just adding more parameters can only be unsustainable. The period after that would be about using neural nets cleverly together with conventional algorithms, but it’s hard to know how far that can go. Anything from a spooky near-term hyperintelligence to another decades-long AI winter is possible.

          Physical jobs, at least, are looking fairly safe, so if you want job security become an electrician. Millions of years of evolving to scurry through chaotic, tangled environments is apparently hard to replicate. Even regulated public roadways have proven tricky.

          It is almost like instead of considering this, they are just clinging to some idea that the 100-meter wave hanging above us couldn’t possibly crash down.

          Honestly, the fact that serious, important people are talking about it at all is a pleasant surprise. I still have conversations where people complain about the freakish weather and then suddenly clam up after a while because they remember climate change is supposed to be a hoax. I don’t even try to rub it in; it just happens.

    • realitista@lemm.ee · 8 months ago

      My issue with generative AI is not that it doesn’t have uses, but that it seems to me that the vast majority of those uses are nefarious.

      As far as I can tell, it has the most potential for:

      • Creating sock puppet accounts on social media to sway public opinion

      • Making fake media / identity theft

      • Plagiarizing various art mediums and melding them together enough to make attribution difficult

      Other, positive use cases like summarization or reformatting seem to pale in comparison to the potential negative effects of the bad ones. There are also many marginal use cases, like coding or law, where you may save some time, but the review required is likely not much quicker than having a good programmer or lawyer just write it.

      • aiccount@monyet.cc · 8 months ago

        Most positive use cases are agent-based, and the average user doesn’t have access to good agent-based systems yet because they require a bit of willingness to do some “coding”. This will soon not be the case, though. I can give my crew of AI agents a mission, for example, “find all the papers on baby owl vocalizations and make 10 different charts of the frequency range relative to their average size after each of their first 10 weeks of life”, and come back an hour later to something that would have taken a grad student 100 hours just last year. Right now I have to wait an hour or so; soon it will be instant.
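
        A hedged sketch of how a mission like that gets delegated, using AutoGen’s assistant/executor pattern (the model name and config values are illustrative assumptions):

        ```python
        # Hedged AutoGen sketch: the assistant plans and writes analysis code, the
        # user proxy executes it locally and feeds results back until the job is done.
        import autogen

        llm_config = {"model": "gpt-4"}  # placeholder model configuration

        assistant = autogen.AssistantAgent(name="analyst", llm_config=llm_config)
        user_proxy = autogen.UserProxyAgent(
            name="runner",
            human_input_mode="NEVER",  # no babying each response
            code_execution_config={
                "work_dir": "mission",  # generated scripts run in ./mission
                "use_docker": False,    # run directly on the local machine
            },
        )

        user_proxy.initiate_chat(
            assistant,
            message=(
                "Find papers on baby owl vocalizations and chart vocal frequency "
                "range against average size for each of the first 10 weeks of life."
            ),
        )
        ```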

        The real usefulness of these agents today is enormous; it is just outside the view of many average people because their normal lives don’t require this kind of power.

      • CanadaPlus@lemmy.sdf.org · 8 months ago

        You forgot porn.

        Edit: Actually, the article mentions coding assistants and various interfaces. Not to mention the plagiarism thing is a misunderstanding. I’m not sure why I decided to jump on the jerk there; I disagree with you.

    • GenderNeutralBro@lemmy.sdf.org · 8 months ago

      Agreed. I mean, yeah, image generators are still very limited (or at least, difficult to use in an advanced, targeted way), but there’s a new research paper out every day detailing new techniques. None of the criticisms of Midjourney or Stable Diffusion today are likely to remain valid in a year, or even six months. And they’re already highly useful for certain tasks.

      Same with LLMs, only we’ve already reached the point where they are good enough for almost anything, if you care to build a good application around them. The problem with LLMs at this point is marketing; people expect them to be magic and are disappointed when they don’t live up to their expectations. They’re not magic, but they are extremely useful. Just please, for the love of god, do not treat them as information repositories…

    • emerald@beehaw.org · 8 months ago

      Isn’t the “trajectory” that these systems are incredibly unsustainable both economically and environmentally? I’d hope that a machine that uses a few thousand homes’ worth of energy to answer a single query would be more useful than “can generate boilerplate code for me” or whatever.

      • aiccount@monyet.cc · 8 months ago

        I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer, and it can do those things no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, in comparison, takes about half of that. If my computer spends 4 hours answering queries and making code, it uses 1 kWh, and that buys a whole lot of code and answers. The “powering a small town” part is a one-time cost paid when the model is trained, so to judge whether that is worth it, you have to distribute it over everyone who ends up using the model that is produced. The math for that would be a bit trickier.
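
        The arithmetic behind those numbers, written out (the training figures at the end are made-up placeholders, just to show the amortization):

        ```python
        # Energy math from the figures above: a PC drawing 6 kWh/day averages 0.25 kW.
        pc_power_kw = 6 / 24             # 0.25 kW average draw
        inference_kwh = pc_power_kw * 4  # 4 hours of queries and codegen
        print(inference_kwh)             # 1.0 kWh

        # The "powering a small town" cost is paid once, at training time, so it
        # has to be amortized over every user of the resulting model.
        training_kwh = 50_000_000        # hypothetical placeholder, not a real figure
        users = 10_000_000               # hypothetical placeholder
        print(training_kwh / users)      # 5 kWh of training energy per user
        ```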

        Compared to the amount of energy it would take to produce a group of people who can answer questions and write code, I’m very certain the AI-model method costs considerably less. Hopefully, we don’t start making our decisions about which one to produce based on energy efficiency. We may, though: if the people who choose the fate of the masses see us like livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, horses didn’t all end up living in paradise. There were just a whole lot fewer of them around.

      • d3Xt3r@beehaw.org · 8 months ago

        That’s going to change in the future with NPUs (neural processing units). They’re already being bundled with both regular CPUs (such as the Ryzen 8000 series) and mobile SoCs (such as the Snapdragon 8 Gen 3). The NPU included with the SD8Gen3, for instance, can run models like Llama 2 - something an average desktop would normally struggle with. Now, this is only the 7B model, mind you, so it’s a far cry from more powerful models like the 70B, but this will only improve in the future. Over the next few years, NPUs - and applications that take advantage of them - will be a completely normal thing, and they won’t require a household’s worth of energy. I mean, we’re already seeing various applications of it, e.g. in smartphone cameras, photo editing apps, digital assistants, etc. The next would, I guess, be autocorrect and word prediction, and I for one can’t wait to ditch our current, crappy Markov keyboards.
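
        For a sense of how little ceremony local 7B inference already takes, here is a hedged sketch with llama-cpp-python (the GGUF file name is a placeholder; any quantized 7B-class model works):

        ```python
        # Hedged sketch: run a quantized 7B model locally with llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder file name
            n_ctx=2048,     # context window
            n_threads=8,    # tune to the local hardware
        )

        out = llm(
            "Q: What is an NPU and why does it matter for local inference? A:",
            max_tokens=128,
            stop=["Q:"],    # stop before the model invents a follow-up question
        )
        print(out["choices"][0]["text"])
        ```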

  • Lad@reddthat.com · 8 months ago

    I enjoy image generation AI for its ability to turn that really specific picture you have in your head into something you can show to others within a matter of seconds.

  • null@slrpnk.net · 8 months ago

    Weird, I use it all the time. I’m even starting to use it for work, to save a ton of time on simple, time-consuming tasks.

  • GBU_28@lemm.ee · 8 months ago

    Its use in hybrid search, CRAG (corrective retrieval-augmented generation), and discussion systems as a human-in-the-loop augmentation is quite valuable. It saves analyst time and streamlines further review. In a sufficiently adversarial setup, where multiple models vet the responses for sanity with back-checks, it’s quite performant, with decent recall and accuracy.
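
    A rough sketch of that back-checking pattern (the retrieval step is stubbed out, and the model names are placeholder assumptions):

    ```python
    # Hedged sketch: one model drafts an answer from retrieved passages, a second
    # back-checks the draft against the same passages before a human sees it.
    from openai import OpenAI

    client = OpenAI()

    def complete(model: str, system: str, user: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content

    def vetted_answer(question: str, passages: list[str]) -> str:
        context = "\n---\n".join(passages)  # stand-in for a hybrid-search step
        draft = complete("gpt-4",  # placeholder drafting model
                         "Answer using only the provided passages.",
                         f"Passages:\n{context}\n\nQuestion: {question}")
        verdict = complete("gpt-4",  # placeholder checker; ideally a different model
                           "Check the answer against the passages. Reply SUPPORTED "
                           "or UNSUPPORTED with a one-line reason.",
                           f"Passages:\n{context}\n\nAnswer: {draft}")
        # Surface unsupported drafts to the human in the loop instead of hiding them.
        return draft if verdict.startswith("SUPPORTED") else f"[NEEDS REVIEW] {draft}"
    ```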

  • Murvel@lemm.ee · 8 months ago

    It clearly isn’t!

    A) Man has dreamt of Artificial Intelligence for decades now, often very much realizing the capabilities (and dangers) of such technology.

    B) AI in its current form already supports business, hobbies, creative work, etc., and the traffic and processing power it needs is constantly rising.

    I feel that with such a bold (and just plain incorrect) statement, the article can’t be worth much of a read.