• Mordikan@kbin.earth · +1 · 4 hours ago

    This is the definition of a zero effort post.

    You don’t want to put forth the effort to bug hunt; you want an AI agent to bug hunt for you. You don’t even want to learn to set up the agent; you want other people to explain, step by step, how to do that for you.

    I’m assuming you aren’t even going to review before it submits. Honestly though, how would you review it, given that you don’t know anything about the topic it’s submitting on?

  • artyom@piefed.social · +37/−2 · 19 hours ago

    Dear God, please don’t. FF does not want your AI slop bug reports. You people are ruining open source.

  • utopiah@lemmy.ml · +2/−1 · 9 hours ago

    This makes me genuinely curious, who thought that would be a good idea?

    It feels like a lot of “contribution” to open source is suddenly fueled by AI hype. Is it some LinkedIn/TikTok “trick” being amplified, the idea that one will somehow get a very well paid job at a BigTech company by racking up contributions on popular projects?

    Where does this trend actually come from?

    Did anybody doing this ever bother checking the contribution guidelines to see which tasks should actually be prioritized, and if so, with which tools?

    This seems like a recurring pattern so it’s not a random idea someone had.

  • org@lemmy.org · +14 · 18 hours ago

    Pretty sure if you have to ask how to do it, you’re not qualified to do it.

  • Hexarei@beehaw.org · +3 · 14 hours ago

    run a local LLM like Claude!

    Look inside

    “Run ollama”

    Ollama will almost always be slower than running vLLM or llama.cpp; nobody should be suggesting it for anything agentic. On most consumer hardware, the availability of llama.cpp’s --cpu-moe flag alone is absurdly good, and worth the effort of familiarizing yourself with llama.cpp instead of Ollama.
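
    For reference, a minimal sketch of what that looks like; the model path, port, and layer count below are placeholders, not from this thread:

```shell
# hypothetical llama.cpp server invocation; model path and port are assumptions.
# --cpu-moe keeps the MoE expert weights in system RAM, so only the dense
# layers and KV cache need to fit in VRAM; -ngl 99 offloads the remaining
# layers to the GPU.
llama-server -m ./models/some-moe-model-Q4_K_M.gguf \
  --cpu-moe -ngl 99 --port 8080
```

    Check `llama-server --help` in your build for the exact flag names; recent builds also offer finer-grained control over how many expert layers stay on the CPU.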

    • ctrl_alt_esc@lemmy.ml · +1 · 7 hours ago

      I have used Ollama so far and it’s indeed quite slow. Can you recommend a good guide for setting up llama.cpp (on Linux)? I have Ollama running in a Docker container with Open WebUI; that kind of setup would be ideal.

      • Hexarei@beehaw.org · +1 · 6 hours ago

        I just run the llama-swap docker container with a config file mounted, set to listen for config changes so I don’t have to restart it to add new models. I don’t have a guide besides the README for llama-swap.
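
        A rough sketch of that setup, under assumptions (image tag, ports, paths, and model name are all placeholders; check the llama-swap README for the exact config keys and flags):

```shell
# write a minimal llama-swap config; llama-swap fills in ${PORT} itself
cat > config.yaml <<'EOF'
models:
  "qwen-coder":
    cmd: llama-server --port ${PORT} -m /models/qwen2.5-coder-7b-q4_k_m.gguf
EOF

# run the llama-swap container with the config and models mounted;
# --watch-config reloads the file on change instead of requiring a restart
docker run -it --rm -p 9292:8080 \
  -v "$PWD/config.yaml:/app/config.yaml" \
  -v "$PWD/models:/models" \
  ghcr.io/mostlygeek/llama-swap:cuda --watch-config
```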

  • hendrik@palaver.p3x.de · +6/−1 · edited · 19 hours ago

    Did you forget the body text? Or is this some bug? It looks like a question here, and like an AI-fabricated tutorial in the original version of this cross-post.

  • ZWQbpkzl [none/use name]@hexbear.net · +2 · 18 hours ago

    You’ll have to be more specific about how Anthropic is debugging Firefox; there are many sorts of possible setups. In general, though, you’ll need:

    • an LLM model file
    • some OpenAI-compatible server, e.g. LM Studio, llama.cpp, Ollama
    • some sort of client to that server; there’s a myriad of options here. OpenCode is the most like Claude, but there are also more modular, programmatic clients, which might suit a long-term task
    • the Firefox source code and/or an MCP server via some plugin

    You’ll also need to know which models your hardware can run. “Smarter” models require more RAM. Models can run on both CPUs and GPUs, but they run way faster on the GPU, if they fit in the VRAM.
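
    Wired together, the pieces above might look like this (the model file, port, and endpoint are placeholders, not a recommendation):

```shell
# 1. serve a local GGUF model over an OpenAI-compatible API (llama.cpp here)
llama-server -m ./models/qwen2.5-coder-7b-q4_k_m.gguf --port 8080 &

# 2. sanity-check the endpoint; any OpenAI-compatible client (OpenCode etc.)
#    can then be pointed at http://localhost:8080/v1
curl -s http://localhost:8080/v1/models
```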

  • etchinghillside@reddthat.com · +4/−3 · edited · 2 hours ago

    Props for putting something together and not burying it in a 20 minute YouTube video.

    My mind initially went to OpenCode. I’m not familiar with lite-cc; any reason you opted for that? Is it just kinder on smaller local models?

    • hendrik@palaver.p3x.de · +2 · edited · 10 hours ago

      Judging by the GitHub repo, it’s the very basic cousin, written (vibe-coded) in Python. It doesn’t do planning or anything; it just prefaces your command with a system prompt telling your model it’s a coding assistant, and gives it tool access to read and write files and execute commands.
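
      That pattern, as a bare one-shot sketch (the endpoint URL and model name are assumptions; a real agent would loop, parsing tool calls from the reply and feeding the results back in):

```shell
# prepend a system prompt to the user's command and send both to an
# OpenAI-compatible chat endpoint
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "local",
    "messages": [
      {"role": "system", "content": "You are a coding assistant with tools to read/write files and run shell commands."},
      {"role": "user", "content": "Find the bug in src/parser.c"}
    ]
  }'
```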

      And it seems no human uses it: there are no interactions like bug reports, PRs, or people who star the repo.