I think my issue is I’m trying to push 4K. And I like pretty high averages. That’s too much to ask, heh.
IIRC FSR4 is 9000-series-only.
Out of curiosity, what settings do you run on your 3090?
I followed this guide to try and optimize RT, and even ran a mod that hacked in FG3.1 (which I found pretty underwhelming TBH): https://www.nexusmods.com/games/cyberpunk2077/collections/lv4wpp
And… it’s way too slow, even with copious DLSS and tons of tweaking. The most I can handle is just RT reflections with all the other settings off.
brucethemoose@lemmy.world to Technology@lemmy.world • Leading AI Models Are Completely Flunking the Three Laws of Robotics · English · 8 · 23 hours ago
Clickbait.
brucethemoose@lemmy.world to Technology@lemmy.world • Leading AI Models Are Completely Flunking the Three Laws of Robotics · English · 51 · 23 hours ago
There may be thought in a sense.
An analogy might be a static biological “brain” custom-grown to predict a list of possible next words in a block of text. It’s thinking, sorta. Maybe it could even acknowledge itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.
And if one wants to go down the rabbit hole of “well, there are different types of sentience, the lines blur,” yada yada, with the end point of that being to treat these things like they are…
All ML models are static tools.
For now.
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 4 · 2 days ago
It depends!
Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD, but it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip, or get lucky with Docker.
Base llama.cpp is fine, as are forks like kobold.cpp ROCm. This route is more doable, with much less hassle.
The AMD Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next-best hardware, but unfortunately AMD is not really interested in competing with Nvidia on high-VRAM offerings :'/. They don’t want money, I guess.
And there are… quirks, depending on the model.
I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.
NPUs are mostly a nothingburger so far, only good for tiny models.
Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.
A lot of people do offload MoE models to Threadripper or EPYC CPUs, via ik_llama.cpp, transformers or some Chinese frameworks. That’s the homelab way to run big models like Qwen 235B or deepseek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090 and put more of the money in the CPU platform.
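If you just want to see the offload idea in plain Python (not the ik_llama.cpp way, which is smarter about what stays on the GPU), a rough sketch with transformers + accelerate looks like this; the model ID and dtype are just placeholders:

```python
# Rough sketch of CPU offloading with plain transformers + accelerate.
# The model ID is a placeholder; device_map="auto" spills whatever doesn't
# fit in VRAM over to system RAM (slow, but it runs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B"  # placeholder, pick your MoE

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or a quantized variant to fit at all
    device_map="auto",           # GPU first, overflow to CPU RAM
)

inputs = tokenizer("Why offload MoE experts to the CPU?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```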
You won’t find a good comparison because it literally changes by the minute. AMD updates ROCm? Better! Oh, but something broke in llama.cpp! Now it’s fixed and optimized 4 days later! Oh, architecture change, now it doesn’t work again. And look, exl3 support!
You can literally bench it in a day and have the results be obsolete the next, pretty often.
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 1 · 2 days ago
Depends. You’re in luck, as someone made a DWQ (which is the most optimal way to run it on Macs, and should work in LM Studio): https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit-DWQ/tree/main
It’s chonky, though. The weights alone are like 40GB, so assume a 50GB VRAM allocation for some context. I’m not sure what Macs that equates to… 96GB? Can the 64GB one allocate enough?
Otherwise, the requirement is basically a 5090. You can stuff it into 32GB as an exl3.
Note that it is going to be slow on Macs, being a dense 72B model.
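If you do try it on a Mac outside LM Studio, a minimal sketch with the mlx-lm Python package would be something like this (prompt and token budget are arbitrary):

```python
# Minimal mlx-lm sketch for the DWQ quant linked above.
# Needs `pip install mlx-lm`; expect it to be slow on a dense 72B.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Kimi-Dev-72B-4bit-DWQ")

prompt = "Write a Python function that reverses a linked list."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```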
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 1 · 3 days ago
One last thing: I’ve heard mixed things about 235B, so there might be a smaller, better-suited LLM for whatever you do.
For instance, Kimi 72B is quite a good coding model: https://huggingface.co/moonshotai/Kimi-Dev-72B
It might fit in vLLM (as an AWQ) with 2x 4090s, and it would easily fit in TabbyAPI as an exl3: https://huggingface.co/ArtusDev/moonshotai_Kimi-Dev-72B-EXL3/tree/4.25bpw_H6
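For the vLLM route, a rough sketch of what that looks like (the AWQ repo name below is made up; substitute a real quant if one exists):

```python
# Hedged vLLM sketch: tensor-parallel across 2x 4090s with an AWQ quant.
# The model repo below is hypothetical; swap in a real AWQ of Kimi-Dev-72B.
from vllm import LLM, SamplingParams

llm = LLM(
    model="someone/Kimi-Dev-72B-AWQ",  # hypothetical repo id
    quantization="awq",
    tensor_parallel_size=2,            # split across both 4090s
    max_model_len=16384,               # keep context modest to fit the KV cache
)

params = SamplingParams(temperature=0.6, max_tokens=512)
out = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(out[0].outputs[0].text)
```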
As another example, I personally use Nvidia Nemotron models for STEM stuff (other than coding). They rock at that, specifically, and are weaker elsewhere.
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 2 · 3 days ago
Ah, here we go:
https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
Ubergarm is great. See this part in particular: https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quick-start
You will need to modify the syntax for 2x GPUs. I’d recommend starting with an f16/f16 K/V cache at 32K (to see if that’s acceptable, since then there’s no dequantization compute overhead), and try not to go lower than q8_0/q5_1 (as the V cache is more amenable to quantization).
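If you drive it from Python via the llama-cpp-python bindings instead of the raw CLI, the same knobs look roughly like this; treat the exact kwargs and constants as assumptions and double-check them against your bindings version:

```python
# Rough llama-cpp-python sketch of the 2x GPU + K/V cache settings above.
# GGML_TYPE_* constants and kwargs may differ by version; verify locally.
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-235B-A22B-Q4_K_M.gguf",  # placeholder GGUF filename
    n_gpu_layers=-1,                  # offload everything that fits
    tensor_split=[0.5, 0.5],          # 2x GPUs; adjust per-card VRAM
    n_ctx=32768,                      # start at 32K as suggested above
    flash_attn=True,
    type_k=llama_cpp.GGML_TYPE_F16,   # start f16/f16; drop toward q8_0/q5_1 if VRAM is tight
    type_v=llama_cpp.GGML_TYPE_F16,
)

print(llm("Q: What is an MoE model? A:", max_tokens=128)["choices"][0]["text"])
```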
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 4 · 3 days ago
> Qwen3-235B-A22B-FP8
Good! An MoE.
> Ideally its maximum context length of 131K, but I’m willing to compromise.
I can tell you from experience that all Qwen models are terrible past 32K. What’s more, to go over 32K you have to run them in a special “mode” (YaRN) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
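For reference, that “mode” is basically a rope_scaling override in the model config. A sketch of what enabling static YaRN looks like (key names shift between transformers versions, so verify before trusting it):

```python
# Sketch: turning on static YaRN for a Qwen checkpoint via a config override.
# A factor of 4.0 over the native 32K window stretches context toward ~128K,
# but the scaling applies at every length, which is why sub-32K quality drops.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "Qwen/Qwen3-235B-A22B",  # example model id
    rope_scaling={
        "rope_type": "yarn",  # older transformers versions use "type" instead
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
print(cfg.rope_scaling)
```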
Also, you lose a lot of quality with FP8/AWQ quantization unless the model is natively FP8 (like DeepSeek). Exllama and ik_llama.cpp quants are much higher quality, and their low-batch performance is still quite good. Also, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp’s is good and exllama’s is excellent, making vLLM less than ideal for >16K contexts. Its niche is highly parallel, low-context serving.
> My current setup is already: Xeon w7-3465X, 128GB DDR5, 2x 4090
Honestly, you should be set now. I can get 16+ t/s with high context Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8 channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B, with the right quantization, and you could speed it up by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)
It is poorly documented, though. The general strategy is to keep the “core” of the LLM on the GPUs while offloading the less compute-intense experts to RAM, and it takes some tinkering. There’s even a project that tries to calculate the split automatically:
https://github.com/k-koehler/gguf-tensor-overrider
ik_llama.cpp can also use special GGUFs that regular llama.cpp can’t take, for faster inference in less space. I’m not sure if one for 235B is floating around Hugging Face; I will check.
Side note: I hope you can see why I asked. The web of engine strengths/quirks is extremely complicated, heh, and the answer could be totally different for different models.
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Very large amounts of gaming gpus vs AI gpus · English · 9 · 3 days ago
Be specific!
- What model size (or which model) are you looking to host?
- At what context length?
- What kind of speed (tokens/s) do you need?
- Is it just for you, or many people? How many? In other words, should the serving be parallel?
So it depends, but the sweet-spot option for a self-hosted rig, OP, is probably:
- One 5090 or A6000 Ada GPU. Or maybe 2x 3090s/4090s, underclocked.
- A cost-effective EPYC CPU/mobo
- At least 256 GB DDR5
Now run ik_llama.cpp, and you can serve DeepSeek 671B faster than you can read, without burning your house down with a pile of H200s: https://github.com/ikawrakow/ik_llama.cpp
It will also do for dots.llm, Kimi, and pretty much any of the other mega MoEs du jour.
But there are all sorts of niches. In a nutshell, don’t think “How much do I need for AI?” but “What is my target use case, what model is good for that, and what’s the best runtime for it?” Then build your rig around that.
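And once an ik_llama.cpp / llama.cpp-style server is up, anything OpenAI-compatible can talk to it. A minimal client sketch, assuming the server’s OpenAI-style API is on the default localhost port:

```python
# Minimal client sketch against a local llama.cpp/ik_llama.cpp server.
# Assumes the server exposes its OpenAI-compatible API on port 8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local",  # most local servers ignore or loosely match this field
    messages=[{"role": "user", "content": "Summarize why MoE models offload well to CPU RAM."}],
)
print(resp.choices[0].message.content)
```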
brucethemoose@lemmy.world to Mildly Infuriating@lemmy.world • Reddit bought giant ads in Paris, urging people to join · English · 6 · 3 days ago
Unfortunately, most are moving to Discord :(
brucethemoose@lemmy.world to Games@lemmy.world • Vintage gaming advertising pictures: a gallery · English · 4 · 3 days ago
They need to give it to the current marketing team. And save some for me.
brucethemoose@lemmy.world to No Stupid Questions@lemmy.world • How did websites like TinEye recognize cropped photos of the same image (and other likened pictures), without the low-entry easyness of LLM/AI Models these days? · 3 · 3 days ago
> making the most with what you have
That was, indeed, the motto of ML research for a long time. Just hacking out more efficient approaches.
It’s people like Altman that introduced the idea of not innovating and just scaling up what you already have. Hence many in the research community know he’s full of it.
brucethemoose@lemmy.world to No Stupid Questions@lemmy.world • How did websites like TinEye recognize cropped photos of the same image (and other likened pictures), without the low-entry easyness of LLM/AI Models these days? · 6 · 3 days ago
Oh, and to answer this specifically: Nvidia has been used in ML research forever. It goes back to 2008 and stuff like the desktop GTX 280/CUDA 1.0. Maybe earlier.
Most “AI accelerators” are basically the same thing these days: overgrown desktop GPUs. They have pixel shaders, ROPs, video encoders and everything, with the one partial exception being the AMD MI300X and beyond (which are missing ROPs).
CPUs were used, too. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: https://www.servethehome.com/facebook-introduces-next-gen-cooper-lake-intel-xeon-platforms/
brucethemoose@lemmy.world to No Stupid Questions@lemmy.world • How did websites like TinEye recognize cropped photos of the same image (and other likened pictures), without the low-entry easyness of LLM/AI Models these days? · 11 · 3 days ago
Machine learning has been a field for years, as others said, but Wikipedia would be a better introduction to the topic. In a nutshell, it’s largely about predicting outputs based on trained input examples.
It doesn’t have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series-prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though it’s not entirely machine-learning based. IDK what TinEye does in their backend, but there are some more “oldschool” approaches using traditional programming techniques, generating signatures for images that can be easily compared in a huge database.
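That “signature” trick is easy to play with yourself. A tiny sketch with the imagehash library (not what TinEye actually runs, just the same flavor of idea):

```python
# Perceptual-hash sketch: near-duplicate images produce hashes with a small
# Hamming distance, so a lookup only needs to compare compact signatures.
from PIL import Image
import imagehash

h1 = imagehash.phash(Image.open("original.jpg"))
h2 = imagehash.phash(Image.open("maybe_a_crop.jpg"))

distance = h1 - h2  # Hamming distance between the two 64-bit hashes
print(distance, "-> likely the same image" if distance <= 8 else "-> probably different")
```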
You’ve probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They’re tools.
Separately, image-similarity metrics (like LPIPS or SSIM) that measure the difference between two images as a number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. These are not usually machine-learning based themselves, barring a few exceptions like VMAF (which Netflix developed for video).
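SSIM, for example, is basically a one-liner with scikit-image (a sketch; the two images have to be the same dimensions):

```python
# SSIM sketch with scikit-image: 1.0 means identical, lower means less similar.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

img_a = np.array(Image.open("a.png").convert("L"))  # grayscale for simplicity
img_b = np.array(Image.open("b.png").convert("L"))

score = ssim(img_a, img_b, data_range=255)
print(f"SSIM: {score:.3f}")
```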
Text embedding models do the same with text. They are ML models.
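Same idea with text, sketched with sentence-transformers (the model name here is just a small, common default):

```python
# Text-embedding sketch: cosine similarity between sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, common default
emb = model.encode(["a cat sitting on a mat", "a kitten resting on a rug"])

print(util.cos_sim(emb[0], emb[1]).item())  # close to 1.0 for similar meanings
```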
LLMs (aka models designed to predict the next “word” in a block of text, one at a time), as we know them, have a particularly interesting history, going back to Google’s early transformer-era models (BERT, if I even remember the name correctly). There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before the AI-bro marketers poisoned the well).
brucethemoose@lemmy.world to Asklemmy@lemmy.ml • Lemmy, what's the meaning, or point if you prefer, of life? I know 42, but I'm serious. Nothing lasts, everything is meaningless - are we just amusing ourselves until death? · 10 · 4 days ago
Other people.
Make connections in your little circle/tribe; make people happy. It’s our biology, it’s what we evolved to do, and it’s what you leave behind.
brucethemoose@lemmy.world to World News@lemmy.world • Gay blessings 'will remain' under Pope Leo, Vatican doctrine chief says · English · 41 · 6 days ago
In the future, when we’re transcendent tentacled robofurries doing poly in virtual space, on drugs (think Yivo from Futurama), we will look back in confusion at why so many people hate homosexuality so much. Like… don’t they have other things to worry about?
Or humanity will be all dead, I guess.
And I’m talking about the mega conservatives protesting this; at least the Vatican is baby stepping and trying to minimize their cruelty.
Coffee Stain’s another good example on the bigger end.
It does seem like there’s a danger zone behind a certain size threshold. It makes me worry for Warhorse (the KCD2 dev), which plans to expand beyond 250.
Eh, there’s not as much attention paid to them working across hardware, because AMD prices its hardware uncompetitively (hence devs don’t test on it much), and AMD itself focuses on the MI300X and above.
Also, I’m not sure what layer one needs to get ROCm working.