I’m not sure either, Win 10/11 are pretty quick to get going and Ubuntu is not much longer than that. If I have to hard reset the mbp for work, it’s a nice block of slacker time :)
For the really old stuff, I used to do NetBSD. I’m sure their 32-bit x86 support is still top notch.
These are amazing. Dell, Lenovo and I think HP made these tiny things and they were so much easier to get than Pi’s during the shortage. Plus they’re incredibly fast in comparison.
Bad article title. This is the “Textbooks are all you need” paper from a few days ago. It’s programming focused and I think Python only. For general purpose LLM use, LLaMA is still better.
I hate these filthy neutrals…
I paid $1100 for a 3070 during the pandemic with a newegg bundle deal (trash stuff they couldn’t sell). I already had a 2070 and it was a complete waste of money.
The advancements in this space have moved so fast, it’s hard to build a predictive model of where we’ll end up and how fast we’ll get there.
Meta releasing LLaMA produced a ton of innovation from open source that showed you could run models nearly on par with ChatGPT, with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a high strategic priority, with blank cheque budgets. Whole industries (also deeply funded) are popping up around solving the context window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?