Are all these data centers really going to run at full capacity when open models like Qwen 3.6 27B approach frontier performance but can run on consumer hardware? Sure, it’s slow for now, though there are tweaks to speed it up, and how long until open models run reasonably fast and give the frontier models a run for their money? My company MacBook can already run models like this, so will there be a point where companies stop paying hundreds of dollars per user per month for cloud AI and just have devs run open models on the laptops they already have? I definitely won’t be surprised if that’s the case.
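
Back-of-the-envelope, with made-up numbers (the $150/seat and the team size are assumptions, not anyone’s actual pricing):

```python
# Rough annual cost of cloud AI seats vs. open models on laptops devs already own.
# All figures are illustrative assumptions, not real pricing.
seat_monthly = 150.0  # assumed cloud AI subscription, $/user/month
devs = 50             # assumed team size

annual_cloud = seat_monthly * 12 * devs
print(f"Cloud AI for {devs} devs: ${annual_cloud:,.0f}/year")  # -> $90,000/year
# If the laptops are already on the books, the local path mostly costs
# setup time and electricity, so that cloud line item is the number to beat.
```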


An official member of the US military-industrial complex is making a phone with a proprietary OS that hoovers up your data and shoves AI slop in your face 24/7. What’s not to like?


Maybe try old.reddit.com?
Lots of fond memories watching Planet Earth in the evenings with a cup of herbal tea.


I got an espresso machine a few years ago and learned to make a proper latte with it. At this point, a $9 cup of charry sugar water made by a teenager in a fast food restaurant doesn’t really appeal to me.
This is the bee’s knees.


Just as open weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, but it can run on a Mac with an M-series SoC. I tested it out today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try with llama.cpp tomorrow and mess around tweaking it for speed.
https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model
AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.
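
If anyone wants to poke at it themselves, the official `ollama` Python client makes a quick local test easy. A minimal sketch, assuming the Ollama daemon is running and guessing at the model tag (swap in whatever tag it’s actually published under):

```python
# Minimal local-inference sketch with the official `ollama` Python client.
# Assumes `ollama serve` is running and the model has been pulled;
# the "qwen3.6:27b" tag is a guess, not a confirmed registry name.
import ollama

MODEL = "qwen3.6:27b"  # hypothetical tag

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Reverse a linked list in Python."}],
)
print(response["message"]["content"])
```

Passing `stream=True` to `ollama.chat` yields tokens as they’re generated, which at least makes the slowness easier to sit through while I figure out proper tuning.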


If we are going to eschew open source projects from shitty tech companies, then there’s a pretty long list.


Quite an interesting development, and here’s hoping it makes it to production.


Right? I’ve got the original and the ’90s version in Jellyfin on my home lab server. 🧟‍♂️


Cool, let me know when the model leaks. 🥱


I know people who lied about having a degree, could do the job, and never got caught. I suppose speedrunning a degree at a degree mill yields a similar level of education, except with a piece of paper to show for it.


Part of my reason for self-hosting is to keep what I’m watching from being tracked, but I did use the Trakt trending list to discover new content. Their website no longer shows the trending list without an account, so I can’t be bothered with Trakt anymore.


I should not have laughed so hard at that. I’m a horrible person.


I decided against Backblaze for server backups because they charge for certain API calls, and I ended up exceeding the quota while testing on the free tier. I was experimenting with encrypted backups and I’m still not sure how I exceeded it, but it really put me off that I could get a surprise bill from experimenting without even exceeding my storage quota.

I went with iDrive e2 specifically because they don’t charge API fees, and it has worked fine for the last couple of years. My storage utilization has grown and I’ve been charged extra, which is expected, whereas API calls would be harder to predict depending on what I do in a given month. For self-hosting, I want easy, predictable pricing and no surprise bills. It’s enough of a chore to manage cloud spend at work without it being a headache at home too.
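
For anyone comparing, e2 speaks the S3 API, so a plain boto3 client works against it. A minimal sketch; the endpoint, bucket, and file names below are placeholders, not my real setup:

```python
# Minimal sketch: pushing a backup archive to an S3-compatible store
# (iDrive e2 exposes the S3 API). All names here are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://YOUR-REGION.idrivee2.com",  # placeholder; use your e2 endpoint
    aws_access_key_id="ACCESS_KEY",                   # from the e2 console
    aws_secret_access_key="SECRET_KEY",
)

# Upload an archive that was already encrypted client-side,
# so the provider never sees plaintext.
s3.upload_file(
    "backup-2024-01-01.tar.gz.age",         # local file, encrypted before upload
    "my-backups",                           # placeholder bucket name
    "server/backup-2024-01-01.tar.gz.age",  # object key in the bucket
)
```

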
I first heard “pizzle” in Kingdom Come: Deliverance, in the line “Are you pulling my pizzle?”