

Given how few upvotes this has, it seems people in this thread don’t like Microsoft’s policy, but also have a moral objection to running a script to get the extended updates free.
I use k3s with Calico so I can have k8s network policies for each service I’m running.
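For anyone curious what that looks like in practice, here's a minimal sketch. The namespace, labels, and port are all made up; the policy just says only pods labeled app=frontend can reach the service's pods on port 8080, and Calico enforces standard networking.k8s.io/v1 NetworkPolicy objects like this one.

```python
#!/usr/bin/env python3
"""Minimal sketch: lock a service down so only its frontend can reach it.

The namespace, labels, and port are hypothetical; adjust to your own setup.
"""
import subprocess

POLICY = """
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: myservice
spec:
  podSelector:
    matchLabels:
      app: myservice
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
"""

# Apply the policy; traffic to app=myservice pods from anything other than
# app=frontend pods in the namespace is then dropped.
subprocess.run(["kubectl", "apply", "-f", "-"], input=POLICY, text=True, check=True)
```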
NVIDIA definitely dominates for specialized workloads. Look at these Blender rendering benchmarks and notice AMD doesn’t appear until page 3. I wish there were an alternative to NVIDIA OptiX that was as fast for path tracing, but unfortunately there isn’t. Buy an AMD card if you’re just gaming, but you’re unfortunately stuck with NVIDIA if you want to do path-traced rendering cost effectively:
Edit:
Here’s hoping AMD makes it to the first page with next generation hardware like Radiance Cores:
I use Restic and also use Backrest to have a UI to browse my repos. I would use Backrest for everything, but I’d rather have my backup config completely source controlled.
Plot twist: they’re the same.
AI isn’t taking the jobs, dipshit rich assholes are cutting the jobs. Taking a job implies doing the job, and from that perspective, the remaining people who weren’t laid off are taking the jobs, not AI.
Some of their videos are pretty good, but taking funding from billionaires is never a good look.
I normally use ADB anyway, but it wouldn’t surprise me if that becomes more locked down as well. For example, I believe Meta Quest requires a developer account with a credit card attached to even put it in developer mode, and I worry that kind of bullshit will become the norm.
Discord 😬
Edit:
DuckDuckGo’s AI says this, which sounds interesting if true, though it doesn’t provide a source to confirm:
Chaptarr is an upcoming project that is a heavily revamped fork of Readarr, currently in closed Alpha phase, and aims to improve interoperability with Readarr. You can find more information and updates on its development on GitHub.
Sometimes, I’m inclined to swear at it, but I try to be professional on work machines with the assumption that I’m being monitored in one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language in that case, especially if I can delete it should it become vengeful.
I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:
Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?
Copilot: You’re absolutely right to question this!
Me: 🤦‍♂️
Is there anything open source that provides the same experience as Google Admin Console, where IT admins can manage everything from a single pane of glass? I’d imagine schools use Chromebooks because Google has put a lot of resources into making them a simple and cost effective option for schools, where IT budgets and staffing are usually pretty limited. An open source software suite that provides a similar experience would seemingly be a compelling alternative. I’d imagine there would need to be a company hosting the software for a fee, with the funds used to build on top of existing open source software to make a seamless and unified experience that works well. Barring that, I don’t imagine any school IT admin has sufficient bandwidth to buy a bunch of cheap laptops, install Linux on them, self-host Nextcloud, secure and lock down everything, etc. I know next to nothing about how IT in schools is managed, so this is a lot of conjecture that could be wrong.
I originally thought one of the drives in my RAID1 array was failing, but I noticed that copying data was yielding btrfs corruption errors on both drives that could not be fixed with a scrub, and I was getting btrfs corruption errors on the root volume as well. I figured it would be quite an odd coincidence if my main SSD and 2 hard disks all went bad at once, and I happened upon an article about how corrupt data can also occur if the RAM is bad. I also ran SMART tests, and everything came back with a clean bill of health. So, I installed Memtest86+, booted into it, and it immediately started showing errors on the single 16 GiB stick I was using. I happened to have a spare stick of a different brand, and that one passed the memory test with flying colors. After swapping it in, all the corruption errors went away and everything has been working perfectly ever since.
I will also say that legacy file systems like ext4, with no checksums, won’t even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install had gone bad, so I reinstalled with btrfs on top of LUKS. When I started seeing corruption errors on the main drive too, it occurred to me that 3 different drives could not possibly have had a hardware failure at the same time and something else must be going on. I was also previously using ext4 and mdadm for my RAID1 and migrated it to btrfs a while back. As far back as a year ago, I noticed that certain installers and other things that previously worked no longer did. It happened infrequently and didn’t register with me as a potential hardware problem at the time, but I think the RAM had actually been going bad progressively for quite a while. btrfs with regular scrubs would’ve made it abundantly clear much sooner that files were getting corrupted and something was wrong.
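As a rough illustration of what “regular scrubs” could look like, here’s a small sketch you could run from a cron job or systemd timer. The mount points are placeholders, and the only real assumption is btrfs-progs being installed; `btrfs scrub start -B` stays in the foreground and exits non-zero if problems were found.

```python
#!/usr/bin/env python3
"""Sketch of a scheduled scrub job; the mount points below are placeholders."""
import subprocess
import sys

MOUNTS = ["/", "/mnt/raid1"]  # btrfs filesystems to check

failed = False
for mount in MOUNTS:
    # -B keeps the scrub in the foreground so the exit code reflects the result;
    # a non-zero exit means uncorrectable errors (or the scrub couldn't run).
    result = subprocess.run(["btrfs", "scrub", "start", "-B", mount])
    if result.returncode != 0:
        print(f"scrub found problems on {mount} -- investigate before trusting backups",
              file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)
```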
So, I’m quite convinced at this point that RAID is not a backup, even with btrfs’s ability to self-heal, and that simply copying data elsewhere is not a backup either, because in both cases something like bad RAM can destroy data during the copying process, whereas older snapshots in the cloud will survive that kind of hardware failure. Older backups of data that wasn’t copied with the faulty RAM may be fine as well, but you’re taking a chance that a recent backup update may overwrite good data with bad data. I was previously using Rclone for most backups while testing Restic with daily, weekly, and monthly snapshots for a small subset of important data over the last few months. After finding some data that was only recoverable from a previous Restic snapshot, I’ve since switched to using Restic exclusively for anything important enough for cloud backups. I was mainly concerned about the space requirements of keeping historical snapshots, and I’m still tweaking retention policies and taking separate snapshots of different directories with different retention policies according to my risk tolerance for each directory I’m backing up. For some things, I think even local btrfs snapshots would suffice, with the understanding that they reduce recovery time but aren’t really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. I suppose if you don’t have something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may have an unnoticed problem.
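To make the per-directory retention idea concrete, here’s roughly what I mean. The paths and keep counts are placeholders, and it assumes RESTIC_REPOSITORY and RESTIC_PASSWORD are set in the environment (or you pass -r / --password-file yourself).

```python
#!/usr/bin/env python3
"""Sketch: per-directory restic snapshots with different retention policies.

Paths and keep counts are placeholders; RESTIC_REPOSITORY and RESTIC_PASSWORD
are expected to be set in the environment.
"""
import subprocess

PLANS = [
    # Irreplaceable data: keep lots of history.
    {"path": "/home/me/documents",
     "keep": ["--keep-daily", "7", "--keep-weekly", "4", "--keep-monthly", "12"]},
    # Replaceable data: a couple of recent snapshots is enough.
    {"path": "/srv/app-config",
     "keep": ["--keep-weekly", "2", "--keep-monthly", "3"]},
]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for plan in PLANS:
    # Take a snapshot of just this directory.
    run(["restic", "backup", plan["path"]])
    # Apply this directory's retention policy and prune unreferenced data.
    run(["restic", "forget", "--path", plan["path"], *plan["keep"], "--prune"])
```

Keeping something like this in git is also how I get the “backup config completely source controlled” part.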
Don’t understand the downvotes. This is the type of lesson people have learned from losing data, and there’s no sense in learning it the hard way yourself.
I believe Waymo’s strategy has always been to shoot for level 5 autonomous driving and not bother with the others. Tesla not following that strategy has proven them correct. You either have a system that is safe, reliable, and fully autonomous, or you’ve got nothing. Not that Waymo has a system at this point that can work under all conditions, but their approach is definitely superior to Tesla’s if nothing else.
Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
This article sums up a Stanford study of AI and developer productivity. TL;DR - the net productivity boost is a modest 15-20%, and it drops to somewhere between negative and 10% in complex, brownfield codebases. This tracks with my own experience as a dev.
https://www.linkedin.com/pulse/does-ai-actually-boost-developer-productivity-striking-çelebi-tcp8f
It’s a weird concept that you buy a device and then have to find an exploit that hasn’t been patched in order to do what you like with it, as though you’re a hacker trying to breach someone else’s system, except it’s your own system you’re trying to breach.
Except the way it actually works is Larry, Jensen, and Sam keep the money while the rest of us eat shit.