• 0 Posts
  • 95 Comments
Joined 10 months ago
Cake day: February 15th, 2024

  • Not to defend Nvidia entirely, but there used to be real cost savings from die shrinks, since each new process node delivered a substantial jump in transistor density. Node improvements in recent years have been smaller, so Nvidia has to use larger and larger dies to increase performance despite them. That leads to things like the 400 W 4090 (even though it’s significantly more efficient per watt), and it means fewer GPUs per silicon wafer, since wafer sizes are standardized around the extremely specialized chip manufacturing equipment. Fewer dies per wafer means higher per-chip costs by a pretty big factor (rough math in the sketch below). That being said, they’re certainly… “Proud of their work”.
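
    To put rough numbers on the dies-per-wafer point: a common first-order approximation is usable wafer area divided by die area, minus a correction for partial dies lost at the wafer’s edge. Here’s a minimal Python sketch; the die sizes are illustrative round numbers, not actual Nvidia specs.

```python
import math

# Rough dies-per-wafer estimate (a common first-order formula, not any
# fab's actual yield model): wafer area divided by die area, minus a
# correction term for partial dies lost around the wafer's edge.
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_MM = 300  # modern fabs standardize on 300 mm wafers

# Illustrative die areas: ~300 mm^2 for a midrange GPU vs ~600 mm^2 for a
# large flagship die (hypothetical round numbers, not exact product specs).
for die_mm2 in (300, 600):
    print(f"{die_mm2} mm^2 die: ~{dies_per_wafer(WAFER_MM, die_mm2)} dies/wafer")
```

    With those numbers, doubling the die area drops you from roughly 197 to roughly 90 candidate dies per 300 mm wafer, and that’s before defects, which disproportionately hurt large dies, cut the usable count further.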

  • I’d recommend against it. Apple’s software ecosystem isn’t friendly for self-hosting anything: storage is difficult to add, RAM is impossible to upgrade, and you’ll be beholden to macOS running things inside containers until the good folks at Asahi or some other community effort add even partial Linux support.

    And yes, I’ve tried this route. I ran an M1 Mac mini as a home server for a while (running Jellyfin and some other containers). It pretty consistently ran into software bugs (ARM builds are less maintained than x64 ones), and every time I wanted to update, instead of sudo whateveryourdistroships update and a reboot, it was an entire process involving an Apple account, logging into the bare-metal device, and finally running Apple’s 15-60 minute update. Perfectly fine and acceptable for home computing, but not a good experience when you’re hosting a service.