So, I am thinking about getting myself a NAS to host mainly Immich and Plex. Got a couple of questions for the experienced folk:
- Is Synology the best/easiest way to start? If not, what are the closest alternatives?
- What OS should I go for? OMV, Synology’s OS, or UNRAID?
- Mainly gonna host Plex/Jellyfin, and Synology Photos/Immich - not decided quite what solutions to go for.
Appreciate any tips :sparkles:
If you want a “set up and forget” type of experience, Synology will serve you well, if you can afford it. If you are more of a tinkerer and see yourself experimenting and upgrading in the future, then I recommend a custom build. OMV is a solid OS for a novice, but any Linux distro you fancy can do the job very well!
I started my NAS journey with a very humble 1-bay Synology. For the last few years I’ve been using a custom-built ARM NAS (NanoPi M4V2) with 4 bays, running Armbian. All my services run on docker; I have Jellyfin, *arr, Bitwarden and several other services running very reliably.
And if you’re not sure how much of tinkering you want to do a Synology with docker support is a good option.
^ This. I have an M1 Mac mini running Asahi Linux with a bunch of docker containers and it works great. I run Jellyfin off a separate stick PC with an Intel Celeron running Ubuntu MATE. Basically I just keep docker compose files on those two machines and occasionally ssh in from my phone to run `sudo apt update && sudo apt upgrade -y` (on Ubuntu) or `sudo pacman -Syu` (on Asahi), and then `docker compose pull && docker compose up -d`.
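For reference, the compose file behind a setup like that can be tiny. A hypothetical sketch for Jellyfin (the image is the official one, but paths and the port mapping are placeholders, not the commenter’s actual config):

```yaml
# Hypothetical docker-compose.yml for a Jellyfin container; adjust to taste.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"            # web UI
    volumes:
      - ./config:/config       # server configuration (placeholder path)
      - /mnt/media:/media:ro   # media library, mounted read-only
    restart: unless-stopped
```

With a file like this in place, the `docker compose pull && docker compose up -d` above is the whole update routine.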
I went with Synology for my Plex server. I chose to use a docker container to run it instead of the normal Plex server app they have available. I found some YouTube videos on the setup and it was fairly straightforward. Now that it’s set up I rarely have to think about it, other than the occasional update. I now have to update the docker image instead of just letting Plex update itself.
I went this route because I didn’t want to have to maintain a desktop for my Plex server, which I had done since Plex was first released. But I also didn’t want to have a new sys admin hobby just to watch some videos. Rolling my own raid felt like it would end up that way.
Can definitely confirm this. I started with a Proxmox system which had a TrueNAS VM, though TrueNAS just used a USB HDD for storage. Setting everything up and getting the permissions set correctly so I could connect my computers was a pain in the ass.
Later I bought a synology and it just works. Only thing I would recommend is getting good HDDs. I bought Toshiba MG08 16TB drives and while they work great, they are obnoxiously loud during read and write operations. They are so loud, that even though the NAS is in a separate room I have to shut it off at night.
Meanwhile the Seagate Ironwolf drive I used for TrueNAS was next to my bed for multiple months and was basically silent.
I have Iron wolves and I thought THEY were noisy.
You might have gotten a dud. I’ve got 4 in my NAS and they are silent.
I bought WD Red Pros, which are supposed to be made for use in a NAS. They’re pretty loud. It’s sitting right next to my couch and it’s a bit annoying. It’s always doing something, even when I’m not doing anything. When backups fire up at 3am they go nuts.
Thankfully, I don’t hear it when I’m trying to sleep. I do use a white noise machine, so that could be part of the reason why.
SSD prices are coming down. I can see them tempting me into a completely silent NAS in the future.
I’ve had the same thought about SSDs, except every time they come down, HDD prices also go down.
From what I’ve read there is a hard price floor for HDDs due to what physically needs to go into them, so at some point (probably around $50) HDD prices should stop dropping.
I have Proxmox on bare metal, with an HBA card passed through to TrueNAS Scale. I’ve had good luck with this setup.
The HBA card is passed through to TrueNAS so it can get direct control of the drives for ZFS. I got mine on eBay.
I’m running proxmox so that I can separate some of my processes (e.g. plex LXC) into a different VM.
This is a great way to set this up. I’m moving over to it in a few days. Right now I have a temporary setup with ZFS directly on Proxmox and an OMV VM handling shares, because my B450 motherboard’s IOMMU groups won’t let me pass through my GPU and an HBA to separate VMs (note for OP: if you cannot pass through your HBA to a VM, this setup is not a good idea). I ordered an ASRock X570 Phantom Gaming motherboard as a replacement ($110 on Amazon right now; a great deal) that has more separate IOMMU groups.
My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I’m keeping my gateway on a separate PC from now on.
If you can’t pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it’s getting in the way you really don’t need it.
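For anyone curious what managing ZFS on Proxmox directly looks like, the CLI side is small. An illustrative sketch only (the pool name `tank` and disk paths are made up; this needs root and real disks):

```shell
# Create a mirrored pool, a dataset, and a snapshot. Illustrative only;
# in practice use stable /dev/disk/by-id/ paths rather than /dev/sdX.
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/media
zfs snapshot tank/media@before-upgrade
zfs list -t snapshot   # verify the snapshot exists
```

Day-to-day, that handful of commands (plus `zpool status` for health checks) covers most of what the TrueNAS GUI would be doing for you.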
TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.
Oh sorry for some reason I read OMV VM and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked has good management of snapshots as well, which may be sufficient depending on how much power you need.
Good to know!
I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?
This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.
Usually:
- Proxmox on bare metal
- TrueNAS Core/Scale in a VM
- Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there
- If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC, but experienced people keep saying it causes problems eventually and I’ll take their word for it)
- If you run your app stack through LXCs, just set them up through Proxmox normally
- Set up an NFS share through TrueNAS, and connect your app stack to that NFS share
- (Optional) Just run your ZFS pool on Proxmox itself and skip TrueNAS
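For the NFS step above, the client side on the Docker VM is just a mount. A hypothetical /etc/fstab line (the server IP and export path are made up):

```
# TrueNAS NFS export mounted on the Docker host VM; replace IP and paths.
192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells the system to wait for the network before mounting, which matters here since the "network" is another VM on the same host.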
This is 100% my experience and setup. (Though I run Debian for my docker VM)
I did run docker in an LXC but ran into some weird permission issues that shouldn’t have existed. Ran it again in VM and no issues with the same setup. Decided to keep it that way.
I do run my Plex and Jellyfin in an LXC though. No issues with that so far.
I already run proxmox but not TrueNAS. I’m really just confused about the HBA card. Probably a stupid question but why can’t TrueNAS access regular drives connected to SATA?
The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group that has all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card for all TrueNAS cares.
(TMK, SATA controller hubs are usually an all-at-once passthrough, so if you have your host system running off some part of this controller it probably won’t work to unhook it from the host and give it to the guest.)
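For context, on the Proxmox side the passthrough ends up as a single line in the VM’s config. A sketch with a made-up PCI address and VM ID:

```
# Hypothetical excerpt from /etc/pve/qemu-server/100.conf:
# hand the whole HBA (here at PCI address 0000:03:00) to the TrueNAS VM.
machine: q35
hostpci0: 0000:03:00
```

Everything in that PCI device’s IOMMU group goes with it, which is exactly why a dedicated HBA with its own group is the easy path.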
Makes sense, thanks for the info
That was one of the things I got wrong at first as well. But it totally makes it much easier in the long run.
So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn’t matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under it”, right? The only thing I’d need right now is some SSD for Proxmox itself.
Proxmox would be the host on bare metal, with your current install as a VM under that. I’m not sure how to migrate an existing real install into a VM so it might require backing up configs and reinstalling.
You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.
(I’m probably misunderstanding what you’re trying to do?)
I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.
Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply that I have to pass through “half of a drive”, which I don’t think works like that. Also, I’m using ZFS for my OS disk and I don’t feel comfortable trying to figure out if I can easily resize those partitions without breaking anything ;-)
That should work, but I don’t have experience with it. In that case yeah you’d need another separate drive to store Proxmox on.
Synology is generally a great option if you can afford the premium.
Unraid is a good alternative for the poor man. Check this list of cases to build in. I personally have a Fractal R5, which supports up to 13 HDDs.
Unraid is generally a better bang for your buck imo. It’s got great support from the community.
This. OP, look for a used Synology. It Just Works™️
:)
Just throwing out an option, not saying it’s the best:
If you are comfortable with Linux (or you want to become intimately familiar with it), then you can just run your favorite distribution. Running a couple of docker containers can be done on anything easily.
What you’re losing is usually the simple configuration GUI and some built-in features such as automatic backups. What you gain is absolute control over everything. That tradeoff is definitely not for everyone, but it’s what I picked and I’m quite happy with it.
Yeah, I’m already quite familiar and already have a server, but I’m looking for something more premium that essentially delivers the easiest platform for the rest of the family to use.
Also, you could run Linux off a real CPU. My experience is that my DS916+ is way underpowered even with 8 GB of memory. I use my NAS for actual storage, and an old Intel mainboard w/ 16 GB RAM for actual CPU work.
My Synology NAS was super easy to set up and has been very solid. Very happy with it. I’m sure there are other solutions though.
This was the route I went with when I started, and I’ve never had cause to regret it. For people near the start of their self-hosting journey, it’s the no-hassle, reliable choice.
TrueNAS Scale is a pretty easy to use option (based on Debian) backed by the excellent ZFS file system.
But ZFS has a learning curve and limits easy backup options… but it’s worth it.
I agree with the learning curve (personally I found it worthwhile, but that’s subjective).
But how does ZFS limit easy backup options? IMO it only adds options (like zfs send/receive) but any backup solution that works with any other file systems should work just as well with ZFS (potentially better since you can use snapshots to make sure any backup is internally consistent).
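As a concrete example, a snapshot-plus-send/receive backup is only a couple of commands. A sketch with hypothetical pool, dataset, and host names (needs ZFS on both ends):

```shell
# Full replication of a dataset to another ZFS machine over SSH.
zfs snapshot tank/photos@2024-06-01
zfs send tank/photos@2024-06-01 | ssh backup-host zfs receive backup/photos

# Subsequent runs send only the delta between snapshots (incremental).
zfs snapshot tank/photos@2024-06-08
zfs send -i @2024-06-01 tank/photos@2024-06-08 | ssh backup-host zfs receive backup/photos
```

Because each snapshot is a consistent point-in-time view, the replica is never a half-written copy, which is the consistency advantage mentioned above.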
Because you can’t use typical backup software products. If you do it the right way, you’re using ZFS send and receive to another machine running ZFS, which significantly adds to cost.
That’s an extremely silly reason not to use a specific tool: Tool A provides an alternative way to do X, but I want to do X with some other tool B (that’ll also work with tool A), so I won’t be using tool A.
Send/receive may or may not be the right answer for backing up even on ZFS, depending on what exactly you want to achieve. It’s really nice when it is what you want, but it’s no panacea (and certainly no reason to avoid ZFS, since its use is 100% optional).
I really don’t get why you call my reason silly. You can’t use Acronis, Veeam, or other typical backup products with ZFS. My point is that this is a barrier to entry. And I don’t think it’s reasonable for a home user to build another expensive NAS just to do ZFS send and receive, which would be the proper way.
I don’t consider backups optional.
Eh… the TrueNAS UI basically takes care of any ZFS learning curve. The main thing I’d note is that RAID 5 & 6 can’t currently be expanded incrementally. So you either need to use mirroring, configure the system upfront to be as big as you expect you’ll need for years to come, or use smaller RAID 5 sets of disks (e.g. create 2 RAID 5 volumes with 3 disks each instead of 1 RAID 5 volume with 6 disks).
Not sure what you’re referring to as an easy backup option that zfs excludes, but maybe I’m just ignorant 🙂
The most common software choices are TrueNAS and UNRAID.
Depending on your use-case, one is better than the other:
TrueNAS uses ZFS, which is great if you want to be absolutely sure the irreplaceable data on your disks is 100% safe, like your personal photos. UNRAID has more flexible expansion and is more power efficient, but doesn’t protect against bit flips, which is not really an issue if you only store multimedia for streaming.
If you prefer a hardware solution ready to use, Synology and QNAP are great choices so long as you remember to use ZFS (QNAP) or BTRFS (Synology) as the filesystem.
Unraid 6.12 and higher has full support for ZFS pools. You can even use ZFS in the Unraid array itself, allowing you to use many, but not all, of ZFS’s extended features. Self-healing isn’t one of those features, though, as it would be incompatible with Unraid’s parity approach to data integrity.
I just changed my cache pool from BTRFS to ZFS with Raid 1 and encryption, it was a breeze.
I generally recommend TrueNAS for projects where speed and security are more important than anything else and Unraid where (hard- and software-)flexibility, power efficiency, ease of use and a very extensive and healthy ecosystem are more pressing concerns.
Oh that’s great to hear. Thank you for sharing.
Do either of them matter in terms of the life of the hard disks? My server just had one of its HDDs reach EoL :| I kind of want to buy something that will last a very long time. Also, I’m not familiar with ZFS, but I read that Synology uses Butterfs, which always sounds good to my ears; I’ve been having a taste of that filesystem with Garuda on my desktop.
Yes, ZFS is commonly known for heavy disk I/O and huge RAM usage; the rule used to be “1 GB of RAM for every TB of disk”, but that’s not compulsory.
Meanwhile, about BTRFS: keep in mind that Synology uses a mixed recipe, because the RAID code of BTRFS is still green and not considered production-ready. Here’s an interesting read about how Synology filled the gaps: https://daltondur.st/syno_btrfs_1/
The only place ZFS seems to use a sizable amount of RAM is the ARC memory cache, which is a really nice feature when you have piles of small-file access going on. For me some of the highest-access things are the image stores for Lemmy and Mastodon, which combine to just under 200 GB right now but are some crazy high number of files. Letting the system eat up idle RAM so it doesn’t have to pull all those from disk constantly is awesome.
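That old “1 GB per TB” rule of thumb mentioned above is just linear scaling; a toy calculation, not a hard requirement:

```shell
# The old "1 GB of RAM per TB of disk" guideline, as simple arithmetic.
# ZFS runs fine with much less; ARC just uses otherwise-idle RAM as cache.
pool_tb=16                  # hypothetical pool size in TB
suggested_ram_gb=$pool_tb   # 1 GB per TB under the old rule of thumb
echo "~${suggested_ram_gb} GB RAM suggested for a ${pool_tb} TB pool"
```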
Something kind of unique about Unraid is the JBOD-plus-parity array. With this you can keep most disks spun down, while only the actively read/written disks need to be spun up. Combine it with an SSD cache for your dockers/databases/recent data and Unraid will put a lot fewer hours (heat, vibration) on your disks than any RAID-equivalent system that requires the whole array to be spun up for any disk activity. Performance won’t be as high as comparably sized RAID-type arrays, but as bulk network storage for backups, media libraries, etc. it’s still plenty fast enough.
Do you have any old hardware that doesn’t have a job? That is a great place to start. Take some time to try out different solutions (Proxmox, Unraid, CasaOS). Then as you nail down your needs you can better pick hardware.
Yeah this is what I have been doing so far, loads of spare parts - running Debian atm. So kind of looking for ‘the next step’ rn.
I use UNRAID, I didn’t want to pay for a license originally but having the option to mix and match drives and have redundancy is nice.
I also use the built in docker feature to host most of my services.
Unraid is also awesome for places with high energy cost: Unlike with your typical RAID / standard NAS, it allows you to spin down all drives that aren’t in active use at a relatively minor write speed performance penalty.
That’s pretty ideal for your typical Plex-server where most data is static.
I built a 10HDD + 2SSD Unraid Server that idles at well below 30W and I could have even lowered that further had I been more selective about certain hardware. In a medium to high energy cost country, Unraid’s license cost is compensated by energy savings within a year or two.
Mixing & matching older drives means even more savings.
Simple array extension, single or dual parity, powerful cache pool tools and easily the best plugin and docker app store make it just such a cool tool.
This sounds very good, I like what I am reading and hearing about Unraid! And I do live somewhere with very high energy costs…
I run most of my stuff on k8s, but I really enjoy the simple docker ecosystem of apps that Home Assistant Supervisor provides. Unraid’s app approach looks similar: preconfigured and working together. Even though I don’t need a fancy NAS, I might try Unraid just to evaluate the apps ecosystem. How do you find their community apps?
I usually search through the apps and they install as docker containers; I can edit the configs after the fact, it’s pretty nice. There’s also a terminal so I can run regular docker commands too.
First I chose a Pi; now I’m using a NUC as a NAS.
Reason why: the price was too much for a Synology with a transcode-capable CPU, as it wasn’t clear what type of processor was being used.
Seconded. But for more details… it’s great because you can throw in many different drives of different sizes, unlike RAID servers where every drive has to be the same size. You can also specify however much you want to use as parity (redundancy) drives.
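To make the mixed-sizes point concrete, here’s the capacity math as a tiny sketch (drive sizes are made up): usable space is the sum of the data drives, while each parity drive must be at least as large as the largest data drive.

```shell
# Toy Unraid-style capacity math with mismatched drives (sizes in TB).
data_drives="12 8 4 4"
total=0
largest=0
for d in $data_drives; do
  total=$((total + d))                              # usable space adds up
  if [ "$d" -gt "$largest" ]; then largest=$d; fi   # parity must cover the biggest
done
echo "usable=${total}TB, parity drive must be >=${largest}TB"
```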
It has a nice web interface that you can access from any other PC on your LAN. I also have mine set up with Unraid Connect which allows me to access it from the open web also. It has a strong password and 2FA so I’m not concerned about security.
It also makes it easy to serve Docker containers and full blown VMs. You can set them up right in the UI, or you can also SSH to it and use it as a normal Linux OS if you’re a power user. The web UI also has a button that’ll launch a SSH terminal in a separate window too.
You can just use it as a NAS if you want, but Unraid makes it easy to expand your capabilities if you later feel like it. For example, you are only a few button clicks away from running Jellyfin to provide a nice UI for all your media files that you may be storing on your NAS.
Do people not like TrueNAS?
It’s fine, but it’s really only good as a NAS. bhyve is a terrible virtualization platform. With something like Open Media Vault you get access to KVM, which is a much better way to run a virt or two on the side.
You’re one of the few who mentioned OMV in the thread and I was wondering why, it works great for me as a VM on proxmox… the only gripe I have is that sometimes the GUI decides I’ve made changes to the configuration and asks me to apply them, only to fail and get stuck with the notification.
TrueNAS isn’t just BSD anymore. There’s Scale which is just TrueNAS’s UI and ZFS on top of Debian.
I’ve found CasaOS to be the simplest to set up and get going. I tried TrueNAS for a year, but wish I had started with CasaOS.
How would you rate Casa compared to Open Media Vault?
Haven’t tried OMV, but the lesson I learned with TrueNAS is that software designed primarily for NAS has a lot of features I don’t care about, and the other apps can be finicky. I’m not storing petabytes of data. CasaOS was the closest I found to “just works”.
There’s also Umbrel OS which looks promising, but I’ve been happy with CasaOS so haven’t felt the need to switch.
CasaOS looks interesting, but I prefer OpenMediaVault for the moment.
I have a QNAP. I have had no issues. It runs its own QTS OS, so there’s no need to figure out what you want to run. Make sure the hardware is x86; Plex runs better on x86.