Since my first post, I’ve done the research and settled on a storage solution. I did a fair bit of reading up on different file systems, SAN (Storage Area Network) and NAS (Network Attached Storage) architectures, and backup/failover strategies. In the end I’ve decided to go with TrueNAS SCALE to serve up shares on a ZFS file system. I read a lot of articles and watched a lot of YouTube videos, and I’ll include links to some of the resources I found helpful at the end of this post.
Why ZFS?
There are a lot of reasons to like ZFS and even more resources online explaining why you should use ZFS, but I’ll highlight the ones that are important to me for my use case.
- Expandability: you can add drives to an existing pool (not as easily as in Unraid, but it’s possible)
- Data Integrity: ZFS’s copy-on-write design and end-to-end checksumming go a long way toward preventing data corruption
- Snapshots: snapshots (aka shadow copies) allow for keeping backups at the file-system level with minimal effort and disk usage; they also enable easy replication for external backups (see the sketch just after this list).
- Special Devices: a pool of hard drives can be augmented with a small amount of SSD storage for caching, file metadata, and more
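To make the snapshot and replication point concrete, here’s a minimal sketch of what it looks like from the command line; the dataset names (tank/media) and the backup-host machine are placeholders, and TrueNAS SCALE wraps all of this in its Web UI:

```sh
# Take a point-in-time snapshot of a dataset; this is nearly instant and only
# consumes extra space as the live data diverges from the snapshot.
zfs snapshot tank/media@2024-06-01

# Replicate that snapshot to another machine for an external backup.
zfs send tank/media@2024-06-01 | ssh backup-host zfs receive backup/media

# Later, send only the changes made since the last replicated snapshot.
zfs send -i tank/media@2024-06-01 tank/media@2024-07-01 | \
  ssh backup-host zfs receive backup/media
```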
Why TrueNAS SCALE?
One of my goals in my homelab is to make administration and maintenance easy since I’m doing this in my “free time”. Having a good Web UI means there’s a shorter learning curve compared to needing to drop into a shell for things. TrueNAS SCALE is also a popular solution with an active community forum as well as a number of tutorials and how-to articles/videos. I chose TrueNAS SCALE over TrueNAS CORE primarily because it is based on Linux instead of BSD, so in the event I do need to drop into a terminal for something I will be more at home. To summarize, it checks a lot of boxes:
- Understandable Management Web UI
- Built on Linux for familiar permissions management and CLI
- Supports SMB and NFS shares
- Includes built-in backup (replication) services
- Supports iSCSI for exposing raw storage (this video from Craft Computing explained to me why I would care about iSCSI)
- Bonus: The web UI supports 2FA
The Hardware
Most of the hardware for this storage server came from eBay, so I’ll list model numbers here (I don’t want this to be another blog with a bunch of dead eBay links). I’ve excluded incidentals like power cables and screws, and I already had the storage drives on hand, so they aren’t listed either.
| Component | Model | Description | Price |
|---|---|---|---|
| Chassis | Supermicro SC826 | 2U server with 12 3.5” bays | $219 |
| CPU/MB | Supermicro X9SCL with Xeon E3-1230V2 | Motherboard and CPU | $46 |
| HBA | LSI 9207-8i | SAS controller card and cables | $84 |
| RAM | Crucial ECC Unbuffered UDIMM PC3L | 32GB RAM kit | $50 |
| Network Card | Intel E10G42BTDA | 10GbE network card | $35 |
| PSU | PWS-1K28P-SQ | 2x quiet PSUs | $100 |
| Fans | Noctua NF-A8 | 3x fans | $56 |
The core components come in just under $350 with another $250 for some upgrades for a total of around $600. I initially deployed my server without those upgrades but ended up making them sooner rather than later for a few reasons:
- More RAM means a larger read cache for ZFS (the ARC), which can significantly help with accessing commonly used files (see the quick check after this list)
- I already had a 10GbE network card in my old server, so adding one here means I can migrate data at faster than Gigabit speeds
- The stock PSUs can get loud and this server lives in my office, about 3 feet from my head when I’m at my desk.
- The stock fans can also get loud so I swapped in some appropriately sized Noctua replacements
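As a rough illustration of the RAM point, ZFS keeps its read cache (the ARC) in main memory, and you can check its size and hit rate from a shell on TrueNAS SCALE; a quick sketch:

```sh
# Summarize ARC size and hit/miss statistics (arc_summary ships with OpenZFS).
arc_summary | head -n 40

# The raw counters are also exposed by the kernel module on Linux.
grep -E "^(size|hits|misses) " /proc/spl/kstat/zfs/arcstats
```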
Hard Drives
For the storage, I’m using some shucked WD 8TB external drives. If you’re doing the same, be aware these drives won’t spin up when 3.3V is supplied on the SATA power connector (the pin is repurposed as a power-disable signal); I found a good illustrated guide for how to address this. I did this a while ago with my drives and noticed some of the tape was starting to peel; if I did it again I might consider using conformal coating instead, but since these drives are about to be permanently installed in my server and they all worked immediately, I didn’t bother.
2.5” SSDs
I have two 1TB SSDs that I’ll be using for a metadata VDEV (a “special” VDEV) to speed up my hard drive array. In short, this 1TB will store file metadata (file names, access times, etc.) while file contents stay on the hard drives. This should significantly speed up browsing network shares and navigating the file system.
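For reference, this is roughly what that layout looks like if you build it at the command line instead of through the TrueNAS UI; the pool name and disk names below are placeholders, and it’s a sketch rather than my exact pool geometry:

```sh
# A pool with a RAIDZ2 data VDEV of six hard drives and a mirrored special
# (metadata) VDEV of two SSDs.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  special mirror sdg sdh

# Optionally, records smaller than this size are also stored on the special VDEV.
zfs set special_small_blocks=16K tank
```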
I plan on redoing this setup with 3 drives so the redundancy matches my data VDEV (RAIDZ2) and resolves the warning in TrueNAS. That warning exists because if the metadata VDEV fails, the data is inaccessible, just as it would be if the data VDEV failed. Realistically, I think two concurrent SSD failures are less likely than two concurrent HDD failures, and rebuilding 1TB is faster than rebuilding 8TB, but it’s only another $50 to add a drive and I have the capacity in my chassis to accommodate it.
I found a model on Thingiverse to mount these in drive sleds. I also noticed while doing this that some of the drive sleds I got have 2.5” drive mounting holes and others don’t.
Noctua Fans
If you have ever used a rack-mounted server, you know that noise is not really a consideration in their cooling solutions. The stock fans in these servers idle fairly quietly, but ramp up fast with any appreciable CPU load (e.g. data compression). I found a printable fan carrier on Thingiverse and printed out 3 of them to install Noctua fans in my server. The Thingiverse project notes that the fan connectors need to be modified slightly to fit; with that done, the fit is nearly perfect and things are MUCH quieter.
OS Installation
If you’re building with the X9SCL board I’m using, I learned the hard way that PCIe storage devices will show up in the BIOS, but you can NOT boot from PCIe. I also learned (and I think there may be a warning in the installer) that it is NOT recommended to install TrueNAS SCALE to a USB flash drive. For the moment, I’ve attached a 2.5” SSD to the internal USB port, but longer term I intend to migrate to a SATA Disk-on-Module (DOM) to resolve the warnings in the Web UI (and to remove the SSD installed with 3M VHB tape).
Wrap Up
There are a lot of considerations for how to structure ZFS VDEVs and implement shares, and I’m still actively working out a solution for myself. Initially, I created a RAIDZ2 data VDEV with a RAIDZ1 metadata VDEV, and that is working well. I can already see that my storage pool isn’t big enough, though, and I might want to change some of the encryption and compression options (a sketch of the kinds of options I mean is below), so I’ll wait to document things until I have a better idea of how I want this configured.
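For the compression and encryption piece, these are the kinds of dataset-level settings I’m weighing; a quick sketch assuming a pool named tank with placeholder dataset names:

```sh
# Compression can be changed at any time and applies to newly written data.
zfs set compression=zstd tank/media
zfs get compressratio tank/media

# Encryption has to be chosen when a dataset is created; existing data would
# need to be copied (or replicated) into a new encrypted dataset.
zfs create -o encryption=on -o keyformat=passphrase tank/documents
```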
For the moment, I have duplicated data from my Unraid server and will keep things mostly in sync until I finalize the new setup. I also have an offline backup of really important stuff, just in case.
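The sync itself is nothing fancy; it’s along the lines of the rsync invocation below, run with both shares mounted (the paths are placeholders for wherever the Unraid share and the new ZFS dataset live):

```sh
# Mirror a share from the Unraid server onto the new ZFS pool.
# -a preserves permissions and timestamps, -v/-h make the output readable,
# and --delete removes destination files that no longer exist on the source.
rsync -avh --delete /mnt/unraid/media/ /mnt/tank/media/
```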
References
Perhaps my first introduction to ZFS was from a Level1Techs video on YouTube. Wendell also wrote a couple of forum posts about TrueNAS SCALE and ZFS Metadata devices.
As I got closer to implementation, I found that Lawrence Systems on YouTube has a playlist of TrueNAS explainers and how-to videos. There are also other videos on the channel about ZFS and general network design. I also previously used this channel as a resource for pfSense setup, but that’s a topic for another day.
There are many other great resources online so I encourage searching around and doing other research into alternatives; ZFS seems a good fit for me, but it may not be the best option for everyone.