FreeBSD NAS Building Guide: Hardware to Software
Building a NAS (Network Attached Storage) on FreeBSD is one of the most rewarding infrastructure projects you can undertake. FreeBSD's native ZFS support, stable networking stack, and lightweight base system make it an ideal NAS operating system. Unlike appliance-based solutions, a FreeBSD NAS gives you full control over hardware selection, filesystem configuration, and service deployment.
This guide walks through every step: selecting hardware (ECC RAM, HBA cards, cases, drives), installing FreeBSD, configuring ZFS for NAS workloads, setting up file sharing with Samba and NFS, monitoring the system, and integrating a UPS for power protection.
Hardware Selection
CPU
NAS workloads are primarily I/O-bound, not CPU-bound. A modern CPU spends most of its time waiting for disks. However, ZFS compression and checksumming do use CPU cycles, and if you plan to run services in jails (Plex, Nextcloud, etc.), more CPU helps.
Budget NAS (home/small office): Intel Core i3 or AMD Ryzen 3. Four cores are sufficient.
Mid-range NAS (small business, 10+ users): Intel Core i5 or AMD Ryzen 5. Handles Samba, NFS, ZFS scrubs, and jails.
Enterprise/prosumer: Intel Xeon E or AMD EPYC for ECC memory support and more PCIe lanes.
ECC RAM
Use ECC RAM. This is the strongest hardware recommendation in this guide.
ECC (Error-Correcting Code) RAM detects and corrects single-bit memory errors. ZFS checksums protect data on disk, but if a bit flips in RAM between reading from disk and writing to the network (or vice versa), ZFS cannot detect it. ECC RAM closes this gap.
Minimum: 8 GB. Recommended: 16-32 GB. Rule of thumb: 1 GB of ARC per TB of raw storage. Consumer platforms (Intel Core, AMD Ryzen on non-Pro chipsets) generally do not support ECC -- use Xeon E, Ryzen Pro, or EPYC. If ECC is not feasible, ZFS on non-ECC is still safer than any other filesystem on non-ECC. But ECC is the correct choice for data you care about.
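As a quick sanity check on that rule of thumb, an ARC cap in bytes can be derived with plain shell arithmetic; 24 GiB here is just an example for a 32 GB machine:

```sh
# Convert a GiB target for vfs.zfs.arc_max into bytes
arc_gib=24
echo $((arc_gib * 1024 * 1024 * 1024))   # 25769803776
```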
HBA (Host Bus Adapter)
The storage controller is critical. Do not use hardware RAID controllers. ZFS needs direct access to the raw disks. Hardware RAID controllers hide disk errors from ZFS, defeating its self-healing capabilities.
Use an HBA (IT mode) card that presents each disk individually to the OS.
Recommended HBAs:
- LSI SAS 9211-8i (LSI is now part of Broadcom, so cards may carry the LSI, Avago, or Broadcom brand): The gold standard for FreeBSD NAS builds. Supports 8 SAS/SATA drives. Available for $20-40 used. Flash to IT mode firmware if it comes in IR (RAID) mode.
- LSI SAS 9300-8i: Newer generation, 12 Gbps SAS. Same reliability, higher throughput for SAS SSDs.
- LSI SAS 9207-8i: Similar to 9211, good FreeBSD compatibility.
Flashing to IT mode (if needed):
```sh
# This is done before FreeBSD installation, typically from an EFI shell
# Download IT mode firmware from Broadcom/Avago
# Flash using the sas2flash utility
sas2flash -o -f 2118it.bin -b mptsas2.rom
```
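Once FreeBSD is running, it is worth confirming the card attached in IT mode and that every disk is visible individually. A minimal check; device names will vary with your hardware:

```sh
# The controller should attach via mps(4) (9211/9207) or mpr(4) (9300)
dmesg | grep -Ei 'mps|mpr'

# Every disk should show up as its own da(4) device, not a RAID volume
camcontrol devlist
```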
Drives
For bulk storage (data pool):
- WD Red Plus (CMR) or WD Ultrastar for NAS duty
- Seagate IronWolf or Exos for enterprise workloads
- Avoid SMR (Shingled Magnetic Recording) drives -- they have poor random write performance that degrades ZFS resilver times
For cache (L2ARC/SLOG):
- Intel Optane (if available used) for SLOG (ZFS Intent Log)
- Enterprise NVMe SSDs (Intel DC, Samsung PM series) for L2ARC
- Consumer NVMe works but check write endurance ratings
For boot:
- A pair of mirrored small SSDs (120-240 GB) or USB drives
- Keep the boot pool separate from the data pool
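Before trusting new drives with data, many builders run SMART self-tests as a burn-in. A sketch, assuming six data drives at da0 through da5:

```sh
# Kick off a long self-test on each drive (runs on the drive itself)
for d in da0 da1 da2 da3 da4 da5; do
    smartctl -t long /dev/$d
done

# Hours later, review the results; non-zero reallocated or pending
# sector counts on a brand-new drive are grounds for an RMA
smartctl -a /dev/da0 | grep -E 'Reallocated|Pending|self-test'
```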
Case
NAS cases need drive bays, airflow, and enough space for the motherboard and HBA.
Budget: Fractal Design Node 304 (6 x 3.5" bays, Mini-ITX)
Mid-range: Fractal Design Define 7 (up to 14 x 3.5" bays, ATX)
Rackmount: Supermicro 846 chassis (24 x 3.5" bays, hot-swap) or Rosewill RSV-L4500U (15 x 3.5" bays)
Network
- 1 GbE: Sufficient for home use with 1-3 clients
- 2.5 GbE: Good price/performance for small offices. Intel I225-V or Realtek RTL8125
- 10 GbE: Required for video editing, VM storage, or serving many clients. Intel X550-T2 or Mellanox ConnectX-3/4
FreeBSD has excellent Intel NIC driver support. Intel is preferred over Realtek for stability.
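After installation, you can confirm the NIC negotiated the expected link speed; the interface name is hardware-dependent (em0/igb0 for Intel 1 GbE, ix0 for Intel 10 GbE, re0 for Realtek):

```sh
# The "media" line shows the negotiated link speed and duplex
ifconfig ix0 | grep media
```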
FreeBSD Installation
Download the FreeBSD installer, write to USB with dd, boot, and select ZFS as the root filesystem with a mirrored boot pool on the boot SSDs.
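Writing the memstick image might look like this; the image filename and target device node are examples, so triple-check the device before running dd, since it overwrites the target:

```sh
# WARNING: of= must be the USB stick, not a data disk
dd if=FreeBSD-14.1-RELEASE-amd64-memstick.img of=/dev/da0 bs=1M conv=sync
```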
Post-Installation Essentials
```sh
# Update the base system
freebsd-update fetch install

# Install essential packages
pkg install smartmontools tmux htop

# Enable SSH
sysrc sshd_enable="YES"
service sshd start

# Set timezone
tzsetup

# Enable NTP
sysrc ntpd_enable="YES"
service ntpd start
```
Kernel Tuning for NAS
Edit /boot/loader.conf:
```sh
# ZFS ARC maximum (adjust to your RAM, leave 4-8 GB for OS and services)
vfs.zfs.arc_max="25769803776"  # 24 GB for a 32 GB system

# Increase AIO threads for Samba
vfs.aio.max_aio_queue_per_proc=4096
vfs.aio.max_aio_per_proc=2048

# Enable AHCI for SATA controllers
ahci_load="YES"

# Load HBA driver at boot
mps_load="YES"  # For LSI 9211
mpr_load="YES"  # For LSI 9300
```
Edit /etc/sysctl.conf:
```sh
# Network tuning for file serving
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1

# Increase maximum open files
kern.maxfiles=65536
kern.maxfilesperproc=32768
```
ZFS Layout
Creating the Data Pool
For a NAS with 6 drives, RAID-Z2 provides a good balance of space and redundancy:
```sh
# Create the data pool
zpool create -o ashift=12 \
    -O compression=lz4 \
    -O atime=off \
    -O xattr=sa \
    -O acltype=posixacl \
    datapool raidz2 da0 da1 da2 da3 da4 da5
```
Key options: ashift=12 matches modern 4K-sector drives, compression=lz4 provides near-zero CPU overhead with space savings, atime=off reduces write I/O, and xattr=sa stores extended attributes efficiently for Samba. One caveat: acltype=posixacl is a Linux-specific value; on FreeBSD, OpenZFS uses NFSv4 ACLs, so set acltype=nfsv4 there instead.
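You can verify that the options took effect after creating the pool:

```sh
# ashift is a pool/vdev-level property; the rest are dataset properties
zpool get ashift datapool
zfs get compression,atime,xattr,acltype datapool
```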
Dataset Structure
Create separate datasets for different purposes:
```sh
# Shared folders
zfs create datapool/shared
zfs create datapool/shared/documents
zfs create datapool/shared/media
zfs create datapool/shared/photos
zfs create datapool/shared/backups

# Per-user home directories
zfs create datapool/home
zfs create datapool/home/alice
zfs create datapool/home/bob

# Set quotas
zfs set quota=500G datapool/shared/media
zfs set quota=100G datapool/home/alice
zfs set quota=100G datapool/home/bob

# Tune for media (large sequential files)
zfs set recordsize=1M datapool/shared/media
zfs set recordsize=1M datapool/shared/photos

# Tune for documents (smaller files)
zfs set recordsize=128K datapool/shared/documents
```
Adding Cache Devices
```sh
# Add L2ARC (read cache) -- useful if ARC is full and read latency matters
zpool add datapool cache nvd0

# Add SLOG (write log) -- critical for synchronous writes (NFS, databases)
zpool add datapool log mirror nvd1 nvd2
```
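Unlike data vdevs, cache and log devices can be removed later, so experimenting with them is low-risk. To see whether they are actually absorbing I/O:

```sh
# Per-device I/O statistics every 5 seconds; watch the cache/log rows
zpool iostat -v datapool 5

# Cache and log devices can be removed without data loss if unneeded
zpool remove datapool nvd0
```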
Automated Snapshots
```sh
pkg install zfstools
```
Or use a cron-based approach:
```sh
# /etc/cron.d/zfs-snapshots
15 * * * * root zfs snapshot -r datapool@auto-hourly-$(date +\%Y\%m\%d-\%H\%M)
0  0 * * * root zfs snapshot -r datapool@auto-daily-$(date +\%Y\%m\%d)
0  0 * * 0 root zfs snapshot -r datapool@auto-weekly-$(date +\%Y\%m\%d)

# Cleanup: keep 48 hourly, 30 daily, 12 weekly
# (sort -r lists newest first, sed deletes the keepers, xargs destroys the rest;
#  FreeBSD's head(1) does not support GNU-style negative counts)
30 0 * * * root zfs list -H -t snapshot -o name | grep auto-hourly | sort -r | sed '1,48d' | xargs -n1 zfs destroy 2>/dev/null
30 0 * * * root zfs list -H -t snapshot -o name | grep auto-daily | sort -r | sed '1,30d' | xargs -n1 zfs destroy 2>/dev/null
30 0 * * 0 root zfs list -H -t snapshot -o name | grep auto-weekly | sort -r | sed '1,12d' | xargs -n1 zfs destroy 2>/dev/null
```
Scheduled Scrubs
```sh
# Weekly scrub (add to /etc/cron.d/zfs-scrub)
0 2 * * 6 root zpool scrub datapool
```
File Sharing: Samba (SMB)
Samba provides Windows-compatible file sharing (SMB/CIFS). It is the primary sharing protocol for mixed networks with Windows, macOS, and Linux clients.
Installation
```sh
pkg install samba419
sysrc samba_server_enable="YES"
```
Configuration
Create /usr/local/etc/smb4.conf:
```sh
[global]
    workgroup = WORKGROUP
    server string = FreeBSD NAS
    server role = standalone server

    # Security
    security = user
    passdb backend = tdbsam
    map to guest = Bad User

    # Performance
    socket options = TCP_NODELAY IPTOS_LOWDELAY
    read raw = yes
    write raw = yes
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384
    min receivefile size = 16384

    # Logging
    log file = /var/log/samba4/log.%m
    max log size = 1000
    log level = 1

    # macOS compatibility
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:model = MacSamba

[documents]
    path = /datapool/shared/documents
    valid users = @staff
    browseable = yes
    writable = yes
    create mask = 0664
    directory mask = 0775

[media]
    path = /datapool/shared/media
    valid users = @staff
    browseable = yes
    read only = yes
    write list = alice bob

[photos]
    path = /datapool/shared/photos
    valid users = @family
    browseable = yes
    writable = yes
    create mask = 0664
    directory mask = 0775

[homes]
    comment = Home Directories
    browseable = no
    writable = yes
    create mask = 0600
    directory mask = 0700
```
User Setup
```sh
# Create system users
pw useradd alice -m -s /usr/sbin/nologin
pw useradd bob -m -s /usr/sbin/nologin

# Create groups
pw groupadd staff
pw groupmod staff -m alice,bob
pw groupadd family
pw groupmod family -m alice,bob

# Set Samba passwords
smbpasswd -a alice
smbpasswd -a bob

# Set filesystem ownership
chown -R alice:staff /datapool/shared/documents
chmod -R 2775 /datapool/shared/documents
```
Start Samba:
```sh
service samba_server start
```
Test the configuration:
```sh
testparm /usr/local/etc/smb4.conf
```
File Sharing: NFS
NFS is the standard Unix/Linux file sharing protocol. Use it for sharing with other FreeBSD, Linux, or macOS clients (macOS supports both SMB and NFS).
Configuration
Enable NFS server services:
```sh
sysrc nfs_server_enable="YES"
sysrc nfsv4_server_enable="YES"
sysrc nfsuserd_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
```
Configure exports in /etc/exports:
```sh
# /etc/exports

# Share to a specific subnet
/datapool/shared/documents -network 10.0.1.0 -mask 255.255.255.0
/datapool/shared/media -network 10.0.1.0 -mask 255.255.255.0 -ro

# Share to specific hosts
/datapool/home/alice -maproot=alice alice-workstation.local
/datapool/home/bob -maproot=bob bob-workstation.local

# NFSv4 root export
V4: /datapool -sec=sys -network 10.0.1.0 -mask 255.255.255.0
```
Start NFS services:
```sh
# rpcbind must be up before mountd and nfsd can register
service rpcbind start
service nfsuserd start
service mountd start
service nfsd start
```
Verify exports:
```sh
showmount -e localhost
```
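From a client, mounting the export might look like this; note that with a `V4: /datapool` root, NFSv4 client paths are relative to /datapool (the hostname and mount points are examples):

```sh
# FreeBSD client, NFSv4
mount -t nfs -o nfsv4 nas.local:/shared/documents /mnt/documents

# Linux client
mount -t nfs4 nas.local:/shared/documents /mnt/documents
```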
Monitoring
SMART Monitoring
Monitor drive health with smartmontools:
```sh
pkg install smartmontools
sysrc smartd_enable="YES"
```
Configure /usr/local/etc/smartd.conf:
```sh
# Monitor all drives, send email on issues
DEVICESCAN -a -o on -S on -n standby,q -s (S/../.././02|L/../../6/03) -W 4,45,55 -m admin@example.com
```
```sh
service smartd start
```
ZFS Health Monitoring
Create a monitoring script:
```sh
#!/bin/sh
# /usr/local/bin/nas-monitor.sh

# Check pool health
POOL_HEALTH=$(zpool status -x)
if [ "$POOL_HEALTH" != "all pools are healthy" ]; then
    echo "WARNING: ZFS pool issue detected"
    zpool status
    echo "$POOL_HEALTH" | mail -s "NAS ZFS Alert: Pool Degraded" admin@example.com
fi

# Check disk space (default IFS splits the tab-separated zpool output)
zpool list -H -o name,capacity | while read -r name cap; do
    cap_num=$(echo "$cap" | tr -d '%')
    if [ "$cap_num" -gt 80 ]; then
        echo "WARNING: Pool $name is ${cap} full" | \
            mail -s "NAS Alert: Pool $name space low" admin@example.com
    fi
done

# Check drive temperatures
for drive in /dev/da[0-9]*; do
    temp=$(smartctl -A "$drive" 2>/dev/null | awk '/Temperature_Celsius/{print $10}')
    if [ -n "$temp" ] && [ "$temp" -gt 50 ]; then
        echo "WARNING: $drive temperature is ${temp}C" | \
            mail -s "NAS Alert: Drive temperature high" admin@example.com
    fi
done
```
Schedule it:
```sh
chmod +x /usr/local/bin/nas-monitor.sh

# Run every 30 minutes
echo "*/30 * * * * root /usr/local/bin/nas-monitor.sh" >> /etc/crontab
```
Prometheus Integration
For Grafana dashboards, install node_exporter:
```sh
pkg install node_exporter
sysrc node_exporter_enable="YES"
sysrc node_exporter_args="--collector.zfs --collector.textfile --collector.textfile.directory=/var/tmp/node_exporter"
service node_exporter start
```
UPS Integration
A UPS (Uninterruptible Power Supply) is not optional for a NAS. ZFS is designed to survive sudden power loss without corrupting the pool, but an outage mid-write still discards in-flight data, and hardware that lies about cache flushes can put the pool itself at risk. A graceful shutdown is always preferable.
NUT (Network UPS Tools)
```sh
pkg install nut
```
Configure for a USB-connected UPS:
```sh
# /usr/local/etc/nut/ups.conf
[myups]
    driver = usbhid-ups
    port = auto
    desc = "Server Room UPS"
```
```sh
# /usr/local/etc/nut/upsd.conf
LISTEN 127.0.0.1 3493
```
```sh
# /usr/local/etc/nut/upsd.users
[admin]
    password = secret
    actions = set
    instcmds = all

[upsmon]
    password = secret
    upsmon master
```
```sh
# /usr/local/etc/nut/upsmon.conf
MONITOR myups@localhost 1 upsmon secret master
SHUTDOWNCMD "/sbin/shutdown -p now"
POWERDOWNFLAG /etc/killpower
NOTIFYCMD "/usr/local/sbin/upssched"
```
```sh
# /usr/local/etc/nut/nut.conf
MODE=standalone
```
Enable and start:
```sh
sysrc nut_enable="YES"
sysrc nut_upsmon_enable="YES"
sysrc nut_upslog_enable="YES"
service nut start
service nut_upsmon start
```
Verify UPS communication:
```sh
upsc myups@localhost
```
This shows battery level, load, input voltage, and runtime estimate.
NUT handles the shutdown automatically: by default, upsmon runs SHUTDOWNCMD when the UPS reports a critical battery level. Use upsc myups@localhost to verify that battery charge and runtime metrics are being read correctly.
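It is worth rehearsing the shutdown path once before depending on it. NUT can trigger the full forced-shutdown sequence on demand (this really powers the machine off, so do it during a maintenance window):

```sh
# Tell upsmon to run the low-battery shutdown sequence now (as root)
upsmon -c fsd
```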
FAQ
Do I really need ECC RAM for a FreeBSD NAS?
For data you care about, yes. ZFS provides end-to-end data integrity on disk, but a bit flip in RAM between read and write bypasses all ZFS protections. ECC RAM corrects single-bit errors and detects double-bit errors. The price premium for ECC is small compared to the cost of silent data corruption. If budget forces a choice between ECC and more storage, choose ECC.
How many drives should I start with?
For a first NAS, 4 drives in RAID-Z1 or a pair of mirrors gives you a good starting point. RAID-Z2 requires a minimum of 4 drives but 6 is more practical (4 for data, 2 for parity). You can expand later by adding vdevs (mirror pairs or additional RAID-Z groups), but you cannot add individual disks to an existing RAID-Z vdev (except with the new RAID-Z expansion feature in OpenZFS 2.2+).
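Growing the pool later can be sketched as follows; device and vdev names are examples:

```sh
# Add a new mirror vdev; pool capacity grows immediately
zpool add datapool mirror da6 da7

# OpenZFS 2.2+ only: attach a single disk to an existing raidz vdev
# (the vdev name comes from "zpool status")
zpool attach datapool raidz2-0 da8
```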
Should I use a dedicated boot drive?
Yes. Keep the boot pool on separate drives (a pair of mirrored SSDs or USB sticks) from the data pool. This allows you to reinstall or upgrade FreeBSD without touching your data. If a boot drive fails, you still have the data pool intact.
What is the performance difference between RAID-Z1, RAID-Z2, and mirrors?
Mirrors provide the best random read performance (reads can be served from any mirror member) and the fastest resilver times. RAID-Z2 provides better space efficiency but lower random read IOPS. For a NAS serving files to a small number of clients, the difference is negligible. For a NAS serving VMs or databases, mirrors are preferred.
How do I replace a failed drive?
```sh
# Identify the failed drive
zpool status datapool

# Replace it (da1 is failed, da6 is the new drive)
zpool replace datapool da1 da6

# Monitor resilver progress
zpool status datapool
```
The pool remains online during resilver. Performance is reduced but all data remains accessible.
How do I share a ZFS snapshot with users for self-service file recovery?
Enable the ZFS .zfs directory:
```sh
zfs set snapdir=visible datapool/shared/documents
```
Users can then browse \\NAS\documents\.zfs\snapshot\ via Samba to find and recover their own files from any snapshot. This is the FreeBSD equivalent of Windows "Previous Versions."
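The same snapshots are reachable locally on the NAS, which is handy for admin-driven restores; the snapshot and file names below are hypothetical:

```sh
# List available snapshots for the dataset
ls /datapool/shared/documents/.zfs/snapshot/

# Copy a single file back out of a snapshot
cp /datapool/shared/documents/.zfs/snapshot/auto-daily-20240101/report.odt \
   /datapool/shared/documents/
```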