How to Build a NAS with FreeBSD and ZFS
A NAS that runs FreeBSD and ZFS is not a compromise. It is what production storage looks like when you strip away the GUI, the subscription fees, and the vendor lock-in. You get a filesystem that checksums every block, self-heals from corruption, snapshots in milliseconds, and replicates to remote sites with a single command. You get an operating system that has shipped ZFS as a first-class citizen since 2007 and treats stability as a feature, not a marketing claim.
This guide takes you from bare hardware to a working, production-grade NAS. Every command runs on FreeBSD 14.x with OpenZFS 2.2.x. No hypotheticals. No placeholder values you have to guess at. If you can install an operating system and open a terminal, you can build this.
For deep coverage of ZFS itself -- pools, datasets, snapshots, tuning -- see the companion ZFS guide. This article focuses on the full NAS build: hardware, shares, automation, and maintenance.
Table of Contents
- Why FreeBSD for a NAS
- Hardware Recommendations
- FreeBSD Installation for NAS Use
- ZFS Pool Design
- Dataset Layout
- Setting Up Samba (SMB) Shares
- Setting Up NFS for Linux/Unix Clients
- Snapshot Automation
- Monitoring and Alerts
- Remote Replication
- Optional Services in Jails
- Maintenance Schedule
- FAQ
Why FreeBSD for a NAS
Three reasons. Each one is sufficient on its own.
ZFS is native. FreeBSD ships ZFS in the base system. It is built with the kernel, tested in the release cycle, and supported by the core project. On Linux, ZFS is an out-of-tree module maintained by a separate team, rebuilt on every kernel update, and constrained by licensing friction. On FreeBSD, ZFS just works. It has worked for nearly two decades.
Stability is the default. FreeBSD's release engineering is conservative by design. A NAS is not the place for bleeding-edge kernels. It is the place for an OS that boots the same way every time, runs for years between reboots, and does not break your storage stack with a routine update. FreeBSD delivers this.
Jails give you services without bloat. Want to run Jellyfin for media streaming, Nextcloud for file sync, or Transmission for downloads? FreeBSD jails let you run each service in an isolated container with near-zero overhead, no Docker dependency, and no separate VM eating your RAM. Your NAS stays clean. Your services stay contained. See the FreeBSD jails guide for the full walkthrough.
If you are evaluating appliance-style alternatives, read the TrueNAS vs Unraid comparison. TrueNAS Core is itself built on FreeBSD -- which tells you what the storage industry thinks of this stack.
Hardware Recommendations
A NAS does not need a fast CPU. It needs reliable storage I/O, enough RAM for ZFS's ARC cache, and a motherboard that does not lie to the drives about write completion. Here is what to buy.
CPU
Any modern 4-core x86-64 processor is more than enough. An Intel Core i3-12100 or AMD Ryzen 5 5600G will sit nearly idle under a NAS workload. Avoid server Xeons unless you already own one -- the power draw is not justified for a home or small-office NAS.
If you plan to run Jellyfin for hardware transcoding, pick an Intel CPU with Quick Sync. The integrated GPU handles 4K transcode without breaking a sweat.
RAM: The ECC Discussion
ZFS benefits from RAM. The ARC (Adaptive Replacement Cache) uses available memory to cache frequently accessed data. A practical minimum is 8 GB. 16 GB is comfortable for a 4-8 drive NAS. 32 GB is ideal if you plan to run services in jails alongside your storage.
ECC RAM corrects single-bit memory errors before they reach ZFS. The ZFS community has debated ECC for years. Here is the practical answer: ECC is good practice for any server that stores data you care about, but non-ECC systems run ZFS successfully every day. If your motherboard supports ECC and the price delta is small, buy ECC. If it does not, build your NAS anyway and rely on ZFS's checksumming to catch the errors that matter most -- the ones on disk.
HBA vs RAID Card
This is the single most important hardware decision. You want an HBA (Host Bus Adapter), not a hardware RAID card.
ZFS needs direct access to the physical disks. A hardware RAID card sits between ZFS and the drives, hiding the individual disks behind a virtual volume. This defeats ZFS's ability to detect and correct corruption at the block level.
Buy an LSI SAS 9207-8i or 9300-8i (or any card using the LSI SAS2008 or SAS3008 chipset) and flash it to IT mode. These are available used for $20-40 and are the de facto standard for ZFS NAS builds. If your motherboard has enough SATA ports for your drives, you may not need an HBA at all -- onboard SATA works fine.
Never use a RAID card in RAID mode with ZFS. If you have a RAID card, flash it to IT/JBOD mode or replace it.
Drives
For bulk storage, use CMR (Conventional Magnetic Recording) hard drives. Avoid SMR (Shingled Magnetic Recording) drives -- they have severe write performance penalties under ZFS scrub and resilver workloads.
Reliable choices:
- WD Red Plus (CMR) -- good value for 4-8 bay NAS builds
- WD Ultrastar / HGST -- datacenter drives, often available refurbished at good prices
- Seagate Exos -- datacenter line, high endurance
- Seagate IronWolf Pro -- NAS-rated, CMR
For the boot drive, use a small SSD (120-250 GB). A mirrored pair of SSDs for the boot pool is ideal. Do not boot from your data drives.
Case
You need a case with drive bays and airflow. The Fractal Design Node 804 (8 bays), Silverstone CS380 (8 bays), and Jonsbo N series are popular choices. Ensure direct airflow over the drives -- hard drives die from heat more than anything else.
UPS
A UPS is not optional. ZFS is a copy-on-write filesystem and handles power loss better than most, but a UPS protects your drives from the mechanical shock of sudden power cuts and gives you time for a clean shutdown. A basic APC Back-UPS 600VA covers a low-power NAS build. Connect it via USB and configure apcupsd or nut for automatic shutdown on low battery.
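If you go the apcupsd route, the baseline configuration is small. The sketch below assumes a USB-connected APC unit; the thresholds are illustrative values, not the package defaults:

```shell
# /usr/local/etc/apcupsd/apcupsd.conf (after: pkg install -y apcupsd)
UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 20    # begin shutdown when charge drops below 20%
MINUTES 5          # or when under 5 minutes of runtime remain
```

Enable it with `sysrc apcupsd_enable="YES"` and `service apcupsd start`; `apcaccess status` shows live battery data so you can confirm the NAS actually sees the UPS.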
FreeBSD Installation for NAS Use
Download the FreeBSD 14.2-RELEASE installer image from https://www.freebsd.org/where/ and write it to a USB drive.
Installation Steps
- Boot from the USB installer.
- Select Install.
- Choose your keyboard layout.
- Set a hostname: `nas.local` or whatever fits your network.
- Select optional system components. Defaults are fine. You can skip `ports` if you prefer `pkg`.
- At the partitioning step, select Auto (ZFS) for the boot drive. Choose your boot SSD (or a mirror of two SSDs). Leave data drives untouched -- you will configure those manually after installation.
- Set the root password.
- Configure networking. For a NAS, use a static IP or a DHCP reservation. A NAS that changes IP addresses is a NAS that breaks client mounts.
- Select your timezone.
- Enable `sshd` and `ntpd` at boot.
- Add a regular user and add it to the `wheel` group.
- Reboot into the installed system.
Post-Install Baseline
After first boot, update the system and install the packages you will need:
```sh
freebsd-update fetch install
pkg update && pkg upgrade -y
pkg install -y smartmontools samba416 nano tmux
```
Enable ZFS if it is not already enabled (it should be if you chose ZFS for root):
```sh
sysrc zfs_enable="YES"
```
ZFS Pool Design
This is where the real decisions happen. Your pool topology determines your redundancy, performance, and capacity. Choose carefully -- changing vdev topology after creation is limited.
Mirror vs RAIDZ2
For a NAS with 4-6 drives, you have two serious options:
Mirrored vdevs (mirror): Pair drives into mirrors, then stripe across pairs. A 4-drive setup gives you two mirrored vdevs (50% usable capacity). Best random read performance. You can lose one drive per mirror pair. Easiest to expand -- add another mirror pair at any time.
RAIDZ2: A single vdev across all drives with double parity. A 4-drive RAIDZ2 gives you 50% usable capacity (same as mirrors in this case). A 6-drive RAIDZ2 gives you 67% usable. You can lose any two drives. Expansion is harder -- you must add a complete new vdev of the same width, or use RAIDZ expansion (available in OpenZFS 2.3+).
Recommendation: If you have 4 drives, use 2x mirror. If you have 6+ drives, RAIDZ2 is a strong choice. Never use RAIDZ1 for drives larger than 2 TB -- rebuild times on large drives create a dangerous window where a second failure destroys the pool.
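To make the capacity trade-off concrete, here is a back-of-envelope calculator for the two layouts. It is a sketch: real usable space runs a few percent lower once ZFS metadata and the free-space headroom discussed in the maintenance section are accounted for.

```sh
#!/bin/sh
# Rough usable capacity for the two topologies discussed above.
# usage: usable_tb <mirror|raidz2> <drive-count> <drive-size-TB>
usable_tb() {
    case $1 in
        mirror) echo $(( $2 / 2 * $3 )) ;;      # half the drives hold copies
        raidz2) echo $(( ($2 - 2) * $3 )) ;;    # two drives' worth of parity
    esac
}

echo "4 x 8 TB as mirrors: $(usable_tb mirror 4 8) TB usable"
echo "6 x 8 TB as RAIDZ2:  $(usable_tb raidz2 6 8) TB usable"
```

Note that at 4 drives the two layouts tie on capacity (16 TB here), so the decision rests on rebuild behavior and expansion, exactly as described above.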
Creating the Pool
Identify your data drives:
```sh
camcontrol devlist
geom disk list
```
Example: you have four drives at da0, da1, da2, da3.
Mirrored vdevs (recommended for 4 drives):
```sh
zpool create -o ashift=12 -O compression=lz4 -O atime=off \
    -O xattr=sa -O aclmode=passthrough -O aclinherit=passthrough \
    tank mirror da0 da1 mirror da2 da3
```
RAIDZ2 (for 6 drives):
```sh
zpool create -o ashift=12 -O compression=lz4 -O atime=off \
    -O xattr=sa -O aclmode=passthrough -O aclinherit=passthrough \
    tank raidz2 da0 da1 da2 da3 da4 da5
```
Key options explained:
- `ashift=12` -- aligns to 4K sectors. Correct for every modern drive.
- `compression=lz4` -- nearly free compression. Always enable it.
- `atime=off` -- disables access time updates. Eliminates pointless write I/O.
- `xattr=sa` -- stores extended attributes in the inode. Required for Samba performance.
- `aclmode=passthrough` and `aclinherit=passthrough` -- allow ACLs to work properly with Samba.
Hot Spare
If you have an extra drive, add it as a hot spare:
```sh
zpool add tank spare da4
```
ZFS will automatically resilver to the spare if a drive fails. This relies on the `zfsd` fault-management daemon -- make sure it is enabled with `sysrc zfsd_enable="YES"`.
Verify the Pool
```sh
zpool status tank
zpool list tank
```
For a deep dive into pool types, properties, and tuning, see the ZFS guide.
Dataset Layout
Do not dump everything into the root of your pool. Create separate datasets for each logical share. Datasets are free, and they give you independent snapshot, compression, and quota controls.
```sh
zfs create tank/documents
zfs create tank/media
zfs create tank/media/movies
zfs create tank/media/tv
zfs create tank/media/music
zfs create tank/photos
zfs create tank/backups
zfs create tank/timemachine
```
Set Properties Per Dataset
```sh
# Large media files benefit from larger record sizes
zfs set recordsize=1M tank/media

# Time Machine needs refquota to prevent it from consuming all space
zfs set refquota=500G tank/timemachine

# Backups can use higher compression
zfs set compression=zstd tank/backups

# Documents are small files -- default 128K recordsize is fine
zfs set recordsize=128K tank/documents
```
Mount Points
By default, tank/documents mounts at /tank/documents. You can override this:
```sh
zfs set mountpoint=/mnt/documents tank/documents
```
But the default layout under /tank/ is clean and predictable. Stick with it unless you have a reason not to.
Setting Up Samba (SMB) Shares
Samba provides file sharing for Windows and macOS clients. Install it if you have not already:
```sh
pkg install -y samba416
```
Create a Samba User
Samba maintains its own user database. The user must also exist as a system user:
```sh
pw useradd -n nasuser -m -s /usr/sbin/nologin
smbpasswd -a nasuser
```
Configuration
Edit /usr/local/etc/smb4.conf:
```ini
[global]
server string = FreeBSD NAS
workgroup = WORKGROUP
security = user
passdb backend = tdbsam

# Performance
socket options = TCP_NODELAY IPTOS_LOWDELAY
read raw = yes
write raw = yes
use sendfile = yes
aio read size = 16384
aio write size = 16384

# macOS compatibility
fruit:metadata = stream
fruit:model = MacSamba
fruit:posix_rename = yes
fruit:veto_appledouble = no
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes
vfs objects = fruit streams_xattr

# Disable printing
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes

# Logging
log file = /var/log/samba4/log.%m
max log size = 50
log level = 1

[documents]
path = /tank/documents
valid users = nasuser
writable = yes
browseable = yes
create mask = 0664
directory mask = 0775

[media]
path = /tank/media
valid users = nasuser
writable = yes
browseable = yes
create mask = 0664
directory mask = 0775

[photos]
path = /tank/photos
valid users = nasuser
writable = yes
browseable = yes
create mask = 0664
directory mask = 0775

[timemachine]
path = /tank/timemachine
valid users = nasuser
writable = yes
browseable = yes
fruit:time machine = yes
fruit:time machine max size = 500G
```
Set Permissions
```sh
chown -R nasuser:nasuser /tank/documents /tank/media /tank/photos /tank/timemachine
```
Enable and Start Samba
```sh
sysrc samba_server_enable="YES"
service samba_server start
```
Test the configuration:
```sh
testparm /usr/local/etc/smb4.conf
```
From a Windows client, open \\nas-ip\documents in File Explorer. From macOS, use Finder > Go > Connect to Server > smb://nas-ip/documents.
Setting Up NFS for Linux/Unix Clients
NFS is simpler than Samba and performs better for Unix-to-Unix file sharing. FreeBSD includes NFS in the base system.
Enable NFS Services
```sh
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc nfsv4_server_enable="YES"
sysrc nfsuserd_enable="YES"
```
Configure Exports
Edit /etc/exports:
```shell
/tank/documents -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0
/tank/media -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0
/tank/backups -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0
```
This exports three datasets to all clients on the 192.168.1.0/24 subnet. Adjust the network range to match your LAN. The -alldirs option allows mounting subdirectories. The -maproot=root option maps the remote root user to local root -- restrict this in multi-user environments.
For a more restrictive setup, export to specific hosts:
```shell
/tank/backups -alldirs -maproot=root 192.168.1.50
```
Start NFS
```sh
service rpcbind start
service nfsd start
service mountd start
service nfsuserd start
```
Mount from a Linux Client
```sh
mount -t nfs4 nas-ip:/tank/documents /mnt/nas-documents
```
Add to /etc/fstab on the client for persistent mounts:
```shell
nas-ip:/tank/documents /mnt/nas-documents nfs4 rw,hard,intr 0 0
```
Snapshot Automation
ZFS snapshots are instant, free (until you delete the original data), and the single best backup mechanism you have for local recovery. Automate them.
Cron-Based Snapshot Script
Create /usr/local/bin/zfs-snap.sh:
```sh
#!/bin/sh
# ZFS automatic snapshot management
# Creates timestamped snapshots and prunes old ones

DATASETS="tank/documents tank/media tank/photos tank/backups tank/timemachine"
DATE=$(date +%Y-%m-%d_%H-%M)
KEEP_HOURLY=24
KEEP_DAILY=30
KEEP_WEEKLY=12

# Determine snapshot type based on time
HOUR=$(date +%H)
DAY_OF_WEEK=$(date +%u)

if [ "$DAY_OF_WEEK" = "7" ] && [ "$HOUR" = "00" ]; then
    TYPE="weekly"
    KEEP=$KEEP_WEEKLY
elif [ "$HOUR" = "00" ]; then
    TYPE="daily"
    KEEP=$KEEP_DAILY
else
    TYPE="hourly"
    KEEP=$KEEP_HOURLY
fi

# Create snapshots
for ds in $DATASETS; do
    SNAPNAME="${ds}@auto-${TYPE}-${DATE}"
    zfs snapshot "$SNAPNAME"
    if [ $? -eq 0 ]; then
        logger -t zfs-snap "Created snapshot: $SNAPNAME"
    else
        logger -t zfs-snap "FAILED to create snapshot: $SNAPNAME"
    fi
done

# Prune old snapshots of this type, oldest first.
# (FreeBSD's head(1) does not support GNU-style "head -n -N",
# so compute the number of snapshots to drop explicitly.)
for ds in $DATASETS; do
    SNAPS=$(zfs list -t snapshot -o name -s creation -r "$ds" | grep "@auto-${TYPE}-")
    COUNT=$(echo "$SNAPS" | grep -c .)
    PRUNE=$((COUNT - KEEP))
    if [ "$PRUNE" -gt 0 ]; then
        echo "$SNAPS" | head -n "$PRUNE" | while read snap; do
            zfs destroy "$snap"
            logger -t zfs-snap "Destroyed old snapshot: $snap"
        done
    fi
done
```
Make it executable and add it to cron:
```sh
chmod +x /usr/local/bin/zfs-snap.sh
crontab -e
```
Add these lines:
```shell
0 * * * * /usr/local/bin/zfs-snap.sh
```
This runs every hour. The script determines whether to create an hourly, daily, or weekly snapshot based on the current time.
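The branch logic is easy to verify in isolation. This sketch reimplements just the type selection from the script so you can check it against arbitrary times (`date +%u` reports Sunday as 7):

```sh
#!/bin/sh
# Mirror of the schedule branch in zfs-snap.sh: first argument is the
# hour (00-23), second is the day of week (1-7, Sunday = 7).
snap_type() {
    if [ "$2" = "7" ] && [ "$1" = "00" ]; then
        echo weekly
    elif [ "$1" = "00" ]; then
        echo daily
    else
        echo hourly
    fi
}

snap_type 00 7   # Sunday midnight: weekly
snap_type 00 3   # Wednesday midnight: daily
snap_type 14 7   # Sunday afternoon: hourly
```

Note that weekly snapshots replace (rather than add to) the daily slot at Sunday midnight, so a given hour only ever produces one snapshot per dataset.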
Alternative: Sanoid
If you prefer a mature tool with more features, install sanoid:
```sh
pkg install -y sanoid
```
Configure /usr/local/etc/sanoid/sanoid.conf:
```ini
[tank/documents]
use_template = production

[tank/media]
use_template = production

[tank/photos]
use_template = production

[template_production]
hourly = 24
daily = 30
weekly = 12
monthly = 6
autosnap = yes
autoprune = yes
```
Enable and start it:
```sh
sysrc sanoid_enable="YES"
service sanoid start
```
Sanoid also comes with syncoid for replication, which we cover next.
Monitoring and Alerts
A NAS you do not monitor is a NAS that will fail silently. Set up three layers of monitoring.
Layer 1: ZFS Pool Health
Check pool status with:
```sh
zpool status tank
```
The output should show ONLINE for every device. Any DEGRADED, FAULTED, or UNAVAIL status requires immediate attention.
Automate this with a cron job. Create /usr/local/bin/zpool-check.sh:
```sh
#!/bin/sh
# Alert on unhealthy ZFS pools
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
    echo "$STATUS" | mail -s "ZFS POOL ALERT on $(hostname)" admin@yourdomain.com
    logger -t zpool-check "ALERT: ZFS pool is not healthy"
fi
```
```sh
chmod +x /usr/local/bin/zpool-check.sh
```
Add to cron (run every 5 minutes):
```shell
*/5 * * * * /usr/local/bin/zpool-check.sh
```
Layer 2: SMART Monitoring
Install and configure smartmontools:
```sh
pkg install -y smartmontools
sysrc smartd_enable="YES"
```
Edit /usr/local/etc/smartd.conf:
```shell
# Monitor all drives, email on any issue
DEVICESCAN -a -o on -S on -n standby,q -s (S/../.././02|L/../../7/03) -W 4,45,55 -m admin@yourdomain.com
```
This configuration:
- Enables all SMART tests (`-a`)
- Runs short self-tests daily at 2 AM (`S/../.././02`)
- Runs long self-tests weekly on Sundays at 3 AM (`L/../../7/03`)
- Alerts on temperature changes of 4 degrees, warns at 45C, critical at 55C
- Sends email on any issue
```sh
service smartd start
```
Layer 3: Email Configuration
For the alerts above to work, configure mail delivery. The simplest approach is to relay through an external SMTP server. Install msmtp:
```sh
pkg install -y msmtp
```
Create /usr/local/etc/msmtp.conf:
```shell
defaults
auth on
tls on
tls_trust_file /etc/ssl/cert.pem
logfile /var/log/msmtp.log

account default
host smtp.gmail.com
port 587
from yournas@gmail.com
user yournas@gmail.com
password your-app-password
```
```sh
chmod 600 /usr/local/etc/msmtp.conf
```
Set it as the system mailer in /etc/mail.rc:
```shell
set sendmail=/usr/local/bin/msmtp
```
Test it:
```sh
echo "Test from NAS" | mail -s "NAS Alert Test" admin@yourdomain.com
```
For comprehensive monitoring including metrics, dashboards, and alerting, see the FreeBSD monitoring guide.
Remote Replication
Local snapshots protect against accidental deletion. Remote replication protects against hardware failure, fire, flood, and theft. ZFS send/receive makes this trivially simple.
Manual Send/Receive
Send the latest snapshot of tank/documents to a remote FreeBSD host:
```sh
zfs send tank/documents@auto-daily-2026-03-29_00-00 | \
    ssh backup-host zfs receive -F backuppool/documents
```
For incremental sends (much faster after the initial full send):
```sh
zfs send -i tank/documents@auto-daily-2026-03-28_00-00 \
    tank/documents@auto-daily-2026-03-29_00-00 | \
    ssh backup-host zfs receive -F backuppool/documents
```
Automated Replication with Syncoid
If you installed sanoid earlier, you already have syncoid. It handles the incremental logic automatically:
```sh
syncoid tank/documents backup-host:backuppool/documents
syncoid tank/media backup-host:backuppool/media
syncoid tank/photos backup-host:backuppool/photos
```
Automate it via cron:
```shell
0 2 * * * /usr/local/bin/syncoid --no-sync-snap tank/documents backup-host:backuppool/documents
0 3 * * * /usr/local/bin/syncoid --no-sync-snap tank/media backup-host:backuppool/media
0 4 * * * /usr/local/bin/syncoid --no-sync-snap tank/photos backup-host:backuppool/photos
```
Set up SSH key-based authentication for the root user (or a dedicated replication user) on the remote host so these commands can run unattended.
Replication to Cloud Storage
For offsite replication without a second physical server, you can use zfs send piped to a cloud storage tool. For example, sending encrypted snapshots to a remote storage provider via rclone:
```sh
zfs send tank/documents@auto-daily-2026-03-29_00-00 | \
    gzip | \
    rclone rcat remote:bucket/documents-2026-03-29.zfs.gz
```
This is a last-resort backup, not a primary replication target. Restoring from cloud is slow. But it is better than losing everything.
Optional Services in Jails
FreeBSD jails let you run additional services on your NAS without polluting the base system. Each jail is an isolated environment with its own filesystem, network stack, and package set.
Full jail setup is covered in the FreeBSD jails guide. Here is a quick overview of common NAS services.
Jellyfin (Media Server)
Stream your media library to any device. Install Jellyfin in a jail and mount your media dataset read-only:
```sh
# In the jail
pkg install -y jellyfin
sysrc jellyfin_enable="YES"
service jellyfin start
```
Mount the media dataset into the jail using nullfs in your jail configuration:
```shell
mount.fstab = "/etc/fstab.jellyfin";
```
In /etc/fstab.jellyfin:
```shell
/tank/media /jails/jellyfin/mnt/media nullfs ro 0 0
```
Access Jellyfin at http://nas-ip:8096.
Nextcloud (File Sync)
Run your own cloud storage. Nextcloud in a jail gives you file sync, calendar, contacts, and collaboration without trusting a third party with your data.
```sh
# In the jail
pkg install -y nextcloud-php83 nginx php83 postgresql16-server
```
Point Nextcloud's data directory at a nullfs-mounted dataset for storage.
Transmission (Torrent Client)
Download Linux ISOs (or whatever you download) directly to your NAS:
```sh
# In the jail
pkg install -y transmission-daemon
sysrc transmission_enable="YES"
service transmission start
```
Access the web UI at http://nas-ip:9091.
The key principle: jails consume negligible resources, keep services isolated from each other and from the NAS base system, and can be destroyed and recreated without affecting your data.
Maintenance Schedule
A NAS is not "set it and forget it." These are the tasks that keep your data safe.
Weekly
- ZFS scrub (run one scrub per month at minimum; weekly is better for critical data):
```sh
zpool scrub tank
```
Automate via cron:
```shell
0 1 * * 0 /sbin/zpool scrub tank
```
This starts a scrub every Sunday at 1 AM. A scrub reads every block in the pool and verifies checksums. It is the only way to detect silent data corruption (bit rot) before it matters.
- Review SMART reports:
```sh
smartctl -a /dev/da0
smartctl -a /dev/da1
```
Look at Reallocated_Sector_Ct, Current_Pending_Sector, and Offline_Uncorrectable. Any non-zero values on these counters mean the drive is developing problems. Replace it proactively.
Monthly
- Check pool status and capacity:
```sh
zpool status tank
zpool list -v tank
zfs list -o name,used,avail,refer,compressratio
```
Do not let your pool exceed 80% capacity. ZFS performance degrades as the pool fills because the copy-on-write allocator has fewer contiguous blocks to work with.
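You can fold the 80% rule into the cron-based monitoring from earlier. The threshold test below is split into a small function so it can be checked on its own; the live `zpool list` call and the mail address are shown as comments because they depend on your pool and mail setup:

```sh
#!/bin/sh
# Succeeds when a capacity string such as "85%" (the CAP column printed
# by `zpool list -H -o capacity tank`) meets or exceeds the limit.
over_limit() {
    [ "$(printf '%s' "$1" | tr -d '%')" -ge "$2" ]
}

# In a cron script you would wire it up roughly like this:
#   cap=$(zpool list -H -o capacity tank)
#   over_limit "$cap" 80 && \
#       echo "tank at $cap" | mail -s "ZFS capacity warning" admin@yourdomain.com

over_limit "85%" 80 && echo "85% -> would alert"
over_limit "42%" 80 || echo "42% -> healthy"
```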
- Apply FreeBSD security patches:
```sh
freebsd-update fetch install
pkg update && pkg upgrade -y
```
Reboot only if a kernel update was applied.
Quarterly
- Long SMART test on every drive:
```sh
for drive in da0 da1 da2 da3; do
    smartctl -t long /dev/$drive
done
```
A long test takes several hours per drive. Check results afterward:
```sh
smartctl -l selftest /dev/da0
```
- Verify backup integrity. Restore a snapshot from your remote replication target and confirm the data is intact. A backup you have never tested is not a backup.
- Review snapshot usage:
```sh
zfs list -t snapshot -o name,used,refer -s used
```
Snapshots that hold large amounts of unique data may indicate deleted files that are still consuming space.
Annually
- Review drive warranty status. Replace drives that are approaching end-of-warranty or showing any SMART warnings.
- Evaluate capacity planning. If you are consistently above 60% pool usage, plan expansion.
- Test your UPS. Pull the plug and confirm the NAS shuts down cleanly.
FAQ
How much RAM does ZFS actually need?
The "1 GB per TB of storage" rule is a myth from the Solaris days and refers to deduplication (which you should not enable anyway). For a NAS without dedup, 8 GB is a practical minimum, 16 GB is comfortable, and 32 GB is generous. ZFS will use whatever RAM is available for ARC caching and give it back when applications need it.
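If you do want to reserve RAM for jails rather than let the ARC claim it, FreeBSD exposes a loader tunable to cap the ARC. A minimal sketch, with an illustrative 8 GB limit (the value is in bytes):

```shell
# /boot/loader.conf -- cap the ARC at 8 GB (8 * 1024^3 bytes)
vfs.zfs.arc_max="8589934592"
```

Left unset, ZFS sizes the ARC dynamically, which is the right default for a dedicated NAS.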
Should I use RAIDZ1?
Not with modern drive sizes. A 12 TB drive takes 20+ hours to resilver. During that time, a second drive failure destroys the entire pool. RAIDZ2 tolerates two simultaneous failures. Use it. If you only have three drives, use a 3-way mirror instead of RAIDZ1.
Can I expand a RAIDZ vdev?
OpenZFS 2.3+ supports RAIDZ expansion by adding a single disk to an existing RAIDZ vdev. FreeBSD 14.x ships OpenZFS 2.2.x, so check your version with zpool --version. If expansion is not yet available, your options are: add a new vdev of the same type, or backup, destroy, and recreate with more drives.
Should I use deduplication?
No. ZFS deduplication requires approximately 5 GB of RAM per TB of stored data for the dedup table. It also degrades write performance significantly. Use compression instead -- lz4 is essentially free, and zstd provides better ratios for archival data. These give you most of the space savings with none of the RAM or performance cost.
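To see why the RAM cost rules dedup out, run the arithmetic for your own pool. A quick sketch using the roughly 5 GB per TB figure above:

```sh
#!/bin/sh
# Dedup-table RAM estimate at roughly 5 GB of RAM per TB of stored data.
# usage: dedup_ram_gb <stored-TB>
dedup_ram_gb() {
    echo $(( $1 * 5 ))
}

echo "10 TB stored needs about $(dedup_ram_gb 10) GB of RAM for dedup"
echo "40 TB stored needs about $(dedup_ram_gb 40) GB of RAM for dedup"
```

At 40 TB of stored data the dedup table alone would outgrow the 32 GB "generous" configuration recommended earlier, before the OS or ARC get anything.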
How do I replace a failed drive?
When a drive fails, ZFS marks it as FAULTED or UNAVAIL in zpool status. Replace it:
```sh
# Identify the failed drive
zpool status tank

# Replace it (after physically swapping the drive)
zpool replace tank da1 da4
```
Where da1 is the failed device and da4 is the new drive. ZFS will resilver the new drive automatically. If you configured a hot spare, ZFS may have already resilvered to the spare via zfsd.
What about encryption?
ZFS supports native encryption at the dataset level. Enable it at dataset creation time:
```sh
zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase tank/private
```
You will need to unlock the dataset after each reboot:
```sh
zfs load-key tank/private
zfs mount tank/private
```
Encryption is per-dataset, so you can mix encrypted and unencrypted datasets in the same pool.
How does this compare to TrueNAS?
TrueNAS Core is FreeBSD with a web GUI on top. If you want a point-and-click interface, use TrueNAS. If you want full control, fewer abstraction layers, and the ability to customize everything, build your own FreeBSD NAS. The underlying storage stack -- FreeBSD + OpenZFS -- is identical. See the full TrueNAS vs Unraid comparison for more detail.
Conclusion
A FreeBSD NAS with ZFS is a storage platform you can trust for years. The filesystem checksums and self-heals your data. Snapshots let you recover from mistakes in seconds. Replication protects against site-level failures. Jails give you services without the complexity of a separate hypervisor.
The total cost is a few hours of setup and a few minutes of monthly maintenance. In exchange, you get a storage system that enterprise teams pay tens of thousands of dollars for -- running on hardware you control, with software you can audit, and no subscription fees.
Build it. Run it. Trust it.