ZFS Utilities on FreeBSD: Complete Review
ZFS is FreeBSD's flagship filesystem and volume manager. It combines the roles of a traditional filesystem, a logical volume manager, and a RAID controller into a single, integrated system. FreeBSD has shipped with ZFS support since version 7.0 (2008) and switched to the OpenZFS codebase in FreeBSD 13.0. The ZFS utilities on FreeBSD are mature, well-integrated, and form the backbone of storage management on the platform.
This review covers the core ZFS command-line utilities -- zpool, zfs, arc_summary, zdb, and zfs-stats -- along with FreeBSD's boot environment system, and a comparison with btrfs tools on Linux.
zpool: Pool Management
The zpool command manages storage pools -- the fundamental ZFS storage abstraction that sits between physical disks and ZFS datasets. A pool aggregates one or more virtual devices (vdevs) into a single storage namespace.
Creating Pools
```sh
# Single disk (no redundancy)
zpool create tank da0

# Mirror (2-way)
zpool create tank mirror da0 da1

# RAID-Z1 (single parity, minimum 3 disks)
zpool create tank raidz1 da0 da1 da2

# RAID-Z2 (double parity, minimum 4 disks)
zpool create tank raidz2 da0 da1 da2 da3

# RAID-Z3 (triple parity, minimum 5 disks)
zpool create tank raidz3 da0 da1 da2 da3 da4

# Striped mirrors (best performance + redundancy)
zpool create tank mirror da0 da1 mirror da2 da3

# With dedicated log and cache devices
zpool create tank raidz2 da0 da1 da2 da3 \
    log mirror nvd0 nvd1 \
    cache nvd2
```
Pool Status and Health
```sh
# Pool status overview
zpool status

# Detailed status with error counts
zpool status -v

# Pool I/O statistics (2 second interval)
zpool iostat 2

# Per-vdev I/O statistics
zpool iostat -v 2

# Pool space usage
zpool list

# Detailed space accounting
zpool list -v
```
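These status commands feed naturally into monitoring. A minimal sketch, assuming the tab-separated output format of `zpool list -H -o name,health` (the `check_pools` helper and the sample input are hypothetical):

```sh
# check_pools: read "name<TAB>health" lines (the format of
# `zpool list -H -o name,health`) and report pools that are not ONLINE.
# Exits non-zero if any pool needs attention.
check_pools() {
    awk -F '\t' '$2 != "ONLINE" { print $1 ": " $2; bad = 1 }
                 END { exit bad }'
}

# Sample input standing in for live zpool output
printf 'tank\tONLINE\nbackup\tDEGRADED\n' | check_pools   # prints "backup: DEGRADED"
```

On a live system you would pipe in `zpool list -H -o name,health` instead of the sample `printf`.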
Pool Maintenance
```sh
# Scrub (verify all checksums, repair if possible)
zpool scrub tank

# Check scrub progress
zpool status tank

# Clear error counters after replacing a failed disk
zpool clear tank

# Import/export pools (for moving between systems)
zpool export tank
zpool import tank

# List available pools for import
zpool import

# History of all pool operations
zpool history tank
```
Disk Replacement
```sh
# Replace a failed disk
zpool replace tank da1 da4

# Offline a disk for maintenance
zpool offline tank da1

# Bring a disk back online
zpool online tank da1

# Attach a disk to create a mirror from a single vdev
zpool attach tank da0 da1

# Detach a mirror member
zpool detach tank da1
```
Pool Properties
```sh
# List all pool properties
zpool get all tank

# Set autotrim for SSDs
zpool set autotrim=on tank

# Enable pool features
zpool upgrade tank
```
zfs: Dataset Management
The zfs command manages datasets -- filesystems, volumes (zvols), snapshots, and clones within a pool.
Creating and Managing Datasets
```sh
# Create a filesystem dataset
zfs create tank/data

# Create with specific properties
zfs create -o compression=lz4 -o atime=off -o recordsize=1M tank/backups

# Create a volume (block device)
zfs create -V 50G tank/swap

# Set properties on existing dataset
zfs set compression=zstd tank/data

# Get a specific property
zfs get compression tank/data

# Get all properties
zfs get all tank/data

# List all datasets
zfs list

# List with specific columns
zfs list -o name,used,avail,refer,compressratio
```
Snapshots
Snapshots are ZFS's most powerful feature for data protection. They are instantaneous, space-efficient, and form the basis for replication.
```sh
# Create a snapshot
zfs snapshot tank/data@2026-04-09

# Create recursive snapshot (all child datasets)
zfs snapshot -r tank@daily-2026-04-09

# List snapshots
zfs list -t snapshot

# List snapshots of a specific dataset
zfs list -t snapshot -r tank/data

# Compare changes since snapshot
zfs diff tank/data@2026-04-09

# Rollback to a snapshot (destroys changes after snapshot)
zfs rollback tank/data@2026-04-09

# Destroy a snapshot
zfs destroy tank/data@2026-04-09

# Destroy an inclusive range of snapshots (% range syntax)
zfs destroy tank/data@daily-2026-04-01%daily-2026-04-09
```
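Dated snapshot names like those above are easy to generate in scripts. A small sketch (the `snapname` helper is hypothetical; it simply pairs a dataset with a UTC date):

```sh
# snapname: build a dataset@auto-YYYY-MM-DD snapshot name,
# matching the dated naming used in the examples above.
snapname() {
    printf '%s@auto-%s\n' "$1" "$(date -u +%Y-%m-%d)"
}

snapname tank/data        # e.g. tank/data@auto-2026-04-09
# Feed it straight to zfs:  zfs snapshot "$(snapname tank/data)"
```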
Send/Receive (Replication)
```sh
# Full send to a file
zfs send tank/data@snap1 > /backup/data-snap1.zfs

# Full send to a remote host
zfs send tank/data@snap1 | ssh backup-host zfs receive pool/data

# Incremental send (much faster for regular replication)
zfs send -i tank/data@snap1 tank/data@snap2 | ssh backup-host zfs receive pool/data

# Compressed send (reduces bandwidth)
zfs send --compressed tank/data@snap2 | ssh backup-host zfs receive pool/data

# Resume interrupted transfers
zfs send -t <resume_token> | ssh backup-host zfs receive pool/data
```
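An incremental send needs the newest snapshot that exists on both sides. One way to find it, sketched with `comm` over sample snapshot lists (the `latest_common` helper and sample data are hypothetical; in practice each list would come from `zfs list -H -t snapshot -o name`, run locally and over ssh):

```sh
# latest_common: print the last snapshot name present in both sorted
# lists (file arguments) -- the base for the next incremental send.
# Assumes date-style names whose lexical order matches their age.
latest_common() {
    comm -12 "$1" "$2" | tail -n 1
}

# Sample data standing in for local and remote snapshot listings
printf 'snap1\nsnap2\nsnap3\n' > /tmp/local.txt
printf 'snap1\nsnap2\n'        > /tmp/remote.txt

latest_common /tmp/local.txt /tmp/remote.txt   # prints snap2
# Then: zfs send -i tank/data@snap2 tank/data@snap3 | ssh backup-host zfs receive pool/data
```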
Quotas and Reservations
```sh
# Set a quota (maximum space a dataset can use)
zfs set quota=100G tank/data

# Set a reservation (guaranteed space for a dataset)
zfs set reservation=50G tank/data

# Reference quota (excludes snapshot space)
zfs set refquota=80G tank/data

# User and group quotas
zfs set userquota@www=50G tank/data
zfs set groupquota@staff=200G tank/data

# List user space usage
zfs userspace tank/data
```
Encryption
```sh
# Create an encrypted dataset
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# Lock a dataset (unmount, then unload the key)
zfs unmount tank/secure
zfs unload-key tank/secure

# Unlock a dataset
zfs load-key tank/secure
zfs mount tank/secure

# Change passphrase
zfs change-key tank/secure
```
arc_summary: ARC Cache Analysis
The ARC (Adaptive Replacement Cache) is ZFS's primary read cache in RAM. Understanding ARC behavior is essential for performance tuning. On FreeBSD, arc_summary ships with the OpenZFS tools; the sysutils/zfs-stats port provides a similar report.
```sh
arc_summary
```
This produces detailed output covering:
- ARC size: Current, target, minimum, and maximum sizes
- ARC hit rate: Overall cache effectiveness (target: above 90%)
- MRU/MFU breakdown: Most Recently Used vs Most Frequently Used cache distribution
- Prefetch statistics: How effectively ZFS predicts sequential reads
- L2ARC statistics: Second-level cache (SSD) hit rate and throughput
- Demand data vs metadata: Balance between data and metadata caching
Key Metrics to Watch
```sh
# Quick ARC size check
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max

# ARC hit ratio
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```
Tuning ARC Size
By default, FreeBSD lets the ARC grow to consume most of system RAM. For systems running memory-intensive applications (databases, jails), limit the ARC:
```sh
# Set max ARC to 8 GB in /boot/loader.conf
vfs.zfs.arc_max="8589934592"
```
Or at runtime:
```sh
sysctl vfs.zfs.arc_max=8589934592
```
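The byte value above is just 8 GiB expressed in bytes; a one-liner avoids arithmetic mistakes when picking a different limit (the `gib_bytes` helper is illustrative):

```sh
# gib_bytes: convert a GiB count to the byte value vfs.zfs.arc_max expects
gib_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_bytes 8    # prints 8589934592
# sysctl vfs.zfs.arc_max="$(gib_bytes 8)"
```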
zdb: ZFS Debugger
zdb is a low-level debugging tool for examining ZFS on-disk structures. It is not for daily use but is invaluable for troubleshooting pool issues and understanding ZFS internals.
Common Uses
```sh
# Display pool configuration (label data)
zdb -l /dev/da0

# Display pool-wide statistics
zdb tank

# Display dataset metadata
zdb -d tank/data

# Display object metadata
zdb -d tank/data 2

# Verify pool integrity (more thorough than scrub)
zdb -cc tank

# Display space maps (advanced)
zdb -mmm tank

# Dump ZIL (ZFS Intent Log) contents
zdb -i tank
```
Recovering from Problems
When a pool will not import:
```sh
# Check all readable labels
zdb -l /dev/da0
zdb -l /dev/da1
zdb -l /dev/da2

# Try importing with recovery mode
zpool import -F tank

# Import read-only for data recovery
zpool import -o readonly=on tank
```
zdb output is technical and requires understanding of ZFS internals. It is a diagnostic tool, not an administrative one.
zfs-stats and Additional Tools
sysctl for ZFS Statistics
FreeBSD exposes ZFS statistics through the sysctl tree:
```sh
# All ZFS parameters
sysctl -a | grep zfs

# ARC statistics
sysctl kstat.zfs.misc.arcstats

# ZIO statistics
sysctl kstat.zfs.misc.zio_stats

# DBUF statistics
sysctl kstat.zfs.misc.dbufstats

# Transaction group statistics
sysctl kstat.zfs.misc.txgs
```
zpool iostat for Performance Monitoring
```sh
# Basic I/O stats every 2 seconds
zpool iostat 2

# Per-vdev breakdown
zpool iostat -v 2

# Latency histograms
zpool iostat -w 2

# Request size histograms
zpool iostat -r 2

# Average latency columns alongside throughput
zpool iostat -l 2
```
gstat for Disk I/O
FreeBSD's gstat shows GEOM-level disk I/O, which complements ZFS-level statistics:
```sh
gstat -a
```
Boot Environments with bectl
FreeBSD's bectl manages ZFS boot environments -- complete, bootable snapshots of the operating system. This is one of FreeBSD's most compelling features for system administration.
Understanding Boot Environments
A boot environment is a clone of the root filesystem dataset. Since ZFS clones are copy-on-write, creating a boot environment is instant and uses no extra space until files diverge. You can:
- Create a BE before system upgrades and roll back if something breaks
- Keep multiple OS versions available for booting
- Test configuration changes safely
bectl Commands
```sh
# List boot environments
bectl list

# Create a new boot environment
bectl create pre-upgrade

# Create from a snapshot of an existing BE
bectl create -e default@2026-04-09 restored-state

# Activate a boot environment for next boot
bectl activate pre-upgrade

# Temporarily boot into a BE (one-time)
bectl activate -t pre-upgrade

# Mount a BE for inspection
bectl mount pre-upgrade /mnt

# Unmount
bectl umount pre-upgrade

# Rename
bectl rename pre-upgrade pre-upgrade-13.2

# Destroy
bectl destroy old-be

# Jail into a boot environment
bectl jail pre-upgrade
```
Practical Workflow: Safe System Upgrade
```sh
# Create a boot environment before upgrading
bectl create pre-14.1-upgrade

# Perform the upgrade
freebsd-update fetch install

# If the upgrade fails, revert
bectl activate pre-14.1-upgrade
reboot

# If the upgrade works, clean up
bectl destroy pre-14.1-upgrade
```
This workflow is why many FreeBSD administrators consider boot environments indispensable. It makes system upgrades dramatically safer -- you can always roll back to the exact state before the upgrade.
ZFS vs btrfs Tools
btrfs is Linux's copy-on-write filesystem, often compared to ZFS. Here is how their tooling compares.
Pool/Volume Management
ZFS uses zpool to manage pools of disks and zfs for datasets. btrfs combines both into the btrfs command with subcommands.
```sh
# ZFS: create a mirrored pool
zpool create tank mirror da0 da1

# btrfs: create a mirrored filesystem
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
```
ZFS's pool concept is more flexible. You can add vdevs to a pool (expanding storage) and manage datasets independently. btrfs uses a balance operation to redistribute data across devices, which is slower and less predictable.
Snapshots
Both support snapshots, but ZFS's implementation is more mature:
```sh
# ZFS
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | ssh remote zfs receive pool/data

# btrfs (send requires a read-only snapshot, hence -r)
btrfs subvolume snapshot -r /data /data/.snapshots/snap1
btrfs send /data/.snapshots/snap1 | ssh remote btrfs receive /pool/data
```
ZFS send/receive is more robust: it supports encrypted, compressed, and resumable sends. btrfs send/receive works but has had historical reliability issues.
Scrubbing and Repair
```sh
# ZFS
zpool scrub tank

# btrfs
btrfs scrub start /mnt/data
```
Both verify checksums across all data. ZFS automatically repairs corrupted data from redundant copies. btrfs can repair from mirrors and RAID1, but RAID5/6 has had long-standing reliability problems that are only recently being addressed.
Tooling Maturity
ZFS tools on FreeBSD are stable, well-documented, and have not had breaking changes in years. btrfs tools on Linux are still evolving, with occasional command-line interface changes. ZFS has zdb for low-level debugging; btrfs has btrfs inspect-internal and btrfs-debug-tree, but these are less documented.
Key Difference
ZFS on FreeBSD is a production filesystem with decades of enterprise deployment. btrfs on Linux is improving but still carries the caveat that some RAID levels (RAID5/6) are not fully production-ready. For FreeBSD users, ZFS is the clear choice -- it is deeply integrated, well-tested, and the entire FreeBSD boot environment system depends on it.
FAQ
How do I check the health of my ZFS pool?
```sh
zpool status
```
Look for the state field. ONLINE means healthy. DEGRADED means a disk has failed but the pool is functional. FAULTED means the pool has lost too many disks and is offline. Run zpool scrub tank regularly (weekly or monthly) to verify all checksums and catch silent data corruption early.
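The state field is also easy to act on from a monitoring script. A sketch that maps the states described above to severities (the `health_severity` helper is hypothetical):

```sh
# health_severity: map a zpool health state to a monitoring severity
health_severity() {
    case "$1" in
        ONLINE)           echo ok ;;        # healthy
        DEGRADED)         echo warning ;;   # functional, redundancy reduced
        FAULTED|UNAVAIL)  echo critical ;;  # pool is not usable
        *)                echo unknown ;;
    esac
}

health_severity DEGRADED    # prints warning
# In a cron job: health_severity "$(zpool list -H -o health tank)"
```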
How much RAM does ZFS need on FreeBSD?
ZFS works on systems with 2 GB of RAM, but performance improves significantly with more. The ARC (read cache) is the primary consumer. For a file server, 1 GB of ARC per TB of storage is a useful starting guideline. For databases on ZFS, you may want to limit ARC size to leave room for application caches. Set vfs.zfs.arc_max in /boot/loader.conf to control this.
Should I use RAID-Z1, RAID-Z2, or mirrors?
RAID-Z2 is the most common recommendation for general use. It tolerates two simultaneous disk failures. RAID-Z1 is acceptable for small pools with fast rebuild times. Mirrors provide the best random read and rebuild performance but cost more per usable terabyte. For pools with 6+ disks, RAID-Z2 or RAID-Z3 is strongly recommended because the probability of a second disk failure during rebuild increases with pool size.
How do I automate ZFS snapshots?
Use zfs-periodic or the zfstools package:
```sh
pkg install zfstools
```
Or use a simple cron-based approach:
```sh
# /etc/cron.d/zfs-snapshots
0 * * * * root zfs snapshot -r tank@hourly-$(date +\%Y\%m\%d-\%H)
0 0 * * * root zfs snapshot -r tank@daily-$(date +\%Y\%m\%d)
0 0 * * 0 root zfs snapshot -r tank@weekly-$(date +\%Y\%m\%d)
```
Add cleanup to remove old snapshots:
```sh
# Keep 24 hourly, 30 daily, 12 weekly. FreeBSD head(1) does not accept
# negative counts, so sort newest-first and skip the snapshots we keep.
0 1 * * * root zfs list -H -t snapshot -o name | grep hourly- | sort -r | tail -n +25 | xargs -n1 zfs destroy
0 1 * * * root zfs list -H -t snapshot -o name | grep daily- | sort -r | tail -n +31 | xargs -n1 zfs destroy
0 1 * * * root zfs list -H -t snapshot -o name | grep weekly- | sort -r | tail -n +13 | xargs -n1 zfs destroy
```
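The keep-newest-N filtering in those cron lines can be checked in isolation. A sketch with sample names (the `keep_newest` helper is hypothetical; real input comes from `zfs list -H -t snapshot -o name`):

```sh
# keep_newest: print all but the newest N lines of a date-named
# snapshot list -- exactly the set a retention job would destroy.
# Zero-padded date names sort chronologically, so sort -r puts the
# newest first and tail skips the N snapshots being kept.
keep_newest() {
    sort -r | tail -n +"$(( $1 + 1 ))"
}

printf 'tank@daily-20260407\ntank@daily-20260408\ntank@daily-20260409\n' \
    | keep_newest 2    # prints only tank@daily-20260407
```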
What is the best ZFS recordsize for my workload?
The default 128K recordsize works well for general-purpose file storage. Specific workloads benefit from tuning: 8K for PostgreSQL, 16K for MySQL/InnoDB, 1M for large sequential files (media, backups), and 64K or 128K for virtual machine disk images. Set recordsize before writing data -- changing it only affects newly written blocks.
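Those guidelines can be captured in a small lookup when provisioning datasets (the `recsize_for` helper and its workload names are illustrative; the values come from the guidelines above):

```sh
# recsize_for: suggested recordsize per workload
recsize_for() {
    case "$1" in
        postgresql)    echo 8K ;;
        mysql|innodb)  echo 16K ;;
        vm)            echo 64K ;;
        media|backup)  echo 1M ;;
        *)             echo 128K ;;   # general-purpose default
    esac
}

recsize_for mysql    # prints 16K
# zfs create -o recordsize="$(recsize_for mysql)" tank/db
```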
How do I expand a ZFS pool?
You can add new vdevs to a pool, but you cannot add individual disks to an existing vdev (except to mirror):
```sh
# Add a new mirror vdev to an existing pool
zpool add tank mirror da4 da5

# Replace drives with larger ones (one at a time in a mirror)
zpool replace tank da0 da4   # wait for resilver
zpool replace tank da1 da5   # wait for resilver
zpool online -e tank da4     # expand to use new capacity
```
RAID-Z expansion (adding a disk to an existing RAID-Z vdev) arrived in OpenZFS 2.3. On pools with the raidz_expansion feature enabled, zpool attach tank raidz1-0 da4 adds a disk to the existing RAID-Z vdev.