FreeBSD ZFS: What's New and Coming in 2026
ZFS on FreeBSD has always been a first-class citizen. FreeBSD was the first operating system outside of Solaris to ship ZFS support, and that heritage continues to pay dividends. In 2026, the combination of OpenZFS 2.3.x, FreeBSD 14.2, and the forthcoming FreeBSD 15 brings a collection of features that fundamentally expand what ZFS can do -- dRAID for large-scale resilver acceleration, block cloning for instant file copies, RAIDZ expansion for adding disks to existing vdevs, and a steady stream of performance improvements that compound across releases. This review covers every significant change, explains what each means for FreeBSD administrators, and looks at the roadmap for the rest of the year.
The OpenZFS and FreeBSD Relationship
OpenZFS is the upstream project that provides ZFS for both FreeBSD and Linux. FreeBSD tracks OpenZFS closely, and the FreeBSD ZFS maintainers are active contributors to the OpenZFS codebase. Starting with FreeBSD 13, ZFS on FreeBSD switched from the legacy illumos-based code to OpenZFS, unifying the codebase across platforms. This means that features landing in OpenZFS are available on FreeBSD shortly after merge, and FreeBSD-specific optimizations flow upstream.
FreeBSD 14.x ships with OpenZFS 2.2.x. FreeBSD 15, expected later in 2026, will ship with OpenZFS 2.3.x. Some features from 2.3 have been backported to 14-STABLE. The practical result is that FreeBSD administrators running 14.2 or later already have access to several of the features discussed below.
dRAID: Distributed Spare RAID
dRAID is the most significant new vdev type since RAIDZ. Traditional RAIDZ rebuilds (resilvering) read from all surviving disks and write to the single replacement disk. The replacement disk becomes the bottleneck -- its write throughput limits the entire resilver. With large modern disks (20+ TB), resilvering a RAIDZ2 vdev can take days, leaving the pool in a degraded state for an uncomfortable duration.
dRAID distributes spare capacity across all disks in the vdev. When a disk fails, the resilver writes are spread across every remaining disk rather than concentrated on a single replacement. The result is dramatically faster resilver times -- often 10x to 20x faster than traditional RAIDZ.
How dRAID Works
A dRAID vdev is configured with a fixed number of data disks, parity level, spare capacity, and children (physical disks). The data and parity are distributed in fixed-width permutation groups across all children. Spare capacity is pre-allocated and distributed, meaning no physical hot spare disk sits idle.
Create a dRAID1 vdev with 10 disks and 1 distributed spare:
```sh
zpool create tank draid1:1s /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5 /dev/da6 /dev/da7 /dev/da8 /dev/da9
```
The 1s means one distributed spare. The resilver target is the distributed spare space, which spans all remaining disks.
When to Use dRAID
dRAID is designed for large pools with many disks -- typically 10 or more per vdev. For small pools (3-6 disks), traditional RAIDZ remains the better choice because dRAID's overhead in fixed-width stripe groups does not pay off at small scale. The sweet spot is storage servers with 20+ disks where resilver time is a critical concern.
Check dRAID status after a failure:
```sh
zpool status tank
zpool resilver tank
```
Block Cloning
Block cloning enables instant copy-on-write file copies within the same pool without duplicating the underlying data blocks. When a file is copied with a clone-aware copy operation (FreeBSD's cp uses copy_file_range(2), which ZFS satisfies with a block clone; GNU coreutils exposes the same behavior as cp --reflink), ZFS creates new metadata pointing to the same blocks. The blocks are only duplicated when one of the copies is modified.
This is not deduplication. Deduplication compares all writes against a global table and has significant memory and performance costs. Block cloning is explicit -- it only applies when the filesystem is told to make a reference copy. The overhead is near zero for normal operations.
Using Block Cloning on FreeBSD
Block cloning requires the block_cloning feature flag on the pool and, on FreeBSD, the vfs.zfs.bclone_enabled sysctl set to 1. Check your pool's feature flags:
```sh
zpool get all tank | grep block_cloning
```
Enable it if needed:
```sh
zpool set feature@block_cloning=enabled tank
```
Copy a file using reflink:
```sh
cp /tank/data/largefile.iso /tank/data/largefile-copy.iso
```

(FreeBSD's cp clones automatically via copy_file_range(2); on a system with GNU coreutils, use cp --reflink=auto.)
The copy completes instantly regardless of file size. Disk usage does not increase until one copy is modified.
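You can see how much space block cloning is saving poolwide. OpenZFS 2.2+ exposes clone accounting as zpool properties (names as in recent OpenZFS releases; verify with zpoolprops(7) on your system):

```sh
# Poolwide block-clone accounting: space referenced by clones,
# space saved, and the effective cloning ratio
zpool list -o name,bcloneused,bclonesaved,bcloneratio tank
```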
Practical Applications
Block cloning shines for:
- VM image management -- clone base images for new VMs without duplicating 50 GB disk images.
- Backup snapshots -- tools that copy snapshot data within the same pool can use block cloning to avoid data duplication.
- Build systems -- copy source trees for parallel builds without consuming additional space.
- Container images -- jail templates and Bastille containers benefit from instant cloning.
RAIDZ Expansion
RAIDZ expansion solves a problem that has frustrated ZFS administrators since the beginning: you could not add a disk to an existing RAIDZ vdev. If you had a 4-disk RAIDZ1 and wanted more space, your only option was to add an entirely new vdev or rebuild the pool. RAIDZ expansion allows you to attach an additional disk to an existing RAIDZ vdev, expanding its capacity in place.
How It Works
The expansion process redistributes data across the original disks plus the new disk. This is an online operation -- the pool remains available during expansion. The redistribution is checkpointed, so it survives reboots and power failures.
Add a disk to an existing RAIDZ1 vdev:
```sh
zpool attach tank raidz1-0 /dev/da4
```
Monitor the expansion progress:
```sh
zpool status tank
```
The expansion rewrites all data in the vdev to redistribute across the new disk layout. This takes time proportional to the amount of data stored, not the raw capacity. A half-full 4-disk RAIDZ1 expanding to 5 disks takes about as long as a resilver of the same data volume.
Limitations
- You can only add one disk at a time per vdev.
- You cannot remove a disk from a RAIDZ vdev (expansion is one-way).
- The expansion process consumes I/O bandwidth. Plan for degraded performance during the operation.
- You cannot change the parity level during expansion. A RAIDZ1 stays RAIDZ1.
Performance Improvements
OpenZFS 2.2 and 2.3 include a collection of performance improvements that individually are modest but collectively are significant.
ARC Improvements
The Adaptive Replacement Cache (ARC) has been refined with better eviction heuristics, reduced lock contention on multi-core systems, and improved handling of metadata-heavy workloads. On FreeBSD systems with 64+ GB of RAM dedicated to ARC, the improvements are measurable.
Check ARC statistics:
```sh
sysctl kstat.zfs.misc.arcstats
```
Key metrics to watch:
```sh
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.size
```
Prefetch Tuning
The speculative prefetch engine has been improved for sequential read workloads. Large file reads (backups, media streaming, database sequential scans) benefit from more aggressive and accurate prefetch. The default settings are good for most workloads, but you can tune them:
```sh
sysctl vfs.zfs.prefetch.disable=0
sysctl vfs.zfs.prefetch.max_distance=33554432
```
Write Throttle Improvements
The ZFS write throttle, which regulates how fast data enters the transaction group pipeline, has been refined to reduce latency spikes during heavy write bursts. This is particularly noticeable on pools backed by spinning disks where write bursts previously caused read latency to spike.
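The throttle's behavior is governed by the dirty-data tunables, which you can inspect before deciding whether to tune anything. The sysctl names below are the OpenZFS-on-FreeBSD spellings of zfs_dirty_data_max and zfs_delay_min_dirty_percent; confirm them on your release before relying on them:

```sh
# Maximum dirty (uncommitted) data allowed per pool, in bytes
sysctl vfs.zfs.dirty_data_max
# Percentage of dirty_data_max at which ZFS begins delaying writes
sysctl vfs.zfs.delay_min_dirty_percent
```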
Direct I/O
OpenZFS 2.3 introduces experimental direct I/O support, allowing applications to bypass the ARC for specific workloads. This is useful for databases like PostgreSQL that manage their own buffer cache and do not benefit from double caching in both the application and ZFS ARC.
Enable direct I/O on a dataset:
```sh
zfs set direct=standard tank/postgresql
```
This tells ZFS to honor O_DIRECT requests from applications, bypassing the ARC for those specific I/O operations while still using the ARC for metadata.
ZFS Send/Receive Improvements
ZFS send/receive is the backbone of ZFS-based backup and replication. Recent improvements include:
- Compressed send -- sends data in compressed form rather than decompressing and recompressing during transfer. This reduces CPU usage and network bandwidth.
- Raw send -- sends encrypted datasets without decrypting, enabling secure replication where the receiving side never has access to the decryption key.
- Resumable send/receive -- if a send/receive operation is interrupted, it can be resumed from where it left off rather than starting over.
Send a compressed, resumable snapshot:
```sh
zfs send -c -v tank/data@snap1 | ssh backup-server zfs receive -s backup/data
```
Resume an interrupted receive:
```sh
zfs send -t <resume_token> | ssh backup-server zfs receive -s backup/data
```

(With -t, the send options are encoded in the token itself, so no snapshot name or -c flag is needed.)
Get the resume token from the receiving side:
```sh
zfs get receive_resume_token backup/data
```
Encryption Updates
ZFS native encryption continues to mature. In 2026, the practical status is:
- AES-256-GCM encryption with key wrapping works reliably for data-at-rest protection.
- Encrypted datasets can be sent/received in raw mode without exposing plaintext.
- Key management integrates with passphrase prompts or key files.
Create an encrypted dataset:
```sh
zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase tank/encrypted
```
Load the key at boot or on demand:
```sh
zfs load-key tank/encrypted
zfs mount tank/encrypted
```
The main caveat remains that encryption is per-dataset, not per-pool. You must plan your dataset hierarchy to match your encryption boundaries.
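In practice this means grouping everything that shares a key under one encryption root, since child datasets inherit encryption from their parent. A sketch of such a layout (dataset names are illustrative):

```sh
# One encrypted parent; children created under it inherit its encryption
# and share its encryption root, so a single key unlocks the whole subtree
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt tank/secure
zfs create tank/secure/mail       # inherits encryption from tank/secure
zfs create tank/secure/documents  # same encryption root, one key to manage
```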
FreeBSD-Specific Integration
Boot Environments
FreeBSD's boot environment system, powered by bectl, leverages ZFS snapshots and clones to provide safe system upgrades. Create a boot environment before upgrading:
```sh
bectl create pre-upgrade
freebsd-update fetch install
```
If the upgrade fails, reboot and select the previous boot environment from the loader menu. This is one of the most practical ZFS features for FreeBSD system administration.
List and manage boot environments:
```sh
bectl list
bectl activate pre-upgrade
bectl destroy failed-upgrade
```
Jail Integration
ZFS datasets map naturally to FreeBSD jails. Each jail gets its own dataset with independent snapshots, quotas, and compression settings:
```sh
zfs create -o quota=20G -o compression=zstd tank/jails/webserver
```
Jail managers like Bastille and iocage use ZFS datasets as their default storage backend on FreeBSD.
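The same snapshot-and-clone workflow those managers automate can be done by hand. A sketch, with illustrative dataset names, of stamping out a new jail from a prepared template:

```sh
# Snapshot a prepared template dataset once...
zfs snapshot tank/jails/template@base
# ...then clone it for each new jail; the clone is instant and
# shares blocks with the template until either side diverges
zfs clone tank/jails/template@base tank/jails/web2
```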
Tuning for FreeBSD
FreeBSD-specific ZFS tuning via sysctl and /boot/loader.conf:
```sh
# Set ARC max to 8 GB (in loader.conf)
vfs.zfs.arc_max="8589934592"
# Enable compressed ARC (stores compressed data in ARC, saving RAM)
vfs.zfs.compressed_arc_enabled=1
# Tune transaction group timeout for latency-sensitive workloads
vfs.zfs.txg.timeout=5
```
Add these to /boot/loader.conf for persistence across reboots.
The 2026 Roadmap
Looking ahead for the rest of 2026:
- FreeBSD 15 release -- will ship with OpenZFS 2.3.x, making all features discussed here available out of the box without backports.
- Persistent L2ARC -- the L2ARC (second-level cache on SSDs) survives reboots, eliminating the cold cache problem after restarts. This already works on FreeBSD with current OpenZFS releases; expect refinement rather than new functionality here.
- Continued dRAID maturation -- dRAID is expected to gain additional tooling and monitoring improvements.
- Direct I/O stabilization -- the direct I/O feature will move from experimental to stable.
- Improved TRIM handling -- better SSD TRIM support for pools on solid-state storage.
Migration Considerations
If you are running older FreeBSD with the legacy ZFS (pre-OpenZFS), upgrading requires pool feature flag upgrades:
```sh
zpool upgrade tank   # upgrade a single pool
zpool upgrade -a     # or upgrade all pools at once
```
Check available feature flags before upgrading:
```sh
zpool get all tank | grep feature@
```
Be aware that enabling new feature flags is a one-way operation. Once enabled, older ZFS implementations cannot import the pool. Plan your upgrade path carefully if you need backward compatibility.
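If you need a pool to remain importable by an older implementation, the compatibility pool property (OpenZFS 2.1+) restricts which feature flags zpool upgrade will enable. A sketch; the exact feature-set file names live in /usr/share/zfs/compatibility.d and vary by release:

```sh
# Pin the pool to a feature set an OpenZFS 2.1 system can still import
zpool set compatibility=openzfs-2.1-freebsd tank
# Confirm the restriction is in place
zpool get compatibility tank
```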
Verdict
ZFS on FreeBSD in 2026 is the most capable it has ever been. dRAID addresses the resilver time problem that limited ZFS adoption for large-scale storage. Block cloning eliminates unnecessary data duplication for common copy operations. RAIDZ expansion removes one of the longest-standing operational limitations of ZFS. The steady accumulation of performance improvements makes the entire stack faster without requiring configuration changes.
For FreeBSD administrators, ZFS remains the default filesystem choice. The integration with boot environments, jails, and the FreeBSD toolchain is seamless. The only scenarios where UFS still makes sense are embedded systems with very limited RAM (under 512 MB) where ZFS's memory requirements are genuinely constraining.
If you are running FreeBSD 14.2, you already have most of these features. If you are waiting for FreeBSD 15, you will get them all with a fresh install. Either way, 2026 is a strong year for ZFS on FreeBSD.
Frequently Asked Questions
How much RAM does ZFS need on FreeBSD in 2026?
The minimum practical amount is 2 GB, with 1 GB dedicated to ARC. For production servers, 8 GB or more is recommended. ZFS will use available RAM for ARC by default -- this is by design and improves performance. You can cap ARC size with vfs.zfs.arc_max in /boot/loader.conf.
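The loader.conf setting takes effect at boot; the same tunable can also be changed on a running system, which is handy for experimenting before committing a value (the runtime change does not persist across reboots):

```sh
# Cap ARC at 8 GiB immediately, without a reboot
sysctl vfs.zfs.arc_max=8589934592
```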
Can I use dRAID on an existing pool?
No. dRAID is a vdev type that must be specified at pool creation. You cannot convert an existing RAIDZ vdev to dRAID. You can add a new dRAID vdev to an existing pool alongside existing RAIDZ vdevs, but the existing vdevs remain unchanged.
Is block cloning the same as deduplication?
No. Deduplication compares all incoming writes against a global dedup table and eliminates duplicates automatically. It has significant RAM requirements (roughly 5 GB per TB of data) and can degrade write performance. Block cloning is explicit -- it only creates shared block references when specifically requested via reflink copy. Block cloning has near-zero overhead for normal operations.
Should I enable RAIDZ expansion on production pools?
Yes, but plan for the I/O impact. The expansion process rewrites data across the new disk layout and consumes bandwidth. Run expansion during maintenance windows or low-traffic periods. The operation is safe -- it checkpoints progress and survives interruptions.
Does ZFS native encryption affect performance?
Yes, modestly. AES-256-GCM encryption adds CPU overhead of roughly 5-15% depending on workload and whether your CPU supports AES-NI hardware acceleration. All modern x86 CPUs support AES-NI, so the practical impact is small. The bigger consideration is key management complexity, not performance.
Can I use ZFS on FreeBSD with SSDs only?
Yes, and it works well. Enable automatic TRIM with the autotrim pool property (zpool set autotrim=on; the vfs.zfs.trim.enabled sysctl belongs to the pre-OpenZFS code) and consider setting ashift=12 or ashift=13 at pool creation to align with SSD sector sizes. All-SSD pools benefit significantly from the ARC and L2ARC improvements in recent OpenZFS releases.
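Putting both recommendations together, an all-SSD pool might be created like this (device names are illustrative; nda devices are FreeBSD's NVMe disks):

```sh
# 4K alignment (ashift=12) plus automatic TRIM on a mirrored NVMe pool
zpool create -o ashift=12 -o autotrim=on fastpool mirror /dev/nda0 /dev/nda1
```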