How to Set Up ZFS Send/Receive for Remote Replication
ZFS send/receive is the native way to replicate data between FreeBSD systems. It works at the block level, is incremental, and preserves snapshots along with ZFS properties such as compression settings and ACLs. Unlike file-level tools like rsync, ZFS replication transfers only the changed blocks between two snapshots, making it efficient even for multi-terabyte datasets.
This guide covers manual send/receive operations, SSH transport configuration, incremental replication, automated replication with syncoid, monitoring, and disaster recovery procedures.
How ZFS Send/Receive Works
The mechanism is straightforward:
- `zfs send` serializes a snapshot (or the difference between two snapshots) into a data stream
- That stream is piped to `zfs receive` on the target, which reconstructs the dataset
The stream contains raw ZFS data blocks. This means:
- All file data, metadata, and ZFS properties are preserved
- Compression settings, ACLs, and other dataset properties come along
- The target gets an exact replica of the source dataset at that point in time
- Incremental sends only transfer blocks that changed between two snapshots
Prerequisites
Source Server
A FreeBSD system with ZFS datasets you want to replicate. This guide uses zroot/data as the example dataset.
Target Server
A FreeBSD system with a ZFS pool to receive the replicated data. The pool does not need to be the same size or use the same vdev configuration, but it must have enough free space.
SSH Access
The source must be able to SSH into the target (or vice versa). Set up key-based authentication:
```sh
ssh-keygen -t ed25519 -f /root/.ssh/zfs_replication -N ""
ssh-copy-id -i /root/.ssh/zfs_replication.pub root@target-server.example.com
```
Test the connection:
```sh
ssh -i /root/.ssh/zfs_replication root@target-server.example.com "zpool status"
```
Manual Send/Receive: Full Replication
Create an Initial Snapshot
On the source server:
```sh
zfs snapshot -r zroot/data@repl-2026-04-09
```
The -r flag creates recursive snapshots of all child datasets.
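Date-stamped snapshot names like the one above are easy to generate consistently with a small helper. A minimal sketch (the `snapname` function is hypothetical, not part of ZFS):

```sh
# Hypothetical helper that builds dated snapshot names like repl-2026-04-09.
snapname() {
    printf 'repl-%s' "$(date +%Y-%m-%d)"
}

# Usage (assumes the zroot/data dataset from this guide):
# zfs snapshot -r "zroot/data@$(snapname)"
```

Generating the name in one place keeps source and target snapshot names in lockstep, which matters later when computing incrementals.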
Send the Full Snapshot to a Remote Target
```sh
zfs send -R zroot/data@repl-2026-04-09 | \
    ssh -i /root/.ssh/zfs_replication root@target-server.example.com \
    "zfs receive -Fduv zroot/backup"
```
Flags explained:
- `-R` (replication stream): sends all snapshots up to and including the specified one, plus all properties and child datasets
- `-F` on receive: forces a rollback on the target if needed
- `-d` on receive: strips the source pool name and uses the rest of the path. `zroot/data` becomes `zroot/backup/data`
- `-u` on receive: does not mount the received dataset (useful for backup targets)
- `-v` on receive: verbose output showing progress
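The path rewriting done by -d can be illustrated with plain shell string handling (the variables here are only for illustration):

```sh
# With -d, zfs receive strips the source pool name ("zroot") and grafts
# the remaining path under the dataset given to zfs receive.
src="zroot/data/projects"      # dataset named in the send stream
target="zroot/backup"          # dataset given to zfs receive
echo "${target}/${src#*/}"     # prints: zroot/backup/data/projects
```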
Verify the Replication
On the target:
```sh
zfs list -r zroot/backup
zfs list -t snapshot -r zroot/backup
```
You should see the same datasets and snapshots as the source.
Incremental Send/Receive
After the initial full replication, subsequent sends should be incremental. This transfers only the blocks that changed between two snapshots.
Create a New Snapshot on the Source
```sh
zfs snapshot -r zroot/data@repl-2026-04-10
```
Send the Incremental Difference
```sh
zfs send -R -i zroot/data@repl-2026-04-09 zroot/data@repl-2026-04-10 | \
    ssh -i /root/.ssh/zfs_replication root@target-server.example.com \
    "zfs receive -Fduv zroot/backup"
```
The -i flag tells ZFS to send only the changes between the two snapshots. If 50GB of data exists but only 500MB changed, only 500MB is transferred.
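Before sending, you can ask zfs send for a size estimate with a dry run, which helps predict transfer time. A sketch (the sample output line is illustrative; the exact wording can vary between OpenZFS versions):

```sh
# A dry run (-n) with verbose output (-v) prints an estimate without sending:
#   zfs send -nv -i zroot/data@repl-2026-04-09 zroot/data@repl-2026-04-10
# The final line resembles the sample below; awk pulls out the size field.
sample='total estimated size is 512M'
echo "$sample" | awk '{print $NF}'    # prints: 512M
```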
Verify
On the target:
```sh
zfs list -t snapshot -r zroot/backup/data
```
Both snapshots should be present.
Optimizing Transfer Speed
Compression in Transit
ZFS send streams are often compressible. Use SSH compression or pipe through a fast compressor:
```sh
zfs send -R -i zroot/data@snap1 zroot/data@snap2 | \
    zstd -1 | \
    ssh -i /root/.ssh/zfs_replication root@target "zstd -d | zfs receive -Fduv zroot/backup"
```
Install zstd on both servers:
```sh
pkg install zstd
```
Use a Faster Cipher
SSH's default cipher may bottleneck high-bandwidth links. Use a faster one:
```sh
zfs send -R -i @snap1 zroot/data@snap2 | \
    ssh -c aes128-gcm@openssh.com -i /root/.ssh/zfs_replication \
    root@target "zfs receive -Fduv zroot/backup"
```
Use mbuffer for Buffering
Network latency can cause stalls in the send/receive pipeline. mbuffer smooths this out:
```sh
pkg install mbuffer
```

```sh
zfs send -R -i @snap1 zroot/data@snap2 | \
    mbuffer -s 128k -m 1G | \
    ssh -i /root/.ssh/zfs_replication root@target \
    "mbuffer -s 128k -m 1G | zfs receive -Fduv zroot/backup"
```
Use Raw Sends for Encrypted Datasets
If your source datasets are ZFS-encrypted, use raw sends to transfer the encrypted blocks directly:
```sh
zfs send --raw -R -i @snap1 zroot/data@snap2 | \
    ssh root@target "zfs receive -Fduv zroot/backup"
```
Raw sends preserve encryption. The target does not need the encryption key to receive the data.
Automating with Syncoid
Manually managing snapshots and send/receive is tedious. Syncoid (part of the sanoid/syncoid suite) automates the entire process.
Install Sanoid/Syncoid
```sh
pkg install sanoid
```
This installs both sanoid (snapshot management) and syncoid (replication).
Basic Syncoid Usage
Replicate a dataset with a single command:
```sh
syncoid zroot/data root@target-server.example.com:zroot/backup/data
```
Syncoid automatically:
- Creates a snapshot on the source
- Determines the last common snapshot between source and target
- Sends an incremental stream
- Cleans up temporary snapshots
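For several datasets, syncoid can be wrapped in a small loop. A sketch assuming the datasets and target used elsewhere in this guide (the wrapper itself is hypothetical, not part of syncoid):

```sh
#!/bin/sh
# Hypothetical wrapper: replicate a list of datasets to the backup target.
TARGET="root@target-server.example.com"

for ds in zroot/data zroot/jails; do
    # Mirror zroot/data -> zroot/backup/data, zroot/jails -> zroot/backup/jails
    dest="zroot/backup/${ds#zroot/}"
    /usr/local/bin/syncoid --sshkey /root/.ssh/zfs_replication \
        "$ds" "${TARGET}:${dest}" || echo "syncoid failed for $ds" >&2
done
```

Logging failures instead of aborting lets the remaining datasets still replicate when one transfer fails.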
Recursive Replication
```sh
syncoid -r zroot/data root@target-server.example.com:zroot/backup/data
```
Using a Specific SSH Key
```sh
syncoid --sshkey /root/.ssh/zfs_replication \
    zroot/data \
    root@target-server.example.com:zroot/backup/data
```
Syncoid with Compression
```sh
syncoid --compress=zstd-fast \
    --sshkey /root/.ssh/zfs_replication \
    zroot/data \
    root@target-server.example.com:zroot/backup/data
```
Scheduling Syncoid with Cron
Add to root's crontab for hourly replication:
```sh
crontab -e
```

```sh
# Replicate zroot/data every hour
0 * * * * /usr/local/bin/syncoid --sshkey /root/.ssh/zfs_replication --no-sync-snap zroot/data root@target:zroot/backup/data >> /var/log/syncoid.log 2>&1

# Recursive replication of jails every 6 hours
0 */6 * * * /usr/local/bin/syncoid -r --sshkey /root/.ssh/zfs_replication zroot/jails root@target:zroot/backup/jails >> /var/log/syncoid.log 2>&1
```
Snapshot Management with Sanoid
Sanoid handles automated snapshot creation and pruning, complementing syncoid's replication.
Configure Sanoid
Create /usr/local/etc/sanoid/sanoid.conf:
```sh
mkdir -p /usr/local/etc/sanoid
cat > /usr/local/etc/sanoid/sanoid.conf <<'EOF'
[zroot/data]
use_template = production
recursive = yes

[template_production]
frequently = 0
hourly = 24
daily = 30
monthly = 12
yearly = 2
autosnap = yes
autoprune = yes
EOF
```
Schedule Sanoid
```sh
crontab -e
```

```sh
# Run sanoid every 15 minutes for snapshot management
*/15 * * * * /usr/local/bin/sanoid --cron >> /var/log/sanoid.log 2>&1
```
Monitoring Replication
Check Replication Lag
Compare the latest snapshot on source and target:
On the source:
```sh
zfs list -t snapshot -o name,creation -s creation -r zroot/data | tail -3
```
On the target:
```sh
zfs list -t snapshot -o name,creation -s creation -r zroot/backup/data | tail -3
```
Script to Alert on Replication Lag
Create /usr/local/sbin/check-replication.sh:
```sh
#!/bin/sh
# /usr/local/sbin/check-replication.sh
# Alert if replication is more than 2 hours behind

DATASET="zroot/backup/data"
MAX_AGE=7200   # 2 hours in seconds
TARGET_HOST="target-server.example.com"

# -H suppresses headers, -p prints the creation time as epoch seconds,
# so no fragile date parsing is needed
snap_epoch=$(ssh -i /root/.ssh/zfs_replication root@${TARGET_HOST} \
    "zfs list -t snapshot -Hp -o creation -s creation -r ${DATASET} | tail -1")

now_epoch=$(date "+%s")
age=$((now_epoch - snap_epoch))

if [ "$age" -gt "$MAX_AGE" ]; then
    echo "ALERT: Replication lag is ${age} seconds ($(( age / 3600 )) hours)" | \
        mail -s "ZFS Replication Lag: $(hostname)" admin@example.com
fi
```
Schedule it:
```sh
0 * * * * /usr/local/sbin/check-replication.sh
```
Syncoid Log Analysis
Review syncoid output for errors:
```sh
tail -100 /var/log/syncoid.log
```
Common issues:
- "No matching snapshots" -- initial full send needed
- "Cannot receive incremental" -- common snapshot was destroyed, re-seed needed
- "Dataset is busy" -- target dataset is mounted or in use
Disaster Recovery
Failing Over to the Backup
If the primary server is lost, the backup target has your data. To make it usable:
On the target:
```sh
# Set the mountpoint
zfs set mountpoint=/data zroot/backup/data

# Clear the read-only flag, if one was set on the backup target
zfs set readonly=off zroot/backup/data

# Mount the datasets
zfs mount -a
```
Your data is now accessible at /data on the backup server.
Reseeding After a Failover
After rebuilding the primary, reverse the replication direction temporarily, then switch back:
```sh
# From the (new) primary, pull from the backup target
syncoid --sshkey /root/.ssh/zfs_replication \
    root@target-server.example.com:zroot/backup/data \
    zroot/data
```
Once fully synced, switch replication back to the normal direction.
Handling Diverged Datasets
If both source and target have diverged (both received writes), you cannot do a simple incremental send. You have two options:
- Rollback the target to the last common snapshot and resend:
```sh
# On target (-r also destroys any target snapshots newer than the common one)
zfs rollback -r zroot/backup/data@last-common-snapshot

# From source
zfs send -R -i @last-common-snapshot zroot/data@current | \
    ssh root@target "zfs receive -Fduv zroot/backup"
```
- Full reseed: Destroy the target dataset and do a full send again. This is the nuclear option for when things have diverged badly.
Security Considerations
Restrict SSH on the Target
On the target server, create a restricted user for replication:
```sh
pw useradd -n zfsrepl -m -s /bin/sh
```
Allow this user to run only ZFS receive commands. Add to /usr/local/etc/sudoers.d/zfsrepl:
```sh
zfsrepl ALL=(root) NOPASSWD: /sbin/zfs receive *
zfsrepl ALL=(root) NOPASSWD: /sbin/zfs list *
zfsrepl ALL=(root) NOPASSWD: /sbin/zfs get *
```
In authorized_keys, restrict the command:
```sh
command="sudo /sbin/zfs receive -Fduv zroot/backup",restrict ssh-ed25519 AAAA... root@source
```
Append-Only Target
Use zfs allow to delegate only receive permissions:
```sh
zfs allow -u zfsrepl create,mount,receive,rollback zroot/backup
```
This ensures the replication user cannot destroy datasets on the target.
FAQ
How much bandwidth does ZFS replication use?
Incremental sends transfer only changed blocks. A dataset with 1% daily change rate and 1TB total data sends roughly 10GB per replication cycle. Note that zfs send does not compress the stream itself unless you pass -c/--compressed -- use SSH compression or pipe through zstd as shown earlier.
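That estimate is simple arithmetic; a sketch (the 1% change rate is an assumption you should replace with your own measurements):

```sh
# Back-of-envelope daily transfer: total size * daily change rate.
total_gb=1024     # 1 TB dataset
change_pct=1      # assumed 1% of blocks change per day
daily_gb=$(( total_gb * change_pct / 100 ))
echo "${daily_gb} GB per daily incremental"    # prints: 10 GB per daily incremental
```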
Can I replicate between different ZFS pool layouts?
Yes. The source could be a mirror and the target a raidz2. ZFS send/receive works at the dataset level, not the pool level. Pool geometry does not matter.
What happens if a replication is interrupted?
An interrupted receive leaves partial state on the target. If the receive was started with the -s flag, the target saves a resume token (readable via the receive_resume_token property), and you can restart the transfer from where it stopped with zfs send -t:
```sh
token=$(ssh root@target "zfs get -H -o value receive_resume_token zroot/backup/data")
zfs send -t "$token" | ssh root@target "zfs receive -s zroot/backup/data"
```
How do I handle replication of encrypted datasets?
Use --raw sends. The encrypted blocks are transferred as-is. The target does not need the encryption key to store the data, only to mount and read it.
Can I replicate to a cloud server?
Yes, as long as the cloud server runs FreeBSD (or any OS with ZFS). Use SSH transport as described above. Consider using zstd compression and mbuffer for WAN links.
How is this different from BorgBackup?
ZFS send/receive is block-level replication that preserves ZFS properties and is very fast for large datasets. BorgBackup is file-level, deduplicating, and works on any filesystem. Use ZFS send/receive for ZFS-to-ZFS replication; use BorgBackup for file-level backups to non-ZFS targets.
How often should I replicate?
Depends on your RPO (Recovery Point Objective). Hourly replication means at most 1 hour of data loss. For critical data, replicate every 15 minutes. Syncoid handles this efficiently since only changed blocks are transferred.