NFS on FreeBSD: Network File System Review
NFS is one of those technologies that FreeBSD does better than almost any other operating system. While NFS originates from Sun Microsystems and has deep roots in the Solaris world, FreeBSD's NFS implementation is among the most mature and performant available. It is part of the base system -- no packages needed, no ports to build, no dependencies to manage. Combined with ZFS, FreeBSD provides what might be the best NFS server platform outside of enterprise storage appliances (which, not coincidentally, often run FreeBSD internally).
This review covers NFS on FreeBSD from both server and client perspectives, with a focus on NFSv4, ZFS integration, Kerberos security, performance characteristics, and how it compares to Samba for network file sharing.
NFS Versions on FreeBSD
FreeBSD supports NFSv3 and NFSv4 (including v4.1 and v4.2). The key differences:
NFSv3:
- Stateless protocol (simpler, more resilient to server crashes)
- Uses portmapper, mountd, and lockd (multiple ports)
- Authentication by IP address and UID/GID mapping
- No built-in encryption
NFSv4:
- Stateful protocol with lease-based locking
- Single port (2049) -- firewall-friendly
- Kerberos support for authentication and encryption
- Pseudo-filesystem (single export tree)
- Better cross-platform interoperability
- ACL support
For new deployments, NFSv4 is the recommended choice. The single-port design simplifies firewall configuration, and Kerberos support addresses NFSv3's fundamental security weakness (IP-based trust).
Server Configuration
Basic NFSv4 Server
Setting up a basic NFS server on FreeBSD:
```sh
# Enable NFS services
sysrc nfs_server_enable="YES"
sysrc nfsv4_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"

# Set the number of nfsd threads (default is often too low)
sysrc nfs_server_flags="-u -t -n 16"
# -u: UDP (for NFSv3 compat)
# -t: TCP
# -n 16: 16 server threads (adjust based on load)
```
Export Configuration
The /etc/exports file defines what is shared:
```sh
# /etc/exports

# Basic NFSv4 export
# The V4 root defines the base of the pseudo-filesystem
V4: /exports

# Export a directory to a subnet
/exports/data -network 10.0.0.0 -mask 255.255.255.0

# Export with specific options
/exports/home -maproot=root -network 10.0.0.0/24

# Export read-only
/exports/iso -ro -network 10.0.0.0/24

# Export to specific hosts
/exports/backup -alldirs 10.0.0.10 10.0.0.11 10.0.0.12
```
Export options explained:
- -alldirs: Allow mounting any subdirectory within the export
- -maproot=root: Map root on the client to root on the server (use carefully)
- -mapall=nobody: Map all client users to nobody (more secure)
- -ro: Read-only export
- -network/-mask: Restrict access by network
Apply export changes:
```sh
# Reload exports without restarting
service mountd reload

# Verify current exports
showmount -e localhost
```

(Note that `exportfs` is a Linux command; on FreeBSD, reloading mountd re-reads the exports file.)
Starting the Server
```sh
service rpcbind start
service nfsd start
service mountd start

# Verify services are running
rpcinfo -p localhost
```
Client Configuration
Mounting NFS Shares
Basic client setup:
```sh
# Enable NFS client
sysrc nfs_client_enable="YES"

# Start the client services
service nfsclient start
```
Mount an NFS share:
```sh
# Mount NFSv4
# NFSv4 paths are relative to the V4: root in /etc/exports,
# so with "V4: /exports" the v4 path to /exports/data is /data
mount -t nfs -o nfsv4 server:/data /mnt/data

# Mount NFSv3 (uses the full server path)
mount -t nfs server:/exports/data /mnt/data

# Mount with specific options
mount -t nfs -o nfsv4,rw,soft,intr,rsize=1048576,wsize=1048576 \
    server:/data /mnt/data
```
Persistent Mounts via fstab
```sh
# /etc/fstab entries
# (NFSv4 paths are relative to the V4: root in /etc/exports)

# NFSv4 mount
server:/data  /mnt/data  nfs  rw,nfsv4,late  0  0

# NFSv4 mount with performance options
server:/data  /mnt/data  nfs  rw,nfsv4,rsize=1048576,wsize=1048576,late  0  0

# Soft mount (returns errors instead of hanging if server is unavailable)
server:/data  /mnt/data  nfs  rw,nfsv4,soft,intr,late  0  0
```
The late option is important on FreeBSD -- it delays the mount until after the network is up, preventing boot hangs when the NFS server is unreachable.
Autofs
For dynamic mounting on demand:
```sh
# Enable autofs
sysrc autofs_enable="YES"

# Configure the map files
# /etc/auto_master
/mnt/nfs    /etc/auto_nfs

# /etc/auto_nfs
data    -fstype=nfs,nfsv4,rw    server:/data
home    -fstype=nfs,nfsv4,rw    server:/home

# Start autofs
service automount restart
service automountd restart
service autounmountd restart

# Access triggers the mount
ls /mnt/nfs/data    # Mounts automatically
```
ZFS Exports
Exporting ZFS datasets over NFS is where FreeBSD truly shines. ZFS has built-in NFS sharing support:
```sh
# Enable NFS sharing on a ZFS dataset
zfs set sharenfs="on" tank/data

# Share with specific options
zfs set sharenfs="-network 10.0.0.0/24 -maproot=root" tank/data

# Share read-only
zfs set sharenfs="-ro -network 10.0.0.0/24" tank/archives

# Verify the share
zfs get sharenfs tank/data
showmount -e localhost
```
ZFS's sharenfs property automatically manages /etc/zfs/exports, which mountd reads in addition to /etc/exports. This means creating a new dataset and sharing it is a single command:
```sh
# Create and share in one step
zfs create -o sharenfs="-network 10.0.0.0/24" tank/newshare
```
ZFS + NFS Best Practices
```sh
# Set recordsize to match the NFS workload
# For general file serving
zfs set recordsize=128K tank/fileserver

# For databases accessed over NFS
zfs set recordsize=16K tank/database

# Disable atime for NFS exports (reduces write overhead)
zfs set atime=off tank/data

# Enable compression (reduces network and disk I/O)
zfs set compression=lz4 tank/data

# For NFS-heavy workloads, tune the ZFS ARC
# In /boot/loader.conf
vfs.zfs.arc_max="8G"    # Adjust based on available RAM
```
Snapshots Over NFS
ZFS snapshots are accessible through the .zfs/snapshot directory, which is visible to NFS clients:
```sh
# On the server
zfs snapshot tank/data@daily-$(date +%Y%m%d)

# On the client, users can access snapshots directly
ls /mnt/data/.zfs/snapshot/
# Shows: daily-20260408 daily-20260407 ...

# Restore a file from a snapshot
cp /mnt/data/.zfs/snapshot/daily-20260408/important.doc /mnt/data/
```
This gives NFS clients self-service file recovery without needing server access.
Kerberos Authentication
NFSv4 with Kerberos provides proper authentication and optional encryption. This transforms NFS from a trust-based system to a cryptographically secured one.
Prerequisites
```sh
# No Kerberos install needed: FreeBSD ships Heimdal in the base system
# (MIT Kerberos is available from packages if preferred)

# Ensure time synchronization (Kerberos requires it)
sysrc ntpd_enable="YES"
service ntpd start
```
KDC Setup (Abbreviated)
If you do not already have a Kerberos realm:
```sh
# Initialize the KDC
kstash    # Set the master password
kadmin -l init EXAMPLE.COM

# Create NFS service principals
kadmin -l add -r nfs/server.example.com
kadmin -l add -r nfs/client.example.com

# Extract keytabs
kadmin -l ext_keytab -k /etc/krb5.keytab nfs/server.example.com
kadmin -l ext_keytab -k /tmp/client.keytab nfs/client.example.com
# Copy client.keytab to the client's /etc/krb5.keytab
```
Server Configuration for Kerberos
```sh
# Enable GSS security
sysrc gssd_enable="YES"
sysrc nfsuserd_enable="YES"
service gssd start
service nfsuserd start

# Update exports for Kerberos
# /etc/exports
V4: /exports -sec=krb5:krb5i:krb5p
/exports/data -sec=krb5p -network 10.0.0.0/24

# krb5:  Authentication only
# krb5i: Authentication + integrity checking
# krb5p: Authentication + integrity + privacy (encryption)
```
Client Configuration for Kerberos
```sh
# Enable GSS on the client
sysrc gssd_enable="YES"
sysrc nfsuserd_enable="YES"
service gssd start
service nfsuserd start

# Mount with Kerberos security
mount -t nfs -o nfsv4,sec=krb5p server:/data /mnt/data
```
Performance Tuning
NFS performance on FreeBSD benefits from several tunable parameters.
Server Tuning
```sh
# Increase nfsd threads
# Default is often 4; increase based on client count and load
sysrc nfs_server_flags="-u -t -n 32"

# In /etc/sysctl.conf for persistent tuning
# Increase socket buffer sizes
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# NFS-specific sysctls
vfs.nfsd.tcphighwater=16    # TCP duplicate request cache trimming threshold
```
Client Tuning
```sh
# Mount with optimized buffer sizes
mount -t nfs -o nfsv4,rsize=1048576,wsize=1048576 server:/data /mnt/data
# rsize/wsize: Read/write buffer size (max 1MB for NFSv4)
# Larger values improve throughput for sequential I/O

# In /etc/sysctl.conf
vfs.nfs.iodmax=16    # Max nfsiod threads for async I/O
```
Network Tuning
```sh
# For 10GbE or faster networks
# /etc/sysctl.conf
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576
kern.ipc.maxsockbuf=16777216

# Enable jumbo frames (if the switch supports them)
ifconfig em0 mtu 9000

# /boot/loader.conf
cc_htcp_load="YES"    # Load the H-TCP congestion control module

# Then select it in /etc/sysctl.conf
net.inet.tcp.cc.algorithm=htcp
```
Benchmarking
```sh
# Simple sequential write test
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=sync

# Sequential read test
dd if=/mnt/nfs/testfile of=/dev/null bs=1M

# More thorough testing with fio
pkg install fio

# Sequential write
fio --name=seqwrite --directory=/mnt/nfs --rw=write --bs=1M \
    --size=1G --numjobs=1 --direct=1

# Random read (simulates typical file server workload)
fio --name=randread --directory=/mnt/nfs --rw=randread --bs=4K \
    --size=1G --numjobs=4 --direct=1

# Mixed workload
fio --name=mixed --directory=/mnt/nfs --rw=randrw --rwmixread=70 \
    --bs=8K --size=1G --numjobs=4 --direct=1
```
Typical performance on modern hardware (1GbE network):
- Sequential read: 110-115 MB/s (near wire speed)
- Sequential write: 100-110 MB/s
- Random 4K read: 5,000-15,000 IOPS (depends heavily on storage)
On 10GbE with ZFS on NVMe:
- Sequential read: 800-1100 MB/s
- Sequential write: 600-900 MB/s
NFS vs Samba on FreeBSD
Samba (SMB/CIFS) is the alternative for network file sharing, particularly in mixed environments with Windows clients.
When to Choose NFS
- All-Unix/BSD/Linux environment
- High-performance requirements (NFS is lighter than SMB)
- ZFS integration (native sharenfs property)
- Simplicity (NFS is part of the FreeBSD base system)
- Diskless boot (FreeBSD supports NFS root)
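The diskless case deserves a note: a full setup also involves DHCP, TFTP, and pxeboot (not shown here), but the export side is ordinary NFS. A hypothetical export for a single diskless client might look like:

```sh
# /etc/exports
# Hypothetical root filesystem export for one diskless client;
# the client's kernel mounts this as / at boot
/exports/diskless -maproot=root -alldirs 10.0.0.50
```

The -maproot=root mapping is required here because the booting client performs root-owned operations on its root filesystem.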
When to Choose Samba
- Windows clients need access
- Active Directory integration required
- macOS Time Machine backups (Samba supports this natively)
- Printer sharing needed
- Mixed Windows/Unix environment
Performance Comparison
On FreeBSD, NFS typically outperforms Samba by 10-30% for sequential operations and significantly more for small file operations. NFS's lighter protocol overhead and kernel-level implementation (versus Samba's userspace daemon) give it an inherent advantage.
```sh
# Install Samba for comparison
pkg install samba416

# Samba requires more configuration:
# /usr/local/etc/smb4.conf must be created and configured,
# and Active Directory or local user authentication set up
```
Using Both
It is common to serve the same data over both NFS and Samba:
```sh
# ZFS dataset shared via NFS
zfs set sharenfs="-network 10.0.0.0/24" tank/shared

# Same dataset shared via Samba
# In /usr/local/etc/smb4.conf:
# [shared]
#     path = /tank/shared
#     read only = no
#     valid users = @staff
```
Be careful with locking. NFS and SMB use different locking mechanisms, and simultaneous write access from both protocols to the same files can cause data corruption. Use separate datasets or make one protocol read-only.
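One way to apply that advice, reusing the hypothetical tank/shared dataset from above, is to keep the Samba share writable while exporting the same data read-only over NFS:

```sh
# Export the dataset read-only over NFS so only SMB clients
# can write, avoiding cross-protocol lock conflicts
zfs set sharenfs="-ro -network 10.0.0.0/24" tank/shared
```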
Security Considerations
NFS has historically been criticized for weak security. Modern FreeBSD NFS deployments can address most concerns:
```sh
# Restrict exports to specific networks
/exports/data -network 10.0.0.0/24

# Use NFSv4 with Kerberos for authentication
/exports/data -sec=krb5p

# Firewall NFS ports
# In /etc/pf.conf
pass in on egress proto tcp from 10.0.0.0/24 to any port 2049

# For NFSv3, also allow rpcbind and mountd
# (pin mountd to a fixed port first: sysrc mountd_flags="-p 4046")
pass in on egress proto { tcp, udp } from 10.0.0.0/24 to any port { 111, 2049, 4046 }
```
For environments requiring encryption, NFSv4 with krb5p encrypts all data in transit. Without Kerberos, NFS traffic is unencrypted -- use a VPN or isolated network for sensitive data.
Verdict
NFS on FreeBSD is a first-class implementation that benefits from being part of the base system and integrating deeply with ZFS. For Unix-to-Unix file sharing, it is hard to beat: the setup is straightforward, performance is excellent, and the ZFS sharenfs property makes creating and managing shares trivially easy.
NFSv4 with Kerberos addresses the historical security concerns, though setting up Kerberos adds significant complexity. For trusted networks, the simpler IP-based access control of NFSv3 or unauthenticated NFSv4 is often pragmatically sufficient.
The main limitation is the Unix-centric nature of the protocol. If you need to serve Windows clients, Samba is unavoidable. But for FreeBSD, Linux, and macOS clients, NFS provides better performance with less configuration.
Rating: 9/10 -- A best-in-class NFS implementation tightly integrated with FreeBSD and ZFS. Only minor deductions for Kerberos setup complexity and the lack of built-in encryption without Kerberos.
Frequently Asked Questions
Which NFS version should I use on FreeBSD?
NFSv4 for new deployments. It uses a single port (2049), supports Kerberos authentication, and has better locking semantics. Use NFSv3 only if you have legacy clients that do not support v4.
Does NFS work with ZFS on FreeBSD?
Yes, and the integration is excellent. Use zfs set sharenfs="options" dataset to share ZFS datasets over NFS. Snapshots are accessible to NFS clients through the .zfs/snapshot directory.
How many nfsd threads should I run?
Start with 16 for a small deployment. For busy servers with many clients, increase to 32 or 64. Monitor with nfsstat -s and increase if you see request queuing.
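The monitoring step can be done directly with nfsstat; the commands below show server-side counters, and the exact output fields vary by FreeBSD version:

```sh
# Server-side NFS statistics (RPC counts, cache hits/misses)
nfsstat -s

# Extended server stats, refreshed every 2 seconds, to watch load live
nfsstat -e -s -w 2
```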
Is NFS secure?
NFSv3 uses IP-based trust and is not secure against network attacks. NFSv4 with Kerberos provides authentication and optional encryption. On trusted networks (isolated VLANs), unauthenticated NFS is commonly used. On untrusted networks, use Kerberos or a VPN.
Can macOS clients connect to FreeBSD NFS?
Yes. macOS has built-in NFS support. Mount with mount -t nfs server:/export /mnt or configure in /etc/auto_master for automounting. NFSv4 works well with macOS.
How does NFS performance compare to local storage?
On a 1GbE network, NFS maxes out at about 110 MB/s for sequential operations. On 10GbE, NFS approaches local storage speeds for many workloads. Latency-sensitive operations (small random I/O) will always be slower over NFS than local storage due to network round-trip time.
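The round-trip cost is easy to see with a small-file metadata workload. The harness below is illustrative: run it once in a directory on local storage and once on an NFS mount, and compare the elapsed times (the directory paths are placeholders for your own).

```sh
# Create and delete 1000 small files; each touch and rm is a
# metadata operation, so over NFS every one pays a network round trip
cd "$(mktemp -d)"
/usr/bin/time sh -c '
    i=1
    while [ $i -le 1000 ]; do touch f$i; i=$((i+1)); done
    rm -f f*
'
```

The gap between local and NFS elapsed time is dominated by round-trip latency, not bandwidth, which is why small-file workloads never approach the sequential numbers above.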