How to Set Up NFS File Sharing on FreeBSD
NFS (Network File System) is the native file sharing protocol for Unix and Unix-like systems. It lets you mount remote directories over the network as if they were local filesystems. On FreeBSD, NFS is built into the base system -- no packages to install, no ports to compile. It just works.
If your environment is all FreeBSD, Linux, or other Unix systems, NFS is the right choice. It is faster and simpler than Samba for Unix-to-Unix file sharing, with lower overhead and tighter integration with Unix permissions. When you combine NFS with ZFS, you get a powerful storage backend that handles snapshots, compression, and checksums underneath your network shares.
This guide covers everything you need to deploy NFS on FreeBSD: server configuration, the exports file, client mounting, NFSv4, the automounter, ZFS integration, performance tuning, security, monitoring, and troubleshooting.
NFS Overview: v3 vs v4
FreeBSD supports both NFSv3 and NFSv4. Understanding the differences helps you choose the right version for your deployment.
NFSv3
NFSv3 is the older, widely deployed version. It is simple and well-understood:
- Stateless protocol. The server does not track which clients have files open. Recovery after a crash is straightforward.
- Relies on rpcbind. NFSv3 uses Sun RPC for service discovery. The rpcbind daemon must be running on both server and client.
- Multiple ports. NFSv3 uses port 2049 for NFS itself, plus dynamic ports for mountd, statd, and lockd. This complicates firewall rules.
- AUTH_SYS authentication. The default authentication trusts the client to report its UID/GID honestly. This is fine on a trusted LAN but unsuitable for hostile networks.
NFSv4
NFSv4 is the modern version with significant improvements:
- Stateful protocol. The server tracks open files and delegations, enabling better caching and lock management.
- Single port. Everything runs over TCP port 2049. No rpcbind required for pure NFSv4. Firewall configuration is trivial.
- Built-in security. NFSv4 supports Kerberos authentication natively via RPCSEC_GSS. You can enforce strong authentication and encryption.
- ID mapping. Instead of raw UID/GID numbers, NFSv4 uses user@domain strings mapped by an ID-mapping daemon (nfsuserd on FreeBSD). This avoids UID mismatch problems across machines.
- Pseudo-filesystem. NFSv4 presents all exports under a single root, so clients see a unified namespace.
When to Use NFS vs Samba
Use NFS when all your clients are Unix-based (FreeBSD, Linux, macOS). Use Samba when you need to serve files to Windows machines or when you need Active Directory integration for authentication. Many environments run both: NFS for the Unix servers and Samba for the desktop clients.
NFS Server Setup
Everything you need is in the FreeBSD base system. No packages required.
Enabling NFS in rc.conf
Add the following lines to /etc/rc.conf on the server:
```sh
# Core NFS server daemons
nfs_server_enable="YES"
mountd_enable="YES"
rpcbind_enable="YES"

# Optional but recommended
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
```

Note that the nfsd daemon is controlled by nfs_server_enable; there is no separate nfsd_enable knob in rc.conf.
You can also use sysrc to set these values:
```sh
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc nfsv4_server_enable="YES"
sysrc nfsuserd_enable="YES"
```
Here is what each service does:
- nfsd -- The NFS server daemon. Handles file read/write requests from clients.
- mountd -- The mount daemon. Processes mount requests and checks them against /etc/exports.
- rpcbind -- The RPC portmapper. Required for NFSv3. Not strictly needed for pure NFSv4-only setups, but enabling it avoids compatibility issues.
- nfsuserd -- The NFSv4 user/group ID mapping daemon. Translates between numeric UIDs/GIDs and user@domain strings.
Starting the NFS Server
After configuring /etc/rc.conf and /etc/exports (covered next), start the services:
```sh
service rpcbind start
service nfsd start
service mountd start
service nfsuserd start
```
Or reboot. The rc system will start them in the correct order.
The /etc/exports File
The /etc/exports file controls which directories the server shares and who can access them. Each line defines one export. The syntax is:
```shell
/path -options host1 host2 ...
```
Basic Syntax and Examples
Export a single directory to one client:
```shell
/data -ro 192.168.1.10
```
This exports /data as read-only to the host at 192.168.1.10.
Export a directory read-write to an entire subnet:
```shell
/data -alldirs -maproot=root 192.168.1.0/24
```
-alldirs allows clients to mount any subdirectory within /data, not just the root. -maproot=root maps the remote root user to the local root user. Without this, remote root is mapped to nobody by default (root squashing).
Export multiple directories to multiple clients:
```shell
/home -alldirs 192.168.1.0/24
/var/shared -ro -network 10.0.0.0 -mask 255.255.255.0
/projects -mapall=nobody 192.168.1.50 192.168.1.51
```
- The first line exports /home and all its subdirectories to the 192.168.1.0/24 subnet.
- The second line exports /var/shared read-only to the 10.0.0.0/24 network using the older -network/-mask syntax.
- The third line exports /projects to two specific hosts, mapping all remote users to nobody.
Common Export Options
| Option | Description |
|--------|-------------|
| -ro | Read-only export |
| -alldirs | Allow mounting subdirectories |
| -maproot=user | Map remote root to specified user |
| -mapall=user | Map all remote users to specified user |
| -network / -mask | Specify allowed network and netmask |
| -sec=krb5:krb5i:krb5p | Require Kerberos security flavors |
Hostname Formats
You can specify clients using:
- IP addresses: 192.168.1.10
- CIDR notation: 192.168.1.0/24
- Hostnames: client1.example.com
- Network/mask: -network 192.168.1.0 -mask 255.255.255.0
Applying Changes
After editing /etc/exports, reload the export list without restarting the server:
```sh
service mountd reload
```
This tells mountd to re-read the exports file. Active mounts are not disrupted.
Exporting ZFS Datasets
ZFS and NFS work well together on FreeBSD. You can either export ZFS datasets through /etc/exports or use ZFS's built-in NFS sharing properties.
Method 1: Traditional /etc/exports
Treat ZFS mount points like any other directory:
```shell
/tank/data -alldirs -maproot=root 192.168.1.0/24
/tank/media -ro 192.168.1.0/24
```
This is the straightforward approach and gives you full control over export options.
Method 2: ZFS sharenfs Property
ZFS can manage NFS exports directly. Set the sharenfs property on a dataset:
```sh
zfs set sharenfs="-alldirs,-maproot=root,192.168.1.0/24" tank/data
```
This automatically adds the export to the running NFS server. To verify:
```sh
zfs get sharenfs tank/data
showmount -e localhost
```
To stop sharing:
```sh
zfs set sharenfs=off tank/data
```
The advantage of sharenfs is that exports follow the dataset. If you move or replicate the dataset to another server, the sharing configuration goes with it. The downside is that the export syntax is slightly different from what you write in /etc/exports, which can cause confusion.
For most deployments, using /etc/exports directly is simpler and easier to audit. Use sharenfs when you need exports tightly bound to datasets, such as in automated provisioning.
ZFS Considerations
- Each ZFS dataset is a separate filesystem. If you export /tank with -alldirs, clients can mount subdirectories of /tank but not child datasets like /tank/data (which is a different filesystem). Export each dataset separately.
- Snapshots. ZFS snapshots are accessible under the .zfs/snapshot directory. Clients can read snapshots directly if the export allows it.
- Compression. ZFS compression (lz4, zstd) happens transparently below NFS. The NFS protocol sends uncompressed data over the network, but storage on disk is compressed.
NFS Client Setup
Enabling the NFS Client
On the client machine, add to /etc/rc.conf:
```sh
nfs_client_enable="YES"
rpcbind_enable="YES"
```
Or with sysrc:
```sh
sysrc nfs_client_enable="YES"
sysrc rpcbind_enable="YES"
```
Start the services:
```sh
service rpcbind start
service nfsclient start
```
Manual Mounting with mount_nfs
Test the mount manually before adding it to /etc/fstab:
```sh
mount_nfs 192.168.1.1:/data /mnt/data
```
For NFSv4:
```sh
mount_nfs -o nfsv4 192.168.1.1:/data /mnt/data
```
To specify additional options:
```sh
mount_nfs -o rw,tcp,rsize=65536,wsize=65536,intr 192.168.1.1:/data /mnt/data
```
Verify the mount:
```sh
mount | grep nfs
df -h /mnt/data
```
Persistent Mounts with /etc/fstab
Add the NFS mount to /etc/fstab so it persists across reboots:
```shell
192.168.1.1:/data /mnt/data nfs rw,tcp,intr 0 0
```
For NFSv4:
```shell
192.168.1.1:/data /mnt/data nfs rw,nfsv4,tcp,intr 0 0
```
A more complete example with performance options:
```shell
192.168.1.1:/data /mnt/data nfs rw,tcp,rsize=65536,wsize=65536,intr 0 0
192.168.1.1:/media /mnt/media nfs ro,tcp,rsize=65536,intr 0 0
192.168.1.1:/home /mnt/home nfs rw,nfsv4,tcp,intr 0 0
```
The intr option allows pending NFS operations to be interrupted if the server becomes unreachable. Without it, processes accessing the mount will hang indefinitely.
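Where hanging is unacceptable, a soft mount is an alternative worth considering: operations fail with an error after the retry count is exhausted instead of blocking. The fstab line below is a sketch for a read-only mount; be cautious with soft on read-write mounts, since a timed-out write surfaces as an I/O error the application must handle.

```shell
192.168.1.1:/media /mnt/media nfs ro,tcp,soft,intr 0 0
```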
To mount all NFS entries from fstab without rebooting:
```sh
mount -a -t nfs
```
NFSv4 Configuration
NFSv4 requires additional configuration for ID mapping to work correctly.
Configuring nfsuserd
The nfsuserd daemon handles the translation between numeric UIDs/GIDs and user@domain strings that NFSv4 uses. Enable it on both server and client:
```sh
sysrc nfsuserd_enable="YES"
sysrc nfsuserd_flags="-domain example.com"
```
The domain must match on all machines participating in the NFSv4 environment. If server and client use different domains, ownership will show as nobody.
Start the daemon:
```sh
service nfsuserd start
```
Setting the NFSv4 Domain
Every machine in the NFSv4 environment must agree on a single domain string. Set it via the nfsuserd_flags in /etc/rc.conf:
```sh
sysrc nfsuserd_flags="-domain example.com"
```
Alternatively, set the vfs.nfsd.nfsuserd_domain sysctl:
```sh
sysctl vfs.nfsd.nfsuserd_domain=example.com
```
Use the same domain everywhere. It does not need to match your DNS domain -- it just needs to be consistent.
NFSv4 Pseudo-Filesystem
NFSv4 introduces a pseudo-filesystem that acts as a virtual root for all exports. On FreeBSD, the pseudo-root is defined using the V4 flag in /etc/exports:
```shell
V4: /exports -sec=sys
/exports/data -alldirs 192.168.1.0/24
/exports/media -ro 192.168.1.0/24
```
The first line sets /exports as the NFSv4 root. Clients connect to / on the server and see the exported directories underneath. A client would mount with:
```sh
mount_nfs -o nfsv4 192.168.1.1:/ /mnt/nfs
```
And see /data and /media as subdirectories of /mnt/nfs.
Automounter: amd vs autofs
An automounter mounts NFS shares on demand when a user accesses them and unmounts them after a period of inactivity. FreeBSD includes two automounters.
amd (am-utils)
amd is the traditional BSD automounter. It has been part of FreeBSD for decades. Configuration uses map files with their own syntax:
Enable in /etc/rc.conf:
```sh
sysrc amd_enable="YES"
sysrc amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map"
```
A basic /etc/amd.map:
```shell
/defaults type:=nfs;opts:=rw,tcp,intr
data  rhost:=192.168.1.1;rfs:=/data
media rhost:=192.168.1.1;rfs:=/media
```
With this configuration, accessing /host/data automatically mounts 192.168.1.1:/data.
autofs
autofs is the newer automounter, available since FreeBSD 10.1. It is more compatible with the Linux autofs implementation and is generally easier to configure:
Enable in /etc/rc.conf:
```sh
sysrc autofs_enable="YES"
```

The single autofs_enable knob covers the automountd, autounmountd, and automount services.
Edit /etc/auto_master:
```shell
/nfs /etc/auto_nfs
```
Create /etc/auto_nfs:
```shell
data -rw,tcp,intr 192.168.1.1:/data
media -ro,tcp,intr 192.168.1.1:/media
```
Start the service:
```sh
service automountd start
service autounmountd start
service automount start
```
Now accessing /nfs/data triggers an automatic mount of 192.168.1.1:/data.
Which Automounter to Use
Use autofs for new deployments. It is simpler to configure, better maintained, and compatible with the Linux autofs maps you will find in most documentation. Use amd only if you have existing amd map files you need to maintain.
Performance Tuning
NFS performance depends on network bandwidth, server disk speed, and protocol configuration. Here are the main tuning levers.
nfsd Thread Count
The nfsd daemon runs multiple server threads to handle concurrent requests. The default is 4 threads, which is too low for any real workload. Increase it:
```sh
sysrc nfs_server_flags="-u -t -n 16"
```
-u enables UDP (for NFSv3 compatibility), -t enables TCP, and -n 16 runs 16 server threads.
For busy servers, 16 to 32 threads is a reasonable starting point. Monitor with nfsstat and increase if you see request queuing.
Read/Write Block Size
The rsize and wsize mount options control the maximum data block size for read and write operations. Larger values reduce protocol overhead:
```sh
mount_nfs -o rsize=65536,wsize=65536 192.168.1.1:/data /mnt/data
```
The default on FreeBSD is typically 32768 (32 KB). Increasing to 65536 (64 KB) improves throughput for large sequential transfers. Values above 65536 rarely help and can hurt latency for small I/O.
TCP vs UDP
NFSv3 supports both TCP and UDP. NFSv4 is TCP-only.
- TCP is reliable, handles large transfers well, and works across routed networks. Use TCP for everything.
- UDP has slightly lower latency for small operations on a local LAN but is unreliable for large transfers. Avoid UDP unless you have a specific reason.
Force TCP on the client:
```sh
mount_nfs -o tcp 192.168.1.1:/data /mnt/data
```
Async vs Sync Exports
By default, NFS exports on FreeBSD are synchronous: the server writes data to stable storage before acknowledging the write to the client. This is safe but slow.
Async mode acknowledges writes before they hit disk:
```shell
/data -async -alldirs 192.168.1.1
```
Async dramatically improves write performance but risks data loss if the server crashes between the acknowledgment and the disk write. If your server has a UPS and ZFS (which has its own write caching via the ZIL), the practical risk is low. For non-critical data like build artifacts or caches, async is a reasonable trade-off.
Kernel Tuning
For high-throughput NFS servers, adjust these sysctls:
```sh
# Increase NFS server socket buffer sizes
sysctl vfs.nfsd.tcphighwater=1048576

# Increase the maximum number of NFS server threads
sysctl vfs.nfsd.maxthreads=64

# Increase the NFS client read-ahead
sysctl vfs.nfs.iodmin=4
sysctl vfs.nfs.iodmax=16
```
Add these to /etc/sysctl.conf to make them persist across reboots.
Security
NFS was designed for trusted networks. Deploying it securely requires attention to firewall rules and, ideally, Kerberos authentication.
Firewall Rules
If you run PF on your NFS server, you need to allow the relevant ports. For NFSv4 (TCP-only, single port):
```shell
pass in on $int_if proto tcp from $trusted_net to (self) port 2049
```
For NFSv3, you need additional ports for rpcbind, mountd, and statd. Lock mountd to a fixed port to simplify firewall rules:
```sh
sysrc mountd_flags="-p 4000"
sysrc rpc_statd_flags="-p 4001"
sysrc rpc_lockd_flags="-p 4002"
```
Then in PF:
```shell
# NFS v3 ports
pass in on $int_if proto tcp from $trusted_net to (self) port { 111 2049 4000 4001 4002 }
pass in on $int_if proto udp from $trusted_net to (self) port { 111 2049 4000 4001 4002 }
```
Port 111 is rpcbind. Port 2049 is NFS. Ports 4000-4002 are the fixed ports you assigned to mountd, statd, and lockd.
Kerberos Authentication
NFSv4 supports Kerberos authentication via RPCSEC_GSS. This replaces the trust-the-client AUTH_SYS model with cryptographic authentication. Three security levels are available:
- krb5 -- Authentication only. The client proves its identity.
- krb5i -- Authentication plus integrity. Data is signed to detect tampering.
- krb5p -- Authentication, integrity, and privacy. Data is encrypted.
Setting up Kerberos NFS requires a working KDC (Key Distribution Center), keytabs on server and client, and the gssd daemon. Export with Kerberos:
```shell
/data -sec=krb5:krb5i:krb5p 192.168.1.0/24
```
Enable gssd on both server and client:
```sh
sysrc gssd_enable="YES"
service gssd start
```
Full Kerberos NFS setup is a topic of its own. The key point is: if your NFS traffic crosses untrusted networks, Kerberos is not optional.
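On the client side, once a ticket or keytab is in place, the security flavor can be requested at mount time. A sketch against the export above (server address assumed):

```sh
mount_nfs -o nfsv4,sec=krb5p 192.168.1.1:/data /mnt/data
```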
Network Isolation
The simplest security measure: put NFS traffic on a dedicated VLAN or network segment. NFS servers should not be directly reachable from the internet. Use a FreeBSD NAS on a private storage network and route traffic through your firewall.
Monitoring NFS
nfsstat
The nfsstat command shows NFS statistics for both server and client:
```sh
# Server-side statistics
nfsstat -s

# Client-side statistics
nfsstat -c

# Continuous monitoring every 5 seconds
nfsstat -s -w 5
```
Watch for high numbers of retransmissions (client side) or request queue depths (server side). These indicate network problems or an overloaded server.
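To turn those counters into a quick health number, the pipeline below computes retries as a percentage of requests from a captured sample of the client-side Rpc Info block. The column layout shown is illustrative; verify the field positions against your own nfsstat -c output before scripting around it.

```sh
# Captured sample of nfsstat -c "Rpc Info" output (layout illustrative)
sample='Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0      2500    100000'
# Data row: field 4 is Retries, field 5 is Requests
echo "$sample" | awk 'NR==3 { printf "retransmit ratio: %.1f%%\n", $4 / $5 * 100 }'
```

A sustained ratio above a percent or two usually points at packet loss or an overloaded server.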
showmount
The showmount command queries the server's export list and active mounts:
```sh
# Show all exports
showmount -e 192.168.1.1

# Show all mounted directories
showmount -d 192.168.1.1

# Show all clients with active mounts
showmount -a 192.168.1.1
```
Note that showmount uses the mountd RPC service and does not work for pure NFSv4 connections that bypass rpcbind.
rpcinfo
Check which RPC services are registered on the server:
```sh
rpcinfo -p 192.168.1.1
```
This shows the port numbers for nfs, mountd, rpcbind, statd, and lockd. Useful for verifying that all required daemons are running and that fixed ports are configured correctly.
Troubleshooting
Mount Failures
Symptom: mount_nfs: 192.168.1.1:/data: Operation not permitted
Causes:
- The client IP is not listed in /etc/exports. Check with showmount -e on the server.
- The mountd daemon is not running. Check with service mountd status.
- A firewall is blocking port 2049 or the mountd port. Test with rpcinfo -p server_ip.
Fix: Verify the export, restart mountd, and check firewall rules:
```sh
# On the server
showmount -e localhost
service mountd restart
service nfsd restart
```
Stale NFS File Handles
Symptom: Stale NFS file handle errors when accessing mounted files.
Causes:
- The exported directory was deleted or recreated on the server.
- The server was rebooted and the filesystem has a different file handle.
- A ZFS dataset was destroyed and recreated.
Fix: Unmount and remount on the client:
```sh
umount -f /mnt/data
mount_nfs 192.168.1.1:/data /mnt/data
```
The -f flag forces the unmount even if the mount is busy.
Permission Issues
Symptom: Files show ownership as nobody:nobody or operations fail with Permission denied.
Causes:
- UID/GID mismatch between server and client. NFS uses numeric UIDs. If user alice is UID 1001 on the server but UID 1005 on the client, permissions break.
- Root squashing is active. By default, remote root is mapped to nobody. Use -maproot=root in /etc/exports if you need remote root access.
- NFSv4 ID mapping is misconfigured. Check that nfsuserd is running on both ends with the same -domain value.
Fix: Synchronize UIDs/GIDs across machines, or use NFSv4 with properly configured nfsuserd.
Hung Mounts
Symptom: Processes accessing the NFS mount hang and become unkillable.
Causes:
- The NFS server is down or unreachable.
- The mount was created without the intr option.
Fix: If possible, bring the server back. Otherwise, force-unmount:
```sh
umount -f /mnt/data
```
For future mounts, always use the intr option so operations can be interrupted with Ctrl+C.
Slow Performance
Symptom: File transfers are much slower than expected.
Diagnostic steps:
```sh
# Check if using TCP or UDP
mount | grep nfs

# Check NFS statistics for retransmissions
nfsstat -c

# Test raw network speed with iperf
iperf3 -c 192.168.1.1
```
Common fixes:
- Increase rsize and wsize to 65536.
- Switch to TCP if using UDP.
- Increase the nfsd thread count on the server.
- Check for network congestion or duplex mismatches.
- Enable async exports if write performance is poor and data loss risk is acceptable.
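To distinguish a network bottleneck from an NFS one, compare raw iperf3 throughput with a sequential transfer through the mount. A rough sketch (the path is a placeholder, and the test file should live on scratch space, since this writes real data):

```sh
# Sequential write through the NFS mount (BSD dd accepts bs=1m)
dd if=/dev/zero of=/mnt/data/ddtest bs=1m count=1024
# Sequential read back; remount first if you want to defeat the client cache
dd if=/mnt/data/ddtest of=/dev/null bs=1m
rm /mnt/data/ddtest
```

If dd throughput is far below the iperf3 number, the bottleneck is in NFS or the server's disks rather than the network.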
Complete Example: NFS Server with ZFS
Here is a full working configuration for a FreeBSD NAS serving files over NFS.
Server /etc/rc.conf:
```sh
# NFS server
nfs_server_enable="YES"
mountd_enable="YES"
rpcbind_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
nfsuserd_flags="-domain home.lan"
nfs_server_flags="-u -t -n 16"
mountd_flags="-p 4000"
```
Server /etc/exports:
```shell
V4: /tank -sec=sys
/tank/data -alldirs -maproot=root 192.168.1.0/24
/tank/media -ro 192.168.1.0/24
/tank/home -alldirs 192.168.1.0/24
/tank/backup -alldirs -maproot=root 192.168.1.10
```
Client /etc/rc.conf:
```sh
nfs_client_enable="YES"
rpcbind_enable="YES"
nfsuserd_enable="YES"
nfsuserd_flags="-domain home.lan"
```
Client /etc/fstab:
```shell
192.168.1.1:/tank/data /mnt/data nfs rw,tcp,rsize=65536,wsize=65536,intr 0 0
192.168.1.1:/tank/media /mnt/media nfs ro,tcp,rsize=65536,intr 0 0
192.168.1.1:/tank/home /mnt/home nfs rw,nfsv4,tcp,intr 0 0
```
Create mount points and mount:
```sh
mkdir -p /mnt/data /mnt/media /mnt/home
mount -a -t nfs
```
Frequently Asked Questions
What ports does NFS use?
NFSv4 uses only TCP port 2049. NFSv3 uses port 2049 plus port 111 (rpcbind) and dynamic ports for mountd, statd, and lockd. You can pin the dynamic ports to fixed numbers using mountd_flags, rpc_statd_flags, and rpc_lockd_flags in /etc/rc.conf.
Can I use NFS with jails?
Yes. You can NFS-mount directories into a FreeBSD jail by adding the mount to the jail's fstab. The host handles the NFS client connection. You can also run an NFS server inside a jail, though this requires enabling the allow.nfs jail parameter and is more complex. See the FreeBSD jails guide for jail networking details.
Should I use NFS or Samba for my home NAS?
If all your clients are Linux, FreeBSD, or macOS, use NFS. It is faster, simpler, and native to Unix systems. If you have Windows clients, use Samba. Many NAS builds run both NFS and Samba simultaneously to serve all client types.
How do I share the same directory over both NFS and Samba?
Export it in /etc/exports and define it as a share in smb4.conf. FreeBSD handles concurrent access from both protocols. Make sure file locking is consistent -- NFS and Samba use different locking mechanisms, so avoid having both NFS and Samba clients write to the same file simultaneously.
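As a minimal sketch of such a dual-protocol setup (the path and share name are assumed for illustration), the same directory appears once in each configuration file:

```shell
# /etc/exports
/tank/shared -alldirs 192.168.1.0/24

# /usr/local/etc/smb4.conf
[shared]
    path = /tank/shared
    writable = yes
```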
How many nfsd threads do I need?
Start with 16 threads for a general-purpose file server. Monitor with nfsstat -s and watch for request queuing. If you see the server consistently busy, increase to 32 or 64 threads. Each thread consumes minimal resources, so over-provisioning is harmless.
Is NFS secure enough for production use?
NFS with AUTH_SYS (the default) trusts the client network entirely. Any machine that can reach the NFS port can spoof UIDs and access files. For production use on untrusted networks, enable NFSv4 with Kerberos authentication (krb5p) for authentication and encryption. On a trusted private network behind a PF firewall, AUTH_SYS is acceptable.
How do I check if NFS is working?
From the client, run showmount -e server_ip to see available exports. Then try a manual mount with mount_nfs server_ip:/export /mnt/test. Check nfsstat -c for client-side statistics and nfsstat -s on the server. If mounts fail, use rpcinfo -p server_ip to verify the RPC services are reachable.
Summary
NFS is the natural choice for file sharing between Unix systems on FreeBSD. The server runs entirely from the base system with no external dependencies. NFSv4 simplifies firewall configuration and adds real security with Kerberos support. Combined with ZFS on the backend, you get a storage platform with snapshots, checksums, and compression underneath your network shares.
Start with the complete example above, verify it works with manual mounts, then add fstab entries for persistence. Tune thread counts and block sizes based on your workload. Keep NFS traffic on a dedicated network segment and consider Kerberos if your threat model demands it.