# FreeBSD Disk Management and Partitioning Guide
Disk management on FreeBSD is built on GEOM -- a modular storage framework that handles everything from partitioning to encryption to software RAID. GEOM sits between the filesystem layer and the physical disk drivers, providing a consistent interface regardless of how your storage is configured.
This guide covers the full stack: identifying disks, partitioning with GPT and MBR, creating filesystems with ZFS and UFS, configuring RAID, encrypting drives with GELI, managing swap, and working with NVMe storage.
## Identifying Disks

Before you partition anything, identify what you have:

```sh
# List all disks
geom disk list

# Compact view
camcontrol devlist

# NVMe drives
nvmecontrol devlist

# View all GEOM providers
geom part list
```
FreeBSD names disks by their controller:
| Prefix | Controller |
|---|---|
| ada | AHCI/SATA |
| da | SCSI/USB/SAS |
| nvd or nda | NVMe |
| vtbd | Virtio (VM) |
| mmcsd | SD/eMMC |
Check disk details:
```sh
diskinfo -v ada0
camcontrol identify ada0
```
## GPT Partitioning
GPT (GUID Partition Table) is the standard for modern FreeBSD installations. It supports disks larger than 2TB, allows up to 128 partitions, and stores a backup partition table at the end of the disk.
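That backup table is not just insurance on paper: if the primary table at the front of the disk is damaged, `gpart recover` rebuilds it from the backup copy (ada0 here is an example device):

```sh
# A damaged table is flagged CORRUPT in the output
gpart show ada0

# Rebuild the primary GPT from the backup at the end of the disk
gpart recover ada0
```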
### Creating a Standard FreeBSD Layout

```sh
# Destroy existing partition table (if any)
gpart destroy -F ada0

# Create a new GPT scheme
gpart create -s gpt ada0

# Add boot partition (512KB for BIOS boot)
gpart add -t freebsd-boot -s 512k -l boot0 ada0

# Add EFI System Partition (for UEFI boot, 260MB)
gpart add -t efi -s 260m -l efi0 ada0

# Add swap partition (4GB)
gpart add -t freebsd-swap -s 4g -l swap0 ada0

# Add root partition (rest of disk)
gpart add -t freebsd-zfs -l zfs0 ada0
```
Write the boot code:
```sh
# For BIOS boot
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# For UEFI boot
newfs_msdos -F 32 /dev/gpt/efi0
mount -t msdosfs /dev/gpt/efi0 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
```
### Viewing the Partition Table

```sh
gpart show ada0
gpart show -l ada0   # Show labels
gpart show -p ada0   # Show provider names
```
Example output:
```shell
=>        40  976773088  ada0  GPT  (465G)
          40       1024     1  freebsd-boot  (512K)
        1064     532480     2  efi  (260M)
      533544    8388608     3  freebsd-swap  (4.0G)
     8922152  967850976     4  freebsd-zfs  (461G)
```
### GPT Labels

Labels make disk management readable. Reference partitions by label instead of device number:

```sh
# Access by label
ls /dev/gpt/
# zfs0  swap0  efi0  boot0
```
Use labels in /etc/fstab and ZFS pool configurations for portability -- if you move a disk to a different controller, the labels remain valid.
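As a concrete illustration, a label-based fstab entry keeps working even if ada0 becomes da0 after a controller change (the data0 label is assumed from the examples above):

```shell
# /etc/fstab -- referenced by GPT label, not by device number
/dev/gpt/data0  /data  ufs  rw  2  2
```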
### Modifying Partitions

```sh
# Delete a partition
gpart delete -i 4 ada0

# Resize a partition (only if the filesystem supports online resize)
gpart resize -i 4 -s 400g ada0

# Add a new partition to free space
gpart add -t freebsd-zfs -s 50g -l data0 ada0
```
## MBR Partitioning
MBR is the legacy partitioning scheme. Use it only for disks under 2TB on systems that require BIOS boot compatibility, or for USB drives that need to work across operating systems.
```sh
# Create MBR scheme
gpart create -s mbr ada1

# Add a FreeBSD slice (the entire disk)
gpart add -t freebsd ada1

# Create BSD disklabel inside the slice
gpart create -s bsd ada1s1

# Add partitions inside the BSD disklabel
gpart add -t freebsd-ufs -s 20g ada1s1        # ada1s1a
gpart add -t freebsd-swap -s 4g ada1s1        # ada1s1b
gpart add -t freebsd-ufs -i 4 ada1s1          # ada1s1d (rest; index 3 = "c" is reserved)
```
MBR uses slices (s1, s2, etc.) containing BSD disklabels with partitions (a through h). The convention: a is root, b is swap, d and above are data.
## The GEOM Framework
GEOM is FreeBSD's modular disk I/O framework. Every disk operation passes through GEOM classes that transform, redirect, or protect data.
### Key GEOM Classes
| Class | Purpose |
|---|---|
| PART | Partitioning (GPT, MBR) |
| MIRROR | RAID 1 (software mirroring) |
| STRIPE | RAID 0 (striping) |
| RAID3 | RAID 3 (parity) |
| CONCAT | Concatenation (JBOD) |
| ELI | Encryption (GELI) |
| LABEL | Disk labels |
| CACHE | Read caching |
| NOP | Testing/passthrough |
| MULTIPATH | Multipath I/O |
GEOM classes stack. You can layer encryption on top of a mirror, or put a partition table on a concatenated volume. Each class creates a new GEOM provider that higher classes can consume.
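As a sketch of that stacking, here is GELI layered on top of a gmirror (ada1/ada2 and an interactive passphrase are assumptions for illustration):

```sh
# Build the mirror; it shows up as /dev/mirror/gm0
kldload geom_mirror
gmirror label -v gm0 ada1 ada2

# Layer encryption on the mirror provider
kldload geom_eli
geli init -s 4096 -l 256 /dev/mirror/gm0
geli attach /dev/mirror/gm0

# Higher layers consume the new provider, /dev/mirror/gm0.eli
newfs -U /dev/mirror/gm0.eli
```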
### GEOM Mirror (RAID 1)

Create a software mirror:

```sh
# Load the module
kldload geom_mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Create a mirror from two disks
gmirror label -v gm0 ada1 ada2

# The mirror appears as /dev/mirror/gm0
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /data
```
Manage the mirror:
```sh
# Check status
gmirror status

# Replace a failed disk (use "gmirror forget gm0" first if the old disk is already gone)
gmirror remove gm0 ada2
gmirror insert gm0 ada3

# Rebuild status
gmirror status gm0
```
### GEOM Stripe (RAID 0)

```sh
kldload geom_stripe
gstripe label -v st0 ada1 ada2
newfs -U /dev/stripe/st0
```
Striping doubles throughput but offers no redundancy. A single disk failure destroys all data.
### GEOM Concat

```sh
kldload geom_concat
gconcat label -v cc0 ada1 ada2
newfs -U /dev/concat/cc0
```
Concatenation combines disks sequentially -- fills the first disk, then continues to the second. Useful for extending storage without reconfiguring.
## ZFS Pool Management
ZFS is the recommended filesystem for FreeBSD. It combines volume management, filesystem creation, data integrity, compression, and snapshots in a single integrated system.
### Creating ZFS Pools

Single disk:

```sh
zpool create tank ada1
```

Mirror (RAID 1):

```sh
zpool create tank mirror ada1 ada2
```

RAID-Z1 (single parity, like RAID 5):

```sh
zpool create tank raidz1 ada1 ada2 ada3
```

RAID-Z2 (double parity, like RAID 6):

```sh
zpool create tank raidz2 ada1 ada2 ada3 ada4
```

RAID-Z3 (triple parity):

```sh
zpool create tank raidz3 ada1 ada2 ada3 ada4 ada5
```
### Multi-vdev Pools

For performance, create pools with multiple vdevs:

```sh
# Two mirrors (4 disks total, like RAID 10)
zpool create tank mirror ada1 ada2 mirror ada3 ada4

# Two RAIDZ1 groups (6 disks total)
zpool create tank raidz1 ada1 ada2 ada3 raidz1 ada4 ada5 ada6
```
### Adding Cache and Log Devices

```sh
# Add an SSD read cache (L2ARC)
zpool add tank cache nvd0

# Add an SSD write log (SLOG)
zpool add tank log mirror nvd1 nvd2
```
The SLOG should be mirrored for data safety. The L2ARC does not need mirroring -- losing it only loses the cache.
### ZFS Dataset Management

```sh
# Create datasets
zfs create tank/data
zfs create tank/data/documents
zfs create tank/data/backups

# Set properties
zfs set compression=lz4 tank/data
zfs set atime=off tank/data
zfs set recordsize=1M tank/data/backups
zfs set quota=100G tank/data/documents

# Snapshots
zfs snapshot tank/data@2026-04-09
zfs snapshot -r tank/data@checkpoint   # Recursive

# List snapshots
zfs list -t snapshot

# Rollback
zfs rollback tank/data@2026-04-09

# Clone from snapshot
zfs clone tank/data@2026-04-09 tank/data-clone
```
### ZFS Pool Health

```sh
# Check pool status
zpool status

# Scrub to verify data integrity
zpool scrub tank

# I/O statistics
zpool iostat -v 5

# Pool history
zpool history tank
```
## UFS Filesystems
UFS (Unix File System) is FreeBSD's traditional filesystem. It is simpler than ZFS, uses less RAM, and is appropriate for single-disk systems or embedded deployments.
### Creating UFS Filesystems

```sh
# Create a UFS2 filesystem with soft updates
newfs -U /dev/gpt/data0

# Create with journaled soft updates (SUJ)
newfs -j /dev/gpt/data0

# Tune block size for large files
newfs -b 65536 -f 8192 /dev/gpt/data0
```
### Mounting UFS
Add to /etc/fstab:
```shell
/dev/gpt/data0  /data  ufs  rw  2  2
```
Mount immediately:
```sh
mount /dev/gpt/data0 /data
```
### UFS Maintenance

```sh
# Check filesystem consistency
fsck_ufs -y /dev/gpt/data0

# Enable journaled soft updates on an existing filesystem
tunefs -j enable /dev/gpt/data0

# Grow filesystem after partition resize
growfs /dev/gpt/data0
```
## GELI Disk Encryption
GELI is FreeBSD's native disk encryption layer. It operates at the GEOM level, encrypting entire partitions or disks transparently.
### Encrypting a Data Partition

```sh
# Load the module
kldload geom_eli
echo 'geom_eli_load="YES"' >> /boot/loader.conf

# Initialize encryption (AES-256-XTS)
geli init -s 4096 -l 256 /dev/gpt/data0

# Attach (decrypt) the device; enter the passphrase when prompted
geli attach /dev/gpt/data0

# The decrypted device appears as /dev/gpt/data0.eli
newfs -U /dev/gpt/data0.eli
mount /dev/gpt/data0.eli /data
```
### Encrypting a ZFS Pool

```sh
# Initialize encryption on each disk
geli init -s 4096 -l 256 /dev/gpt/zfs0
geli init -s 4096 -l 256 /dev/gpt/zfs1

# Attach
geli attach /dev/gpt/zfs0
geli attach /dev/gpt/zfs1

# Create pool on encrypted devices
zpool create tank mirror /dev/gpt/zfs0.eli /dev/gpt/zfs1.eli
```
### Key-Based Encryption (No Passphrase)

For servers that need to boot unattended:

```sh
# Generate a random key file
dd if=/dev/random of=/root/geli.key bs=64 count=1

# Initialize with the key file only (-P: no passphrase component)
geli init -s 4096 -l 256 -P -K /root/geli.key /dev/gpt/data0

# Attach with the key file (-p: skip the passphrase prompt)
geli attach -p -k /root/geli.key /dev/gpt/data0
```
Add to /etc/rc.conf for automatic attachment at boot:
```sh
sysrc geli_devices="gpt/data0"
sysrc geli_gpt_data0_flags="-p -k /root/geli.key"
```
## Swap Management

### Creating Swap

Swap is typically created during partitioning:

```sh
gpart add -t freebsd-swap -s 4g -l swap0 ada0
```
Enable in /etc/fstab:
```shell
/dev/gpt/swap0  none  swap  sw  0  0
```
Activate immediately:
```sh
swapon /dev/gpt/swap0
```
### Encrypted Swap
For security, encrypt swap with a random key (regenerated each boot):
```shell
# /etc/fstab
/dev/gpt/swap0.eli  none  swap  sw  0  0
```

```sh
# Set the GELI options for swap in /etc/rc.conf
sysrc geli_swap_flags="-a aes-xts -l 256 -s 4096 -d"
```
The `-d` flag detaches the device on last close, and the random key means swap contents are unrecoverable after reboot.
### Swap on ZFS

ZFS zvols can serve as swap:

```sh
zfs create -V 4G -o org.freebsd:swap=on zroot/swap
swapon /dev/zvol/zroot/swap
```
This works but is not recommended for production. ZFS swap can deadlock under extreme memory pressure because ZFS itself needs RAM to function.
## NVMe Storage
NVMe drives on FreeBSD appear as nvd (legacy driver) or nda (CAM-based driver) devices.
### NVMe Management

```sh
# List NVMe devices
nvmecontrol devlist

# View drive details
nvmecontrol identify nvme0

# Check SMART data (health log page 0x02)
nvmecontrol logpage -p 0x02 nvme0

# Firmware update: download the image to slot 1, then activate it
nvmecontrol firmware -s 1 -f firmware.bin nvme0
nvmecontrol firmware -s 1 -a nvme0
```
### NVMe Namespaces

Modern NVMe drives support multiple namespaces (virtual drives on one physical device):

```sh
# List namespaces
nvmecontrol nslist nvme0

# Create a namespace (if controller supports it)
nvmecontrol ns create -s 500000000 -c 500000000 -f 0 nvme0
nvmecontrol ns attach -c 0 -n 2 nvme0
```
### NVMe Performance Tuning

```sh
# Align partitions to the NVMe block size
gpart add -t freebsd-zfs -a 4k nvd0

# Set ZFS recordsize for NVMe
zfs set recordsize=128k tank/data

# Enable TRIM for ZFS
zpool set autotrim=on tank
```
For ZFS on NVMe, use a 128K recordsize (default) for general use. For databases, use 8K or 16K to match the database page size.
## Practical Layouts

### Simple Server (Single NVMe)

```sh
gpart create -s gpt nvd0
gpart add -t freebsd-boot -s 512k nvd0
gpart add -t efi -s 260m nvd0
gpart add -t freebsd-swap -s 8g -l swap0 nvd0
gpart add -t freebsd-zfs -l zfs0 nvd0

zpool create -o ashift=12 zroot /dev/gpt/zfs0
zfs create zroot/ROOT
zfs create zroot/ROOT/default
zfs create zroot/var
zfs create zroot/tmp
zfs create zroot/usr
zfs create zroot/usr/home
```
### NAS Server (6 Disks)

```sh
# Boot mirror on two SSDs
zpool create zroot mirror nvd0 nvd1

# Data pool on four HDDs
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Add SSD cache
zpool add tank cache nvd2

zfs create tank/shares
zfs create tank/shares/documents
zfs create tank/shares/media
zfs set compression=lz4 tank/shares
zfs set atime=off tank/shares
```
### Database Server (Separate Pools)

```sh
# OS pool (mirrored SSDs)
zpool create zroot mirror nvd0 nvd1

# Database pool (mirrored NVMe, tuned for random I/O)
zpool create -o ashift=12 dbpool mirror nvd2 nvd3
zfs create -o recordsize=8k -o primarycache=metadata dbpool/pgdata
zfs create -o recordsize=128k dbpool/pgwal
```
## FAQ

### Should I use GPT or MBR?
Use GPT unless you have a specific reason not to. GPT supports disks larger than 2TB, provides better redundancy (backup table), uses GUIDs instead of fragile partition numbers, and supports labels. MBR is only needed for legacy BIOS systems that cannot boot from GPT.
### How much swap space do I need?
For servers: match RAM up to 8GB, then 50% of RAM above that. A 32GB server needs about 16-20GB swap. For ZFS systems, at least 2GB is recommended even on machines with plenty of RAM, because ZFS's ARC can create memory pressure during heavy I/O.
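That rule of thumb reduces to simple shell arithmetic (ram_gb here is a placeholder value, not detected from the running system):

```sh
ram_gb=32   # placeholder: installed RAM in GB

# Match RAM up to 8 GB, then add 50% of the remainder
if [ "$ram_gb" -le 8 ]; then
    swap_gb=$ram_gb
else
    swap_gb=$((8 + (ram_gb - 8) / 2))
fi
echo "${swap_gb}G"   # for 32 GB of RAM this prints 20G
```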
### Can I resize a ZFS pool?
You cannot shrink a ZFS pool. You can expand it by replacing disks with larger ones (one at a time, waiting for resilver) or by adding new vdevs. For mirrors, replace each disk and the pool grows to the size of the smallest disk in each mirror vdev.
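The disk-replacement path looks like this (ada1 and ada5 are placeholder device names; with autoexpand on, the pool grows automatically once every disk in the vdev has been replaced):

```sh
zpool set autoexpand=on tank
zpool replace tank ada1 ada5   # repeat for each disk, waiting for resilver
zpool status tank              # watch resilver progress
```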
### How do I check for bad sectors?

For ZFS, run a scrub: `zpool scrub tank`. For UFS, run `fsck_ufs`. For raw disk checks, use `smartctl` from the smartmontools package: `smartctl -a /dev/ada0`.
### Is GELI encryption fast enough for production?
Yes, on modern hardware with AES-NI acceleration. GELI detects and uses AES-NI automatically. Throughput overhead is typically under 5% for sequential I/O. Random I/O overhead is slightly higher. For NVMe drives, GELI can become a bottleneck above 2-3 GB/s, but this only matters for extreme workloads.
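To confirm the acceleration is actually available on a given machine, one quick check is (the aesni(4) driver may already be compiled into the kernel or loaded):

```sh
# The CPU capability shows up in the boot messages (Features2=<...AESNI...>)
grep AESNI /var/run/dmesg.boot

# Load the driver if needed (-n: no error if already loaded) and confirm it attached
kldload -n aesni
dmesg | grep -i aesni
```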
### Can I convert UFS to ZFS without reinstalling?
Not in-place. The process involves backing up your data, repartitioning the disk for ZFS, creating the pool, and restoring data. Use a live USB or a secondary disk for the migration. The FreeBSD installer can set up ZFS root during a fresh install.