FreeBSD.software · guide · 2026-03-29

# FreeBSD Performance Tuning: Complete Guide

FreeBSD defaults are conservative. They target a general-purpose workstation that boots on everything from a Raspberry Pi to a 512-core server. That means every production deployment leaves performance on the table -- often a lot of it.

This guide is the definitive FreeBSD performance tuning reference. It covers CPU scheduling, memory management, ZFS ARC sizing, network stack optimization, disk I/O tuning, web server and database configuration, benchmarking methodology, and complete sysctl.conf and loader.conf templates you can deploy today.

Every value in this guide has a reason. No cargo-cult tuning. No "set this to 1 for speed." Each parameter includes what it does, why you would change it, and what the tradeoff is.

Before you touch anything: measure first. Tuning without data is guessing. Start with [FreeBSD monitoring tools](/blog/freebsd-server-monitoring-guide/) to establish baselines, then come back here.

---

## Performance Tuning Philosophy

The single most important rule of performance tuning: **measure, then tune, then measure again.**

Changing sysctls without profiling is like optimizing code you have not benchmarked. You will waste time, introduce regressions, and convince yourself you made things faster when you did not.

The process is:

1. **Establish a baseline.** Use vmstat, iostat, netstat -s, top -SHP, and systat to capture current behavior under real load. Record numbers.

2. **Identify the bottleneck.** Is the system CPU-bound? Memory-bound? Waiting on disk? Saturating the network? The tuning for each is completely different.

3. **Change one thing.** Apply a single tuning parameter. Never change five things at once.

4. **Measure again.** Compare to baseline. If the metric you care about did not improve, revert.

5. **Document what you changed and why.** Future you will thank present you.

FreeBSD makes this easier than most operating systems. The sysctl interface exposes thousands of tunable parameters at runtime with no reboot required. The loader.conf system handles the handful of settings that must be set at boot time. Both are plain text files.

---

## CPU Tuning

### Scheduler Selection

FreeBSD uses the ULE scheduler by default (since FreeBSD 7.1). ULE handles most workloads well, including NUMA systems with hundreds of cores. Unless you have measured a specific scheduling problem, keep ULE.

Verify your scheduler:

sh

sysctl kern.sched.name

# kern.sched.name: ULE

### CPU Affinity with cpuset

For latency-sensitive workloads -- real-time audio processing, high-frequency trading, packet forwarding -- pin processes to specific cores:

sh

# Pin the nginx master process to cores 0-3 (forked workers inherit the set)

cpuset -l 0-3 -p $(pgrep -o nginx)

# Pin interrupt processing to core 4

cpuset -l 4 -x 5 # IRQ 5 bound to core 4

To reserve cores for specific tasks, use cpuset in combination with your application's worker configuration. NGINX, for example, supports worker_cpu_affinity natively.
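To pin each worker in a pool to its own core round-robin, a short loop works. This is a sketch with hypothetical PIDs; the `echo` prints the commands instead of executing them, so drop it to actually apply the bindings:

```shell
# Round-robin CPU pinning for a pool of worker processes.
# The PIDs are hypothetical examples -- in production, feed real ones
# (e.g. from pgrep nginx) and remove the echo.
ncores=4
core=0
for pid in 2101 2102 2103 2104 2105; do
  echo "cpuset -l ${core} -p ${pid}"
  core=$(( (core + 1) % ncores ))
done
```

With four cores, the fifth worker wraps back to core 0.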

### Power Management vs. Performance

By default, FreeBSD may use power-saving CPU frequencies. On a server, you almost always want maximum performance:

sh

# Check current frequency driver

sysctl dev.cpu.0.freq_levels

# Set maximum performance

sysctl dev.cpu.0.freq=3600 # Set to your CPU's max frequency in MHz

For persistent configuration, use powerd with hiadaptive mode. The cx_lowest settings control how deep idle CPUs may sleep: Cmax permits the deepest C-states (lowest idle power; on modern CPUs, deeply sleeping idle cores can also let busy cores turbo higher), while C1 keeps wake-up latency minimal.

sh

# /etc/rc.conf

performance_cx_lowest="Cmax" # Deepest C-state on AC power ("C1" for lowest latency)

economy_cx_lowest="Cmax" # Deepest C-state on battery

powerd_enable="YES"

powerd_flags="-a hiadaptive"

Or, if wake-up latency matters more than idle power, restrict C-states at runtime and persist the setting:

sh

# /etc/sysctl.conf

hw.acpi.cpu.cx_lowest=C1

### Interrupt Affinity and Distribution

On multi-queue NICs, distribute interrupts across cores for better network throughput:

sh

# Check current interrupt distribution

vmstat -i

# Interrupt moderation for Intel NICs: on older (pre-iflib) igb/ixgbe

# drivers this is the boot-time tunable hw.igb.max_interrupt_rate;

# FreeBSD 12+ iflib drivers expose per-device knobs -- see iflib(4)

# hw.igb.max_interrupt_rate="8000" # /boot/loader.conf

FreeBSD also supports RSS (Receive Side Scaling) in the kernel, and the netisr layer can spread and pin packet processing across CPUs. Configure it in loader.conf:


# /boot/loader.conf

net.isr.maxthreads="-1" # Match number of CPUs

net.isr.bindthreads="1" # Bind netisr threads to CPUs

---

## Memory Management

### Virtual Memory Tuning

FreeBSD's VM subsystem is highly tunable. These sysctls control how aggressively the system pages out, how much memory stays wired, and how the page daemon behaves.

sh

# Give the page daemon more scan passes before the OOM killer fires

# Default 12. Raise it so short memory-pressure spikes don't kill processes.

sysctl vm.pageout_oom_seq=120

# Allow long-idle processes to be swapped out proactively

sysctl vm.swap_idle_enabled=1

# Limit on user-wired (mlock'd, non-pageable) memory, counted in pages

# FreeBSD 13+ replaced the old vm.max_wired with vm.max_user_wired.

# Raise it for databases and hypervisors that wire large regions.

sysctl vm.max_user_wired

### Swap Configuration

Swap is not optional on FreeBSD, even with abundant RAM. The kernel uses swap to page out truly idle memory, freeing RAM for ARC and active workloads.

Size swap at 2x RAM for systems under 8 GB, 1x RAM for 8-32 GB, and a fixed 16-32 GB for larger systems. Place swap on the fastest available storage.
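That sizing rule fits in a few lines of shell; the 24 GB RAM figure below is just an example input:

```shell
# Swap sizing rule of thumb: 2x RAM below 8 GB, 1x RAM up to 32 GB,
# then a fixed cap (32 GB shown here).
ram_gb=24   # example system
if [ "$ram_gb" -lt 8 ]; then
  swap_gb=$(( ram_gb * 2 ))
elif [ "$ram_gb" -le 32 ]; then
  swap_gb=$ram_gb
else
  swap_gb=32
fi
echo "swap: ${swap_gb}G"
```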

For [ZFS](/blog/zfs-freebsd-guide/) systems, a ZVOL can back swap, though a dedicated swap partition is safer: paging into a ZVOL under severe memory pressure can deadlock, because ZFS itself needs memory to complete the writes.

sh

zfs create -V 16G -o org.freebsd:swap=on zroot/swap

# Add to /etc/fstab:

# /dev/zvol/zroot/swap none swap sw 0 0

### Superpages (Large Pages)

FreeBSD supports 2 MB superpages on amd64. They reduce TLB misses for large-memory workloads like databases and virtual machines:

sh

# Check superpage usage

sysctl vm.pmap.pg_ps_enabled

# vm.pmap.pg_ps_enabled: 1 (enabled by default)

# Monitor superpage promotions and demotions

sysctl vm.pmap.pde.promotions

sysctl vm.pmap.pde.demotions

Superpages are enabled by default on amd64 and have been for many releases. The kernel automatically promotes contiguous 4 KB pages to 2 MB superpages when beneficial. No manual configuration is needed for most workloads.
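The win comes from TLB pressure: one 2 MB mapping replaces 512 4 KB entries. For an illustrative 8 GB region (the size is an example):

```shell
# Page-table entries needed to map 8 GB with 4K pages vs 2M superpages
awk 'BEGIN {
  bytes = 8 * 1024^3
  printf "4K pages: %d entries, 2M superpages: %d entries\n",
         bytes / 4096, bytes / (2 * 1024 * 1024)
}'
```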

---

## ZFS ARC Tuning

ZFS uses the Adaptive Replacement Cache (ARC) as its primary read cache. It lives in kernel memory and, by default, will grow to consume nearly all available RAM. On a dedicated file server, this is fine. On a multi-purpose server running databases and applications, it will starve them.

### Setting ARC Size

The most important ZFS tunable. Set vfs.zfs.arc_max to leave enough RAM for your applications:

sh

# Check current ARC usage

sysctl vfs.zfs.arc_max

sysctl kstat.zfs.misc.arcstats.size

# Set ARC maximum to 8 GB (in bytes)

sysctl vfs.zfs.arc_max=8589934592

# Make persistent in /boot/loader.conf

# vfs.zfs.arc_max="8589934592"

Rules of thumb:

| Server Role | ARC Size |

|---|---|

| Dedicated NAS/file server | 75-80% of RAM |

| Web server | 25-40% of RAM |

| Database server | 15-25% of RAM (leave rest for DB cache) |

| General purpose | 50% of RAM |
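Turning a row of the table into a loader.conf value is plain arithmetic. A sketch for a hypothetical 32 GB web server at the low end of its range (both figures are example inputs):

```shell
# Convert "percent of RAM" into a vfs.zfs.arc_max byte value
ram_gb=32   # example: 32 GB server
pct=25      # example: web server, low end of the 25-40% range
arc_max=$(( ram_gb * 1024 * 1024 * 1024 * pct / 100 ))
echo "vfs.zfs.arc_max=\"${arc_max}\""
```

For this example the output matches the 8 GB value used throughout this guide.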

For a detailed ZFS setup including pool layout, compression, and snapshots, see the [ZFS guide](/blog/zfs-freebsd-guide/).

### L2ARC (Level 2 ARC)

L2ARC extends the ARC to a fast SSD. It caches data that would otherwise be evicted from RAM. Useful when your working set exceeds RAM but fits on an SSD:

sh

# Add an L2ARC device

zpool add tank cache /dev/nvd1

# Tune L2ARC write speed (bytes/sec)

sysctl vfs.zfs.l2arc_write_max=104857600 # 100 MB/s

sysctl vfs.zfs.l2arc_write_boost=209715200 # 200 MB/s (initial fill)

# Control what gets cached

sysctl vfs.zfs.l2arc_noprefetch=0 # Cache prefetched data too

L2ARC consumes about 70 bytes of ARC (RAM) per cached block to store metadata. On a 1 TB L2ARC with 128K recordsize, that is roughly 560 MB of RAM overhead. Factor this in.
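That estimate is easy to reproduce (the ~70 bytes/block header cost is the approximation used above):

```shell
# ARC header overhead for a 1 TB L2ARC at 128K recordsize
l2arc_bytes=$(( 1024 * 1024 * 1024 * 1024 ))   # 1 TB device
recordsize=$(( 128 * 1024 ))                    # 128K records
blocks=$(( l2arc_bytes / recordsize ))          # cached blocks
overhead_mb=$(( blocks * 70 / 1024 / 1024 ))    # ~70 bytes of ARC per block
echo "${overhead_mb} MB of ARC consumed by L2ARC headers"
```

Smaller recordsizes multiply the block count, and therefore the RAM overhead.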

### SLOG (Separate Log Device)

A SLOG device accelerates synchronous writes. Critical for NFS servers, databases, and any workload using O_SYNC:

sh

# Add a SLOG device (use a high-endurance NVMe SSD)

zpool add tank log mirror /dev/nvd2 /dev/nvd3

Always mirror the SLOG. A lost SLOG means lost uncommitted transactions.

### Recordsize and Compression

Match recordsize to your workload:

sh

# Large sequential reads (media, backups): 1M

zfs set recordsize=1M tank/media

# Database (PostgreSQL 8K pages): 16K

zfs set recordsize=16K tank/pgdata

# General purpose: 128K (default)

zfs set recordsize=128K tank/general

# Enable compression everywhere -- lz4 is essentially free

zfs set compression=lz4 tank

Additional ZFS performance sysctls:

sh

# Prefetch tuning (OpenZFS 2.x name; FreeBSD 12 used vfs.zfs.prefetch_disable)

sysctl vfs.zfs.prefetch.disable=0 # Keep prefetch enabled

# Transaction group tuning

sysctl vfs.zfs.txg.timeout=5 # Seconds between TXG syncs (default 5)

sysctl vfs.zfs.vdev.async_write_active_min_dirty_percent=30

---

## Network Stack Tuning

FreeBSD's network stack is one of its greatest strengths. Netflix serves a third of North American internet traffic from FreeBSD servers. Here is how to tune it.

### TCP Buffer Sizes

The default TCP buffer sizes (net.inet.tcp.sendspace at 32 KB, recvspace at 64 KB) are too small for high-bandwidth or high-latency links. Raise the ceilings and let auto-tuning do the rest:

sh

# Maximum socket buffer sizes

sysctl kern.ipc.maxsockbuf=16777216 # 16 MB

# TCP send/receive buffer auto-tuning

sysctl net.inet.tcp.sendbuf_max=16777216 # 16 MB max send buffer

sysctl net.inet.tcp.recvbuf_max=16777216 # 16 MB max receive buffer

sysctl net.inet.tcp.sendbuf_auto=1 # Enable auto-tuning

sysctl net.inet.tcp.recvbuf_auto=1 # Enable auto-tuning

sysctl net.inet.tcp.sendbuf_inc=16384 # Auto-tune increment

sysctl net.inet.tcp.recvbuf_inc=65536 # Auto-tune increment
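The 16 MB ceilings are grounded in the bandwidth-delay product: a connection can only fill the pipe if its window covers the bytes in flight. With example figures of 10 Gb/s and 20 ms RTT (both assumptions, not measurements):

```shell
# Bandwidth-delay product for a hypothetical 10 Gb/s path with 20 ms RTT
awk 'BEGIN {
  gbps = 10; rtt_ms = 20
  bdp = gbps * 1e9 / 8 * rtt_ms / 1000     # bytes in flight at full rate
  printf "BDP = %.0f MB\n", bdp / 1048576
}'
```

At roughly 24 MB, this example path's BDP actually exceeds the 16 MB ceiling, so long-haul 10 GbE links can justify going higher.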

### sendfile Optimization

FreeBSD's sendfile() sends files directly from the filesystem to the network socket, bypassing userspace copies. NGINX and other web servers use this automatically. Tune the related parameters:

sh

# sendfile buffers are a boot-time tunable, and mostly matter on 32-bit

# systems -- amd64 uses the kernel's direct map and rarely needs them

# /boot/loader.conf: kern.ipc.nsfbufs="10240"

sysctl kern.ipc.nsfbufs # Configured count (where applicable)

sysctl kern.ipc.nsfbufsused # Monitor current usage

### Accept Filters

Accept filters tell the kernel to delay delivering a connection to the application until useful data arrives. This reduces context switches and wakes:

sh

# Load accept filter modules

kldload accf_http

kldload accf_data

kldload accf_dns

# Make persistent

# /boot/loader.conf

# accf_http_load="YES"

# accf_data_load="YES"

# accf_dns_load="YES"

NGINX, Apache, and other servers use these automatically when available. The HTTP accept filter delays the connection handoff until a complete HTTP request header arrives. This is a free performance win.

### SO_REUSEPORT

SO_REUSEPORT allows multiple sockets to bind to the same port. The kernel load-balances incoming connections across them. NGINX uses this with multiple worker processes:

sh

# Enable in NGINX

# listen 80 reuseport;

# listen 443 ssl reuseport;

No sysctl required. The application opts in via socket option.

### RSS (Receive Side Scaling)

RSS distributes incoming packets across multiple CPU cores at the NIC hardware level. This is the most impactful network tuning on multi-core systems:

sh

# /boot/loader.conf

net.isr.maxthreads="-1" # One thread per CPU

net.isr.bindthreads="1" # Bind threads to CPUs

net.isr.dispatch="deferred" # Process in netisr threads, not interrupt context

# For Intel NICs, set queue count to match cores

hw.igb.num_queues="8"

hw.ix.num_queues="8"

### TCP Congestion Control

FreeBSD supports multiple congestion control algorithms:

sh

# List available algorithms

sysctl net.inet.tcp.cc.available

# net.inet.tcp.cc.available: newreno, cubic, htcp, cdg, vegas, dctcp

# Set default (cubic is a good general-purpose choice; loadable via cc_cubic)

sysctl net.inet.tcp.cc.algorithm=cubic

# BBR in FreeBSD is an alternate TCP stack, not a cc module. It requires

# HPTS (on FreeBSD 13, a kernel built with "options TCPHPTS"):

kldload tcp_bbr

sysctl net.inet.tcp.functions_default=bbr

### Additional Network Tunables

sh

# Enable TCP Fast Open (FreeBSD 12+)

sysctl net.inet.tcp.fastopen.server_enable=1

sysctl net.inet.tcp.fastopen.client_enable=1

# Enlarge the SYN cache for high-traffic servers -- these are boot-time

# tunables, so set them in /boot/loader.conf:

# net.inet.tcp.syncache.hashsize="1024"

# net.inet.tcp.syncache.bucketlimit="100"

# Disable delayed ACK for latency-sensitive workloads

# sysctl net.inet.tcp.delayed_ack=0 # Only if latency > throughput matters

# Enable TCP timestamps and window scaling

sysctl net.inet.tcp.rfc1323=1

---

## Disk I/O Optimization

### TRIM for SSDs

TRIM tells the SSD which blocks are no longer in use, maintaining long-term write performance. Enable it for both UFS and ZFS:

sh

# UFS: enable TRIM (run on an unmounted or read-only filesystem)

tunefs -t enable /dev/ada0p2

# ZFS on FreeBSD 13+ (OpenZFS): TRIM is controlled per pool, not by sysctl

zpool set autotrim=on tank

# Or TRIM on demand (e.g. from a periodic job)

zpool trim tank

### I/O Scheduler (GEOM_SCHED)

FreeBSD 12 and earlier shipped a pluggable GEOM I/O scheduler: the geom_sched module and its gsched(8) rr algorithm could reorder requests for spinning disks under mixed workloads:

sh

# FreeBSD 12 and earlier only -- GEOM_SCHED was removed in FreeBSD 13.0

kldload geom_sched

gsched insert -a rr ada0

On modern releases there is no pluggable software elevator; device-level queuing (NCQ on SATA, native queuing on NVMe) fills that role. For NVMe drives this is exactly right -- NVMe command queuing is far more capable than any software scheduler.

### NVMe Tuning

NVMe drives support deep command queues. FreeBSD exposes tuning parameters:

sh

# Check NVMe controller info

nvmecontrol identify nvme0

# View and adjust NVMe parameters

sysctl dev.nvme.0.num_io_queues # Should match or exceed CPU count

# Use nda(4) instead of nvd(4) -- the default on FreeBSD 14+

# /boot/loader.conf: hw.nvme.use_nvd="0"

For NVMe-over-Fabrics or high-IOPS workloads, make sure the number of I/O queues scales with your core count:

sh

# /boot/loader.conf -- see nvme(4) on your release for the exact tunables

hw.nvme.per_cpu_io_queues="1" # One I/O queue per CPU

### GEOM Cache

For read-heavy workloads on slow storage, geom_cache provides a block-level read cache held in RAM (it does not cache onto another disk):

sh

kldload geom_cache

gcache create -b 131072 -s 4294967296 cache0 ada0

# Reads from ada0 go through a 4 GB RAM cache with a 128K block size

For most modern setups with ZFS, the ARC and L2ARC are superior caching solutions. Use geom_cache only for UFS systems that cannot run ZFS.

---

## File Descriptor and Connection Limits

FreeBSD defaults are conservative. kern.maxfiles is auto-sized from RAM, but on smaller systems it can still sit in the tens of thousands of open files system-wide. A busy web server can exhaust that within minutes.

### System-Wide Limits

sh

# Maximum open files system-wide

sysctl kern.maxfiles=524288

sysctl kern.maxfilesperproc=262144

# Maximum processes

sysctl kern.maxproc=131072

sysctl kern.maxprocperuid=65536

### Socket and Connection Limits

sh

# Maximum pending connections (listen backlog)

sysctl kern.ipc.somaxconn=4096

# Maximum number of mbuf clusters

sysctl kern.ipc.nmbclusters=1048576

# Shared memory

sysctl kern.ipc.shmmax=2147483648 # 2 GB

sysctl kern.ipc.shmall=524288 # Pages (2 GB / 4096)

# Semaphores (for PostgreSQL and other databases)

sysctl kern.ipc.semmni=256

sysctl kern.ipc.semmns=512

sysctl kern.ipc.semmnu=256
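Note that kern.ipc.nmbclusters translates directly into potential kernel memory use, since each standard mbuf cluster is 2 KB. Sanity-check the ceiling you set:

```shell
# Kernel memory ceiling implied by an nmbclusters value (2 KB per cluster)
nmbclusters=1048576
awk -v n="$nmbclusters" 'BEGIN { printf "%.0f GB\n", n * 2048 / 1024^3 }'
```

A million clusters is a 2 GB ceiling -- fine on a big server, dangerous on a 4 GB VM.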

### Per-User Limits with login.conf

The sysctl limits set the system ceiling. Per-user limits in /etc/login.conf control what individual users can actually consume:


# /etc/login.conf

default:\
	:openfiles-cur=65536:\
	:openfiles-max=262144:\
	:maxproc-cur=16384:\
	:maxproc-max=65536:\
	:memoryuse-cur=unlimited:\
	:memoryuse-max=unlimited:\
	:tc=auth-defaults:

The continuation backslashes require the lines to be consecutive -- a blank line breaks the record.

After editing login.conf, rebuild the database:

sh

cap_mkdb /etc/login.conf

---

## NGINX / Web Server Performance on FreeBSD

FreeBSD is a natural fit for high-performance web serving. Netflix chose it for a reason. Here is how to configure [NGINX on FreeBSD](/blog/nginx-freebsd-production-setup/) for maximum throughput.

### NGINX Configuration for FreeBSD

nginx

# /usr/local/etc/nginx/nginx.conf

worker_processes auto; # One per CPU core

worker_rlimit_nofile 65536;

events {

worker_connections 16384;

use kqueue; # FreeBSD's native event mechanism

multi_accept on;

}

http {

sendfile on;

sendfile_max_chunk 512k;

tcp_nopush on;

tcp_nodelay on;

# Accept filters -- the kernel holds a connection until HTTP headers

# arrive. listen is a server-level directive, so it belongs in server {}:

server {

listen 80 reuseport accept_filter=httpready;

listen 443 ssl reuseport accept_filter=dataready;

}

# Keepalive

keepalive_timeout 65;

keepalive_requests 1000;

# Buffer tuning

client_body_buffer_size 16k;

client_header_buffer_size 1k;

large_client_header_buffers 4 8k;

# File cache

open_file_cache max=10000 inactive=60s;

open_file_cache_valid 30s;

open_file_cache_min_uses 2;

open_file_cache_errors on;

# Gzip

gzip on;

gzip_vary on;

gzip_proxied any;

gzip_comp_level 5;

gzip_min_length 256;

gzip_types text/plain text/css application/json application/javascript

text/xml application/xml application/xml+rss text/javascript

image/svg+xml;

}

The key FreeBSD-specific features here are:

- **kqueue**: FreeBSD's native event notification mechanism. Far more efficient than epoll for high connection counts. NGINX uses it automatically on FreeBSD.

- **accept_filter=httpready**: Kernel-level filtering that only wakes NGINX when a complete HTTP request is ready. Reduces syscalls dramatically under high load.

- **reuseport**: Distributes incoming connections across worker processes at the kernel level, eliminating the thundering herd problem.

- **sendfile**: Sends files directly from kernel buffers to the network socket. No userspace copies.

### Static File Serving Optimization

For static file serving from UFS, raise the readahead window:

sh

sysctl vfs.read_max=128 # Readahead cluster count

---

## Database Performance on FreeBSD

### PostgreSQL on FreeBSD

[PostgreSQL on FreeBSD](/blog/postgresql-freebsd-setup/) performs exceptionally well when properly tuned. The critical interaction is between PostgreSQL's shared buffers, the OS page cache, and ZFS ARC.

#### Kernel Tunables for PostgreSQL

sh

# Shared memory -- must accommodate shared_buffers

sysctl kern.ipc.shmmax=8589934592 # 8 GB

sysctl kern.ipc.shmall=2097152 # 8 GB in 4K pages

# Semaphores

sysctl kern.ipc.semmni=256

sysctl kern.ipc.semmns=512

sysctl kern.ipc.semmnu=256

# PostgreSQL uses lots of file descriptors

sysctl kern.maxfiles=524288
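The shmall value must track shmmax, because shmall is counted in 4 KB pages. Checking the pair above:

```shell
# kern.ipc.shmall is measured in PAGE_SIZE (4 KB) units
shmmax=$(( 8 * 1024 * 1024 * 1024 ))   # 8 GB in bytes
shmall=$(( shmmax / 4096 ))             # the same limit in pages
echo "kern.ipc.shmmax=${shmmax}"
echo "kern.ipc.shmall=${shmall}"
```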

#### PostgreSQL Configuration for FreeBSD

ini

# postgresql.conf -- tuned for 32 GB RAM server

shared_buffers = 8GB

effective_cache_size = 20GB # shared_buffers + expected OS cache

work_mem = 64MB

maintenance_work_mem = 2GB

wal_buffers = 64MB

# WAL tuning

checkpoint_completion_target = 0.9

max_wal_size = 4GB

min_wal_size = 1GB

# For ZFS

full_page_writes = off # ZFS is copy-on-write, no torn pages

wal_init_zero = off # ZFS handles this

When running PostgreSQL on ZFS, set the dataset recordsize to match PostgreSQL's block size:

sh

zfs set recordsize=16K zroot/pgdata

zfs set primarycache=metadata zroot/pgdata # Let PostgreSQL manage its own cache

zfs set logbias=throughput zroot/pgdata

The primarycache=metadata setting is controversial but correct for PostgreSQL. PostgreSQL has its own sophisticated buffer manager. Caching the same data in both PostgreSQL shared buffers and ZFS ARC wastes RAM.

### MySQL / MariaDB on FreeBSD

For MySQL/MariaDB, the key tunables overlap with PostgreSQL but differ in the specifics:

sh

# Shared memory

sysctl kern.ipc.shmmax=4294967296

# MySQL uses many threads

sysctl kern.threads.max_threads_per_proc=4096

#### MySQL Configuration for FreeBSD

ini

# my.cnf -- tuned for 32 GB RAM server with InnoDB

[mysqld]

innodb_buffer_pool_size = 20G # 60-70% of RAM

innodb_buffer_pool_instances = 8 # Split the pool to reduce mutex contention

innodb_log_file_size = 2G

innodb_log_buffer_size = 64M

innodb_flush_method = O_DIRECT # Bypass OS cache

innodb_flush_log_at_trx_commit = 1

# Connection handling

max_connections = 500

thread_cache_size = 128

table_open_cache = 4096

# For ZFS (disable doublewrite -- ZFS is COW)

innodb_doublewrite = 0

For MySQL on ZFS:

sh

zfs set recordsize=16K zroot/mysql-data

zfs set primarycache=metadata zroot/mysql-data

zfs set compression=lz4 zroot/mysql-data

---

## Benchmarking Tools

Never tune without measuring. These tools establish baselines and validate changes.
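When comparing a tuned run against its baseline, compute the relative change instead of eyeballing raw numbers. The request rates below are made-up example figures:

```shell
# Relative improvement between baseline and tuned benchmark results
baseline=48210   # example: req/s before tuning
tuned=63950      # example: req/s after tuning
awk -v a="$baseline" -v b="$tuned" \
    'BEGIN { printf "%+.1f%% vs baseline\n", (b - a) / a * 100 }'
```

A signed percentage also makes regressions (negative values) jump out in a log.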

### fio -- Disk I/O Benchmarking

sh

pkg install fio

# Random read IOPS (4K blocks, 16 jobs, 32 queue depth)

fio --name=randread --ioengine=posixaio --rw=randread \

--bs=4k --numjobs=16 --iodepth=32 --size=4G \

--runtime=60 --time_based --group_reporting

# Sequential write throughput

fio --name=seqwrite --ioengine=posixaio --rw=write \

--bs=1m --numjobs=4 --iodepth=16 --size=8G \

--runtime=60 --time_based --group_reporting

# Mixed random read/write (80/20 ratio, typical database workload)

fio --name=mixed --ioengine=posixaio --rw=randrw --rwmixread=80 \

--bs=8k --numjobs=8 --iodepth=32 --size=4G \

--runtime=60 --time_based --group_reporting

### iperf3 -- Network Throughput

sh

pkg install iperf3

# Server side

iperf3 -s

# Client side -- basic throughput test

iperf3 -c server_ip -t 30

# Multiple parallel streams

iperf3 -c server_ip -P 8 -t 30

# UDP test with target bandwidth

iperf3 -c server_ip -u -b 10G -t 30

### sysbench -- CPU and Database Benchmarking

sh

pkg install sysbench

# CPU benchmark

sysbench cpu --threads=8 --time=30 run

# Memory benchmark

sysbench memory --threads=8 --memory-total-size=10G run

# PostgreSQL OLTP benchmark

sysbench /usr/local/share/sysbench/oltp_read_write.lua \

--db-driver=pgsql --pgsql-host=127.0.0.1 \

--pgsql-user=bench --pgsql-password=bench \

--pgsql-db=benchdb --tables=10 --table-size=1000000 \

--threads=16 --time=300 run

### wrk -- HTTP Load Testing

sh

pkg install wrk

# Basic HTTP load test

wrk -t8 -c400 -d30s http://localhost/

# With Lua script for POST requests

wrk -t8 -c400 -d30s -s post.lua http://localhost/api/endpoint

### stress-ng -- System Stress Testing

sh

pkg install stress-ng

# CPU stress (all cores, 60 seconds)

stress-ng --cpu 0 --timeout 60s --metrics-brief

# Memory stress

stress-ng --vm 4 --vm-bytes 4G --timeout 60s --metrics-brief

# I/O stress

stress-ng --iomix 4 --iomix-bytes 2G --timeout 60s --metrics-brief

### Built-in FreeBSD Tools

Do not overlook the tools already in the base system:

sh

# Real-time system overview

top -SHP

# Virtual memory statistics (1 second intervals)

vmstat 1

# I/O statistics per device

iostat -x -w 1

# Network statistics

netstat -s -p tcp

netstat -an | grep ESTABLISHED | wc -l

# Detailed system activity

systat -vmstat 1

See the full [FreeBSD monitoring guide](/blog/freebsd-server-monitoring-guide/) for detailed usage of each tool.

---

## Complete /etc/sysctl.conf for High-Performance Servers

This is a production-ready sysctl.conf for a FreeBSD web server or application server. Every line has a comment explaining what it does.

sh

# /etc/sysctl.conf -- High-Performance FreeBSD Server

# Apply without reboot: sysctl -f /etc/sysctl.conf

# --- File Descriptor and Process Limits ---

kern.maxfiles=524288 # Max open files system-wide

kern.maxfilesperproc=262144 # Max open files per process

kern.maxproc=131072 # Max processes system-wide

kern.maxprocperuid=65536 # Max processes per user

# --- Network: TCP Buffers ---

kern.ipc.maxsockbuf=16777216 # Max socket buffer size (16 MB)

net.inet.tcp.sendbuf_max=16777216 # Max TCP send buffer (16 MB)

net.inet.tcp.recvbuf_max=16777216 # Max TCP receive buffer (16 MB)

net.inet.tcp.sendbuf_auto=1 # Enable send buffer auto-tuning

net.inet.tcp.recvbuf_auto=1 # Enable recv buffer auto-tuning

net.inet.tcp.sendbuf_inc=16384 # Send buffer auto-tune increment

net.inet.tcp.recvbuf_inc=65536 # Recv buffer auto-tune increment

# --- Network: Connection Handling ---

kern.ipc.somaxconn=4096 # Max listen queue depth

kern.ipc.nmbclusters=1048576 # Max mbuf clusters

net.inet.tcp.msl=5000 # Max segment lifetime (ms); TIME_WAIT = 2x MSL. Default 30000

net.inet.tcp.fast_finwait2_recycle=1 # Recycle FIN_WAIT_2 connections faster

net.inet.tcp.finwait2_timeout=5000 # FIN_WAIT_2 timeout (ms)

net.inet.tcp.nolocaltimewait=1 # Skip TIME_WAIT for local connections

# --- Network: TCP Features ---

net.inet.tcp.rfc1323=1 # TCP timestamps and window scaling

net.inet.tcp.fastopen.server_enable=1 # TCP Fast Open, server side (FreeBSD 12+)

net.inet.tcp.fastopen.client_enable=1 # TCP Fast Open, client side

net.inet.tcp.cc.algorithm=cubic # Congestion control algorithm

# --- Network: SYN Flood Protection ---

# Boot-time tunables -- set these in /boot/loader.conf, not here:

# net.inet.tcp.syncache.hashsize="1024" # SYN cache hash table size

# net.inet.tcp.syncache.bucketlimit="100" # Max entries per hash bucket

# --- Network: sendfile ---

# kern.ipc.nsfbufs is a boot-time tunable (mainly for 32-bit systems);

# if kern.ipc.nsfbufsused approaches the limit, set

# kern.ipc.nsfbufs="10240" in /boot/loader.conf

# --- Network: Security ---

net.inet.tcp.blackhole=2 # Drop packets to closed TCP ports (no RST)

net.inet.udp.blackhole=1 # Drop packets to closed UDP ports (no ICMP)

net.inet.icmp.drop_redirect=1 # Ignore ICMP redirects

net.inet.ip.redirect=0 # Don't send ICMP redirects

net.inet.ip.random_id=1 # Randomize IP packet IDs

# --- Shared Memory (for databases) ---

kern.ipc.shmmax=8589934592 # 8 GB max shared memory segment

kern.ipc.shmall=2097152 # 8 GB total shared memory (in pages)

kern.ipc.semmni=256 # Max semaphore identifiers

kern.ipc.semmns=512 # Max semaphores system-wide

kern.ipc.semmnu=256 # Max undo structures system-wide

# --- Virtual Memory ---

vfs.read_max=128 # Readahead cluster count

kern.ipc.shm_allow_removed=1 # Allow removed shared memory (PostgreSQL)

# --- ZFS ARC (adjust to your workload) ---

# Set in /boot/loader.conf for best results:

# vfs.zfs.arc_max="8589934592" # 8 GB ARC limit

# ZFS TRIM on FreeBSD 13+ is a pool property: zpool set autotrim=on <pool>

# --- Misc ---

kern.coredump=0 # Disable core dumps on production

security.bsd.unprivileged_read_msgbuf=0 # Don't leak kernel messages

security.bsd.unprivileged_proc_debug=0 # Restrict process debugging

hw.kbd.keymap_restrict_change=4 # Restrict keyboard map changes

---

## Complete /boot/loader.conf for Performance

These settings must be in loader.conf because they take effect at boot time, before the kernel fully initializes.

sh

# /boot/loader.conf -- High-Performance FreeBSD Server

# --- CPU and Scheduler ---

kern.hz="1000" # Timer frequency (default 100 on VMs, 1000 on bare metal)

hw.acpi.cpu.cx_lowest="Cmax" # Allow deepest C-states; use "C1" if wake latency matters

# --- Memory ---

# vm.kmem_size="8G" # Rarely needed -- auto-sized on modern FreeBSD

# hw.physmem="34359738368" # Optional: cap usable RAM (mainly for testing)

# --- ZFS ---

zfs_load="YES" # Load ZFS module

vfs.zfs.arc_max="8589934592" # 8 GB ARC (adjust per workload table above)

vfs.zfs.arc_min="4294967296" # 4 GB minimum ARC

vfs.zfs.l2arc_write_max="104857600" # 100 MB/s L2ARC fill rate

vfs.zfs.l2arc_noprefetch="0" # Cache prefetched data in L2ARC

# --- Network ---

net.isr.maxthreads="-1" # netisr threads = number of CPUs

net.isr.bindthreads="1" # Bind netisr threads to CPUs

net.isr.dispatch="deferred" # Process packets in netisr threads

cc_cubic_load="YES" # Load cubic congestion control

cc_htcp_load="YES" # Load htcp (optional)

# --- Accept Filters ---

accf_http_load="YES" # HTTP accept filter

accf_data_load="YES" # Data accept filter

accf_dns_load="YES" # DNS accept filter

# --- NIC Tuning (Intel NICs) ---

# hw.igb.num_queues="8" # Uncomment and set to core count

# hw.ix.num_queues="8" # For ixgbe (10 GbE Intel)

# --- Security ---

security.bsd.allow_destructive_dtrace="0" # Restrict DTrace

kern.elf64.aslr.enable="1" # Enable ASLR

# --- Console ---

autoboot_delay="3" # Reduce boot menu wait

boot_multicons="YES" # Multiple consoles

boot_serial="NO" # Change to YES for headless

---

## Putting It All Together: Tuning Workflow

Here is the step-by-step workflow for tuning a new FreeBSD server:

1. **Install and configure base system.** Get the server running with your storage layout (ZFS pools, datasets).

2. **Establish baselines.** Run fio, iperf3, wrk, and sysbench with default settings. Record every number.

3. **Deploy /boot/loader.conf.** Copy the template above, adjust vfs.zfs.arc_max for your workload and RAM. Reboot.

4. **Deploy /etc/sysctl.conf.** Copy the template above. Run sysctl -f /etc/sysctl.conf to apply immediately.

5. **Configure /etc/login.conf.** Set file descriptor and process limits. Run cap_mkdb /etc/login.conf.

6. **Tune your application.** Configure NGINX, PostgreSQL, or whatever runs on the server using the guidelines above.

7. **Re-run benchmarks.** Compare to baselines. You should see measurable improvements in throughput, latency, or both.

8. **Monitor in production.** Use the tools from the [monitoring guide](/blog/freebsd-server-monitoring-guide/) to watch for regressions. Performance tuning is not a one-time event.

---

## Frequently Asked Questions

### How much does FreeBSD performance tuning actually help?

It depends on the workload and how far the defaults are from optimal. For a web server, proper TCP buffer tuning, accept filters, and kqueue usage routinely yield 30-50% throughput improvements. ZFS ARC sizing alone can make or break a database server. Network-heavy workloads on multi-core systems can see 2-5x improvement from RSS and interrupt distribution tuning.

### Should I tune sysctl values on a VPS or cloud instance?

Yes, but selectively. VPS environments (AWS, DigitalOcean, Vultr) virtualize hardware, so NIC tuning and interrupt affinity have less impact. Focus on TCP buffer sizes, connection limits, file descriptor limits, and ZFS ARC sizing. Skip hardware-specific tuning like CPU frequency and NVMe queue depth.

### Is it safe to set kern.ipc.somaxconn to 65535?

It is safe but rarely necessary. A value of 4096 handles very high traffic. Setting it to 65535 reserves more kernel memory for the listen queue. Only increase beyond 4096 if you measure connection drops with netstat -s -p tcp showing "listen queue overflows."

### Should I disable ZFS ARC and use database caching instead?

No. Even for database servers, keep some ARC for metadata caching. Set primarycache=metadata on database datasets so ZFS caches file metadata (directory lookups, inode info) but not data blocks. The database's own buffer manager is better at caching data it understands. Completely disabling ARC (primarycache=none) hurts metadata operations and makes filesystem traversals slow.

### How do I know if my server is CPU-bound, memory-bound, or I/O-bound?

Use top -SHP and look at the CPU line. If idle is near 0%, you are CPU-bound. Run vmstat 1 and check the pi and po columns -- if pages are swapping in/out, you are memory-bound. Run iostat -x -w 1 and look at %busy -- if any disk is at 100%, you are I/O-bound. Run netstat -s -p tcp and look for retransmissions or buffer overflows for network bottlenecks.

### What is the single most impactful tuning for a web server?

For most FreeBSD web servers: proper ZFS ARC sizing combined with accf_http, sendfile, and TCP buffer auto-tuning. These four changes together address the most common bottlenecks (memory pressure from over-sized ARC, unnecessary kernel-to-userspace copies, and under-sized network buffers).

### Can I apply sysctl changes without rebooting?

Yes. Run sysctl -f /etc/sysctl.conf to apply all values immediately. The only exception is loader.conf settings, which require a reboot because they configure the kernel at boot time. This includes kern.hz, vfs.zfs.arc_max (can also be set live but the loader.conf value takes effect at boot), and kernel module loading.

---

## Conclusion

FreeBSD performance tuning is systematic, not mystical. Measure your baseline, identify the bottleneck, change one thing, and verify the improvement. The sysctl.conf and loader.conf templates in this guide give you a strong starting point for any high-performance server.

The key areas with the highest impact are:

- **ZFS ARC sizing** -- prevents memory starvation and ensures your working set stays in RAM

- **TCP buffer auto-tuning** -- removes the most common network throughput bottleneck

- **Accept filters and sendfile** -- eliminates unnecessary syscalls and data copies

- **RSS and interrupt distribution** -- uses all CPU cores for network processing

- **File descriptor limits** -- prevents connection drops under load

Start with the complete configuration files above, adjust the values for your specific hardware and workload, and always verify with benchmarks.

For related guides, see:

- [ZFS on FreeBSD](/blog/zfs-freebsd-guide/) -- pool layout, compression, snapshots, and replication

- [NGINX on FreeBSD](/blog/nginx-freebsd-production-setup/) -- full production configuration

- [PostgreSQL on FreeBSD](/blog/postgresql-freebsd-setup/) -- installation and tuning

- [FreeBSD Server Monitoring](/blog/freebsd-server-monitoring-guide/) -- monitoring tools and alerting