# How to Set Up NGINX on FreeBSD for Production
NGINX and FreeBSD are a natural pairing for production web serving. FreeBSD's kernel-level features -- kqueue for event notification, zero-copy sendfile, and a rock-solid TCP/IP stack -- give NGINX the foundation it needs to handle tens of thousands of concurrent connections with minimal overhead. If you are running a [FreeBSD VPS](/blog/best-vps-hosting-freebsd/) and need a web server that performs under pressure, this guide walks you through every step from installation to monitoring.
This is not a "hello world" tutorial. By the end, you will have a fully configured NGINX instance serving static sites, terminating SSL with Let's Encrypt, reverse proxying application servers, and tuned for production traffic.
## Why NGINX on FreeBSD
FreeBSD gives NGINX three kernel-level advantages that Linux distributions cannot match in the same way:
**kqueue event notification.** NGINX uses kqueue on FreeBSD as its event-driven I/O mechanism. kqueue is more efficient than epoll for many workloads because it batches event registration and retrieval in a single system call. NGINX auto-detects kqueue on FreeBSD -- no configuration needed.
**Zero-copy sendfile.** When NGINX serves static files, FreeBSD's sendfile implementation transfers data directly from the file system cache to the network socket without copying it through userspace. This cuts CPU usage and memory bandwidth consumption significantly for static content workloads.
**Network stack maturity.** FreeBSD's TCP/IP stack has decades of tuning. Features like RACK loss detection, BBR congestion control, and efficient socket handling give NGINX a stable, high-performance networking layer out of the box.
Combined with FreeBSD's jails for isolation and ZFS for storage, NGINX on FreeBSD is a production-grade platform used by Netflix, WhatsApp, and countless hosting providers.
## Installation
Install NGINX from the FreeBSD package repository:
```sh
pkg install nginx
```
This installs the mainline NGINX package. Configuration files land in /usr/local/etc/nginx/, the binary in /usr/local/sbin/nginx, and log files default to /var/log/nginx/.
Enable NGINX to start at boot. The sysrc utility writes the setting to /etc/rc.conf for you:

```sh
sysrc nginx_enable="YES"
```
Start the service:
```sh
service nginx start
```
Verify it is running:
```sh
service nginx status
curl -I http://localhost
```
You should see a 200 OK response with the default NGINX welcome page. Before making any configuration changes, always validate your config first:
```sh
nginx -t
```
This parses the configuration and reports syntax errors without restarting the server. Make it a habit -- run nginx -t before every reload.
## Configuration Walkthrough
The main configuration file is /usr/local/etc/nginx/nginx.conf. Here is a complete, production-ready base configuration:
```nginx
# /usr/local/etc/nginx/nginx.conf
user www;
worker_processes auto;
worker_rlimit_nofile 65535;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
    use kqueue;
    multi_accept on;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;
    types_hash_max_size 2048;
    server_tokens off;

    # Gzip compression (text formats only; WOFF/WOFF2 are already compressed)
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        application/xml+rss
        image/svg+xml;

    # Buffer tuning
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 16m;
    large_client_header_buffers 4 8k;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    # Rate limiting zone
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    # Include virtual host configs
    include /usr/local/etc/nginx/conf.d/*.conf;
}
```
Key points about this configuration:
- user www -- NGINX worker processes run as the www user, which exists by default on FreeBSD.
- worker_processes auto -- spawns one worker per CPU core.
- use kqueue -- explicitly selects kqueue. NGINX detects this automatically on FreeBSD, but being explicit is clearer.
- sendfile on -- enables FreeBSD's zero-copy file serving.
- tcp_nopush on -- sends HTTP response headers and the beginning of the file body in one packet.
- server_tokens off -- hides NGINX version from response headers.
Create the directory for virtual host configs:
```sh
mkdir -p /usr/local/etc/nginx/conf.d
```
## Virtual Hosts Setup
Each site gets its own configuration file in /usr/local/etc/nginx/conf.d/. Here is a complete virtual host configuration for a static site:
```nginx
# /usr/local/etc/nginx/conf.d/example.com.conf

# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# Redirect www to non-www over HTTPS
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.example.com;

    ssl_certificate /usr/local/etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://example.com$request_uri;
}

# Main server block
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    # Document root
    root /usr/local/www/example.com/public;
    index index.html;

    # SSL
    ssl_certificate /usr/local/etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; connect-src 'self'; frame-ancestors 'self';" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|woff2|woff|ttf|css|js)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Deny hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Rate limiting
    limit_req zone=general burst=20 nodelay;

    # Try static files, then 404
    location / {
        try_files $uri $uri/ =404;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/www/nginx-dist;
    }

    # Logs
    access_log /var/log/nginx/example.com-access.log main;
    error_log /var/log/nginx/example.com-error.log warn;
}
```
Create the document root and set permissions:
```sh
mkdir -p /usr/local/www/example.com/public
echo "example.com is live" > /usr/local/www/example.com/public/index.html
chown -R www:www /usr/local/www/example.com
```
Test and reload:
```sh
nginx -t && service nginx reload
```
## SSL/TLS with Let's Encrypt
Install certbot from packages:
```sh
pkg install py311-certbot
```
Before obtaining certificates, set up a webroot directory that NGINX will serve for ACME challenges. Add a temporary server block or add this location to your existing HTTP server block:
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    location /.well-known/acme-challenge/ {
        root /usr/local/www/acme;
    }

    location / {
        return 301 https://example.com$request_uri;
    }
}
```
Create the ACME webroot:
```sh
mkdir -p /usr/local/www/acme
```
Obtain the certificate:
```sh
certbot certonly --webroot -w /usr/local/www/acme \
    -d example.com -d www.example.com \
    --email admin@example.com \
    --agree-tos --no-eff-email
```
Certificates are stored in /usr/local/etc/letsencrypt/live/example.com/. This is the FreeBSD-specific path -- on Linux it would be /etc/letsencrypt/.
For a deeper dive into certificate management, see our guide on [Let's Encrypt on FreeBSD](/blog/lets-encrypt-freebsd/).
### Automatic Renewal
Set up a cron job for automatic renewal. Add this to root's crontab with crontab -e:
```
# Renew Let's Encrypt certificates twice daily
12 3,15 * * * /usr/local/bin/certbot renew --quiet --deploy-hook "service nginx reload"
```
The --deploy-hook flag ensures NGINX reloads only when a certificate is actually renewed. Test the renewal process manually:
```sh
certbot renew --dry-run
```
## Security Headers
The virtual host configuration above already includes production-grade security headers. Here is what each one does and why it matters:
**Strict-Transport-Security (HSTS).** Tells browsers to always use HTTPS for this domain. The max-age=63072000 directive sets this for 2 years. The includeSubDomains flag extends it to all subdomains. The preload flag allows submission to the HSTS preload list maintained by browsers.
```nginx
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```
**X-Frame-Options.** Prevents your site from being embedded in iframes on other domains, blocking clickjacking attacks.
```nginx
add_header X-Frame-Options "SAMEORIGIN" always;
```
**X-Content-Type-Options.** Stops browsers from MIME-sniffing a response away from the declared Content-Type, preventing certain attack vectors.
```nginx
add_header X-Content-Type-Options "nosniff" always;
```
**Referrer-Policy.** Controls how much referrer information is sent with requests. strict-origin-when-cross-origin sends the full URL for same-origin requests but only the origin for cross-origin requests.
```nginx
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
```
**Content-Security-Policy (CSP).** The most powerful header. It defines exactly which sources the browser is allowed to load resources from. Adjust the policy to match your site's actual needs -- the example above is restrictive and assumes a self-contained static site.
```nginx
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; connect-src 'self'; frame-ancestors 'self';" always;
```
**Permissions-Policy.** Disables browser features you do not use (camera, microphone, geolocation). Reduces your attack surface.
```nginx
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
```
The always parameter at the end of each add_header directive ensures the header is sent even on error responses (4xx, 5xx), not just successful ones.
Test your headers after deployment with:
```sh
curl -I https://example.com
```
Or use an online scanner like securityheaders.com to get a grade.
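As a quick scripted check, you could grep the saved response headers for the ones you expect. This is a sketch: the `headers.txt` below is a made-up stand-in for real `curl -sI https://example.com > headers.txt` output, and Referrer-Policy is deliberately absent from the sample to show what a miss looks like.

```sh
# Build a sample header dump (in practice: curl -sI https://example.com > headers.txt)
printf '%s\n' \
  'HTTP/2 200' \
  'strict-transport-security: max-age=63072000; includeSubDomains; preload' \
  'x-content-type-options: nosniff' \
  'x-frame-options: SAMEORIGIN' > headers.txt

# Report each expected header as present or missing (header names are case-insensitive)
for h in Strict-Transport-Security X-Content-Type-Options \
         X-Frame-Options Referrer-Policy; do
  if grep -qi "^$h:" headers.txt; then
    echo "$h: present"
  else
    echo "$h: MISSING"
  fi
done
```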
## Reverse Proxy Configuration
NGINX excels as a reverse proxy in front of application servers. Here is a complete configuration for proxying to a Node.js application running on port 3000. This pattern works identically for Python (Gunicorn/uvicorn), Ruby (Puma), Go, or any HTTP backend.
```nginx
# /usr/local/etc/nginx/conf.d/app.example.com.conf

upstream app_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;
    return 301 https://app.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app.example.com;

    # SSL
    ssl_certificate /usr/local/etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Proxy settings
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 8k;
        proxy_busy_buffers_size 16k;

        # Rate limiting
        limit_req zone=general burst=20 nodelay;
    }

    # WebSocket support (if your app uses it)
    location /ws {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 86400;
    }

    # Serve static assets directly (bypass the app server)
    location /static/ {
        alias /usr/local/www/app.example.com/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Health check endpoint (no logging)
    location /health {
        proxy_pass http://app_backend;
        access_log off;
    }

    # Logs
    access_log /var/log/nginx/app.example.com-access.log main;
    error_log /var/log/nginx/app.example.com-error.log warn;
}
```
Key details for the reverse proxy setup:
- The upstream block with keepalive 32 maintains persistent connections to the backend, avoiding the overhead of establishing a new TCP connection for every request.
- proxy_set_header Connection "" is required when using keepalive with proxy_http_version 1.1.
- X-Forwarded-For and X-Real-IP headers pass the client's real IP to the backend application.
- X-Forwarded-Proto tells the backend whether the original request was HTTP or HTTPS.
- The /static/ location serves files directly from disk, offloading your application server.
If your application runs behind NGINX, make sure it trusts the X-Forwarded-For header only from localhost. In a Node.js Express app, set app.set('trust proxy', 'loopback'). In a Python app behind Gunicorn, use the --forwarded-allow-ips flag.
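For illustration, here is how a backend or a log-parsing script might take the original client address from an X-Forwarded-For chain; the address values are made up, and in production you should only trust the entry appended by your own proxy.

```sh
# X-Forwarded-For grows by one address per proxy hop; the original client is first
xff="203.0.113.9, 10.0.0.2, 127.0.0.1"
client_ip=$(printf '%s' "$xff" | cut -d, -f1 | tr -d ' ')
echo "$client_ip"   # 203.0.113.9
```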
This pattern is especially useful if you are running a [PostgreSQL on FreeBSD](/blog/postgresql-freebsd-setup/) database-backed application -- NGINX handles SSL termination and static file serving while your app focuses on business logic.
## Performance Tuning

### Worker Processes and Connections

```nginx
worker_processes auto;          # One worker per CPU core
worker_rlimit_nofile 65535;     # Max open files per worker

events {
    worker_connections 4096;    # Max simultaneous connections per worker
    use kqueue;                 # FreeBSD's efficient event mechanism
    multi_accept on;            # Accept multiple connections at once
}
```
The theoretical maximum number of concurrent connections is worker_processes * worker_connections. With 4 CPU cores and 4096 connections per worker, that is 16,384 concurrent connections. For reverse proxy setups, each client connection uses two file descriptors (one for client, one for backend), so halve that number.
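The arithmetic can be sketched in a few lines of shell; the core and connection counts below are the example values from this guide, not detected from your system.

```sh
# Worked example of the connection ceiling (assumes 4 cores, 4096 conns/worker)
workers=4
conns_per_worker=4096
total=$((workers * conns_per_worker))
echo "static serving ceiling: $total"          # 16384
echo "reverse proxy ceiling: $((total / 2))"   # 8192 (two fds per client)
```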
Make sure FreeBSD's kernel allows enough file descriptors. Check and increase limits:
```sh
sysctl kern.maxfiles
sysctl kern.maxfilesperproc
```
To increase them, add to /etc/sysctl.conf:
```
kern.maxfiles=131072
kern.maxfilesperproc=65536
```
Apply without rebooting:
```sh
sysctl -f /etc/sysctl.conf
```
### Sendfile and TCP Optimization

```nginx
sendfile on;      # Zero-copy file serving (FreeBSD sendfile syscall)
tcp_nopush on;    # Send headers and beginning of file in one packet
tcp_nodelay on;   # Disable Nagle's algorithm for keepalive connections
```
These three directives work together. sendfile handles the kernel-level optimization. tcp_nopush (equivalent to TCP_CORK on Linux) ensures the first packet contains both HTTP headers and the start of the response body. tcp_nodelay ensures that on keepalive connections, small packets are sent immediately without waiting for more data.
### Gzip Compression

```nginx
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 4;
gzip_min_length 256;
gzip_types text/plain text/css text/javascript application/javascript
           application/json application/xml application/xml+rss
           image/svg+xml;
```
gzip_comp_level 4 is the sweet spot. Levels 1-3 compress too little. Levels 5-9 burn CPU for marginal size reduction. Level 4 gives approximately 75% of the maximum compression at a fraction of the CPU cost.
Do not gzip already-compressed formats like JPEG, PNG, or WOFF. They will actually get slightly larger.
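You can see the level/size trade-off for yourself with the command-line gzip tool on repetitive sample text; exact byte counts will vary by gzip version, but higher levels should never produce larger output than lower ones.

```sh
# Compare gzip levels on ~90 KB of repetitive text
yes "The quick brown fox jumps over the lazy dog." | head -n 2000 > sample.txt
for level in 1 4 9; do
  size=$(gzip -c -"$level" sample.txt | wc -c)
  echo "level $level: $size bytes"
done
```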
### Buffer Tuning

```nginx
client_body_buffer_size 16k;        # Buffer for POST body
client_header_buffer_size 1k;       # Buffer for request headers
large_client_header_buffers 4 8k;   # For large headers (cookies, etc.)
client_max_body_size 16m;           # Max upload size
```
If your application handles file uploads, increase client_max_body_size accordingly. For API-only services that only accept JSON, 1m is usually sufficient.
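The limit can also be raised only where uploads actually happen, keeping the tighter global default everywhere else. A sketch (the /upload path is an assumption for illustration):

```nginx
location /upload {
    client_max_body_size 100m;   # override the global 16m just for this endpoint
}
```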
### Open File Cache

For servers handling many static files, enable NGINX's open file cache:

```nginx
open_file_cache max=10000 inactive=60s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
```
This caches file descriptors, modification times, and existence checks, reducing system calls for frequently accessed files.
## Logging and Log Rotation

### Log Configuration
The main configuration already defines a detailed log format:
```nginx
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '$request_time $upstream_response_time';
```
The $request_time and $upstream_response_time fields are critical for performance monitoring. $request_time is the total time NGINX spent processing the request. $upstream_response_time is how long the backend took to respond (only relevant for reverse proxy setups).
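Because these two timing values are the last fields in the format, quick ad-hoc analysis is easy. For example, averaging request time with awk over a made-up two-line sample log:

```sh
# Average $request_time (second-to-last field in the "main" format above)
cat > sample-access.log <<'EOF'
203.0.113.5 - - [10/Jan/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.0" 0.002 0.001
203.0.113.6 - - [10/Jan/2026:10:00:01 +0000] "GET /api HTTP/1.1" 200 256 "-" "curl/8.0" 0.010 0.008
EOF
awk '{ sum += $(NF-1); n++ } END { printf "avg request_time: %.3fs\n", sum/n }' sample-access.log
# prints: avg request_time: 0.006s
```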
For high-traffic sites, consider buffering log writes:
```nginx
access_log /var/log/nginx/access.log main buffer=32k flush=5s;
```
This batches log writes into 32KB chunks, flushing at least every 5 seconds. It reduces disk I/O significantly under heavy load.
### Log Rotation with newsyslog
FreeBSD uses newsyslog for log rotation instead of logrotate. Add the following to /etc/newsyslog.conf:
```
# NGINX log rotation
/var/log/nginx/access.log              www:www 640 14 100 * JB /var/run/nginx.pid 30
/var/log/nginx/error.log               www:www 640 14 100 * JB /var/run/nginx.pid 30
/var/log/nginx/example.com-access.log  www:www 640 14 100 * JB /var/run/nginx.pid 30
/var/log/nginx/example.com-error.log   www:www 640 14 100 * JB /var/run/nginx.pid 30
```
Breaking down the fields:
- www:www -- owner:group for the rotated log files.
- 640 -- file permissions.
- 14 -- keep 14 rotated log files.
- 100 -- rotate when the log reaches 100KB (or use * for size-independent rotation).
- * -- rotate regardless of when the last rotation happened (combine with a time-based flag if you prefer daily).
- J -- compress rotated logs with bzip2.
- B -- treat the log as a binary file, so newsyslog does not insert its ASCII "logfile turned over" message into it.
- /var/run/nginx.pid 30 -- send signal 30 (USR1) to the NGINX master process after rotation, which tells NGINX to reopen its log files.
The signal 30 (USR1) part is critical. Without it, NGINX continues writing to the old (now rotated) file descriptor. The USR1 signal tells NGINX to gracefully reopen log files.
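The stale-descriptor problem is easy to demonstrate with any process: renaming a file does not affect an already-open write descriptor, so without a reopen signal the "rotated" file keeps growing and the new one is never created. A self-contained sketch:

```sh
# A background writer holds its fd open across the rename ("rotation")
( for i in 1 2 3 4 5; do echo "line $i"; sleep 0.2; done > app.log ) &
sleep 0.3
mv app.log app.log.0            # rotate while the writer is still running
wait
test -f app.log && echo "app.log recreated" || echo "app.log missing"
echo "rotated file has $(wc -l < app.log.0 | tr -d ' ') lines"
```

All five lines end up in app.log.0 and app.log is never recreated, which is exactly what happens to NGINX logs if the USR1 signal is omitted.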
Create the log directory if it does not exist:
```sh
mkdir -p /var/log/nginx
chown www:www /var/log/nginx
```
Test newsyslog configuration:
```sh
newsyslog -nv
```
The -n flag performs a dry run, showing what would be rotated without actually doing it.
## Monitoring NGINX

### stub_status Module
NGINX includes a built-in status module. Add this to your main configuration or a dedicated monitoring virtual host:
```nginx
# /usr/local/etc/nginx/conf.d/status.conf
server {
    listen 127.0.0.1:8080;
    server_name localhost;

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```
Query it:
```sh
curl http://127.0.0.1:8080/nginx_status
```
Output:
```
Active connections: 42
server accepts handled requests
 15234 15234 98432
Reading: 0 Writing: 5 Waiting: 37
```
- **Active connections** -- current client connections including waiting.
- **accepts/handled** -- these should be equal. If handled is less than accepts, NGINX is dropping connections.
- **Reading** -- reading request headers.
- **Writing** -- sending response to client.
- **Waiting** -- keepalive connections waiting for a new request.
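A small script can turn the stub_status text into an alert-friendly number. Here a saved sample (with accepts and handled deliberately unequal, unlike the healthy output above) is parsed to compute dropped connections:

```sh
# Parse stub_status output; a non-zero difference means dropped connections
cat > status.txt <<'EOF'
Active connections: 42
server accepts handled requests
 15234 15230 98432
Reading: 0 Writing: 5 Waiting: 37
EOF
set -- $(awk 'NR == 3 { print $1, $2 }' status.txt)
accepts=$1 handled=$2
echo "dropped connections: $((accepts - handled))"   # 4 in this sample
```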
### Prometheus Exporter
For production monitoring with Prometheus and Grafana, install the NGINX Prometheus exporter:
```sh
pkg install nginx-prometheus-exporter
```
If the package is not available, install it from source:
```sh
pkg install go
go install github.com/nginxinc/nginx-prometheus-exporter@latest
```
Run the exporter, pointing it at your stub_status endpoint:
```sh
/usr/local/bin/nginx-prometheus-exporter \
    -nginx.scrape-uri=http://127.0.0.1:8080/nginx_status \
    -web.listen-address=:9113
```
Create an rc.d script to run it as a service. Add to /usr/local/etc/rc.d/nginx_exporter:
```sh
#!/bin/sh

# PROVIDE: nginx_exporter
# REQUIRE: DAEMON
# KEYWORD: shutdown

. /etc/rc.subr

name="nginx_exporter"
rcvar="${name}_enable"

command="/usr/local/bin/nginx-prometheus-exporter"
command_args="-nginx.scrape-uri=http://127.0.0.1:8080/nginx_status -web.listen-address=:9113"
pidfile="/var/run/${name}.pid"

start_cmd="${name}_start"

nginx_exporter_start()
{
    /usr/sbin/daemon -p ${pidfile} ${command} ${command_args}
}

load_rc_config $name
run_rc_command "$1"
```
Enable and start:
```sh
chmod +x /usr/local/etc/rc.d/nginx_exporter
sysrc nginx_exporter_enable="YES"
service nginx_exporter start
```
Add the scrape target to your Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['your-server:9113']
```
## Common Troubleshooting

### "Address already in use" on port 80 or 443
Another process is bound to the port. Find it:
```sh
sockstat -4 -l -p 80
```
On a fresh FreeBSD install, inetd or Apache might be running. Disable them:
```sh
sysrc apache24_enable="NO"
service apache24 stop
```
### Permission denied on log files
NGINX workers run as www. Make sure the log directory is owned correctly:
```sh
chown -R www:www /var/log/nginx
```
### 502 Bad Gateway on reverse proxy
The backend is not responding. Check:
1. Is the backend process running? sockstat -4 -l -p 3000
2. Is NGINX connecting to the right address/port? Check your upstream block.
3. Check the NGINX error log: tail -f /var/log/nginx/error.log
4. Check if the backend is listening on 127.0.0.1 vs 0.0.0.0.
### SSL certificate errors
Verify your certificate chain:
```sh
openssl s_client -connect example.com:443 -servername example.com
```
Common issues:
- Certificate path is wrong in the config. FreeBSD stores Let's Encrypt certs in /usr/local/etc/letsencrypt/, not /etc/letsencrypt/.
- The intermediate certificate is missing. Use fullchain.pem, not cert.pem.
- Certificate has expired. Check: openssl x509 -enddate -noout -in /usr/local/etc/letsencrypt/live/example.com/fullchain.pem
### Configuration changes not taking effect
Make sure you are reloading, not just testing:
```sh
nginx -t && service nginx reload
```
A reload gracefully applies the new configuration. A restart (service nginx restart) terminates active connections. Prefer reload in production.
### "Too many open files" errors
Increase file descriptor limits. In /etc/sysctl.conf:
```
kern.maxfiles=131072
kern.maxfilesperproc=65536
```
And ensure worker_rlimit_nofile in nginx.conf matches:
```nginx
worker_rlimit_nofile 65535;
```
Apply and restart:
```sh
sysctl -f /etc/sysctl.conf
service nginx restart
```
## Frequently Asked Questions
### Should I install NGINX from packages or ports on FreeBSD?
Use pkg install nginx for production. The binary package is pre-compiled, tested, and receives security updates through pkg audit and pkg upgrade. Ports (/usr/ports/www/nginx) are useful only if you need non-default modules compiled in, such as ngx_brotli or GeoIP2. For most production setups, the default package includes everything you need.
### How do I serve multiple domains from one NGINX instance?
Create a separate .conf file in /usr/local/etc/nginx/conf.d/ for each domain. Each file contains its own server block with the appropriate server_name directive. NGINX uses the server_name to route incoming requests to the correct virtual host. The include /usr/local/etc/nginx/conf.d/*.conf; directive in the main config picks them all up automatically.
### What is the difference between service nginx reload and service nginx restart?
reload sends a HUP signal to the master process. NGINX starts new worker processes with the updated configuration and gracefully shuts down old workers after they finish serving current requests. No connections are dropped. restart stops all processes and starts fresh -- active connections are terminated. Always use reload in production unless you have changed a setting that requires a full restart (which is rare).
### How do I enable HTTP/2 or HTTP/3 on FreeBSD NGINX?
HTTP/2 is enabled by adding http2 to the listen directive: listen 443 ssl http2;. It works out of the box with the default NGINX package on FreeBSD. HTTP/3 (QUIC) requires NGINX to be compiled with --with-http_v3_module and a TLS library with QUIC support (such as BoringSSL or quictls). As of early 2026, HTTP/3 support is available in the NGINX mainline package on FreeBSD. Add listen 443 quic; and add_header Alt-Svc 'h3=":443"; ma=86400'; to your server block.
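If your build supports it, the relevant directives side by side might look like the sketch below; verify module support with nginx -V before relying on it.

```nginx
server {
    listen 443 ssl http2;        # TCP: HTTP/1.1 and HTTP/2
    listen 443 quic reuseport;   # UDP: HTTP/3 (requires --with-http_v3_module)
    server_name example.com;

    # Advertise HTTP/3 to clients that connected over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```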
### How do I block bad bots or specific IP addresses?
Create a blocklist file and include it in your server blocks:
```nginx
# /usr/local/etc/nginx/blocklist.conf
deny 192.168.1.100;
deny 10.0.0.0/8;

# Block by user-agent (valid inside a server block)
if ($http_user_agent ~* (SemrushBot|AhrefsBot|MJ12bot)) {
    return 403;
}
```
Include it in your server block:
```nginx
include /usr/local/etc/nginx/blocklist.conf;
```
For more sophisticated blocking, use the ngx_http_geo_module or integrate with fail2ban for automated blocking of abusive IPs.
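As a sketch of the geo-module approach, you can map client addresses to a variable once in the http block and test it in any server block; the addresses here are examples from documentation ranges.

```nginx
# In the http block: map client addresses to a flag
geo $blocked {
    default      0;
    192.0.2.0/24 1;
    198.51.100.7 1;
}

# In a server block: reject flagged clients
if ($blocked) {
    return 403;
}
```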
### How do I check if NGINX is using kqueue on FreeBSD?
On FreeBSD, kqueue is the default event method: support is detected at build time and compiled in automatically, so you will not usually see a dedicated kqueue flag in the nginx -V configure arguments. You can confirm it is active at runtime by watching the master process for kqueue-related system calls:
```sh
truss -p $(cat /var/run/nginx.pid) 2>&1 | grep kqueue
```
If you see kevent() calls, kqueue is in use.
## Conclusion
You now have a production-grade NGINX setup on FreeBSD: secure, performant, and maintainable. The combination of FreeBSD's kqueue, sendfile, and robust networking with NGINX's event-driven architecture gives you a web serving platform that handles real traffic efficiently.
Start with the base configuration from this guide, adapt the virtual host templates to your domains, and use the monitoring setup to keep visibility into your server's behavior. If you are building out a full FreeBSD server stack, continue with our guides on [PostgreSQL on FreeBSD](/blog/postgresql-freebsd-setup/) and [VPS hosting for FreeBSD](/blog/best-vps-hosting-freebsd/).