FreeBSD.software · tutorial · 2026-03-29


Complete guide to load balancing on FreeBSD. Covers HAProxy, NGINX as load balancer, relayd, health checks, SSL termination, session persistence, and choosing the right tool.

# How to Set Up Load Balancing on FreeBSD

Load balancing is the practice of distributing incoming network traffic across multiple backend servers so that no single machine becomes a bottleneck. FreeBSD is exceptionally well suited for this role. Its kqueue event system, mature TCP/IP stack, and low-overhead networking primitives give load balancers running on FreeBSD a measurable performance advantage, which is why companies like Netflix and Juniper rely on FreeBSD at the network edge.

This guide covers three practical load balancing options on FreeBSD -- HAProxy, NGINX, and the native relayd daemon -- with complete, production-ready configuration examples. You will also learn about health checks, SSL termination, session persistence, and how to make the load balancer itself highly available using CARP.

---

## Load Balancing Fundamentals

Before diving into configuration, it helps to understand the two layers at which load balancers operate and the algorithms they use to distribute traffic.

### Layer 4 vs Layer 7

**Layer 4 (transport layer)** load balancers make routing decisions based on IP addresses and TCP/UDP port numbers. They do not inspect the content of the traffic. This makes them fast and protocol-agnostic -- they work for HTTP, database connections, mail servers, or any TCP/UDP service. The tradeoff is that they cannot make content-aware decisions like routing based on URL path or HTTP headers.

**Layer 7 (application layer)** load balancers understand the protocol being carried. For HTTP traffic, this means they can inspect headers, cookies, URL paths, and request methods. They can route /api/* requests to one backend pool and /static/* requests to another. They can also insert or modify headers, handle SSL termination, and perform content-based health checks.

HAProxy and NGINX both support Layer 4 and Layer 7 operation. relayd supports both as well, though its Layer 7 features are more limited.

### Load Balancing Algorithms

The most commonly used algorithms are:

- **Round Robin** -- Requests are distributed to backends in sequential order. Simple and effective when backends have similar capacity.

- **Least Connections** -- New requests go to the backend with the fewest active connections. Better when request processing times vary.

- **Source IP Hash** -- The client IP is hashed to deterministically select a backend. Provides a basic form of session persistence without cookies.

- **Weighted Round Robin** -- Each backend is assigned a weight proportional to its capacity. A server with weight 3 receives three times the traffic of a server with weight 1.

- **Random** -- Selects a backend at random. Surprisingly effective at large scale, especially in the power-of-two-choices variant, which picks two backends at random and sends the request to the less loaded of the two.

- **URI Hash** -- The request URI is hashed to select a backend. Useful for cache-friendly routing where you want the same URL to always hit the same server.
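The first three algorithms can be sketched in a few lines of Python. This is illustrative only -- real balancers implement these in C with per-connection bookkeeping -- and the backend names and connection counts are made up:

```python
import hashlib
from itertools import cycle

backends = ["web1", "web2", "web3"]

# Round robin: walk the list in order, wrapping around.
rr = cycle(backends)
def round_robin():
    return next(rr)

# Least connections: pick the backend with the fewest active connections.
active = {"web1": 12, "web2": 3, "web3": 7}
def least_conn():
    return min(active, key=active.get)

# Source IP hash: the same client IP always maps to the same backend.
def source_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

print([round_robin() for _ in range(4)])  # wraps: web1, web2, web3, web1
print(least_conn())                       # web2 (only 3 active connections)
print(source_hash("203.0.113.7") == source_hash("203.0.113.7"))  # deterministic: True
```

Note how source hashing gives persistence for free, at the cost of uneven distribution when a few client IPs dominate.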

---

## HAProxy on FreeBSD

HAProxy is the most widely deployed open-source load balancer and reverse proxy. It was purpose-built for high-availability, high-throughput load balancing and handles millions of connections per second in production deployments worldwide. On FreeBSD, HAProxy uses kqueue for event-driven I/O, giving it excellent performance characteristics.

### Installation

```sh
pkg install haproxy
sysrc haproxy_enable=YES
```

The main configuration file lives at /usr/local/etc/haproxy.conf.

### Basic Configuration

HAProxy configuration is divided into four sections: global (process-wide settings), defaults (default values for all proxies), frontend (how incoming connections are received), and backend (where traffic is forwarded).

Here is a minimal configuration that load balances HTTP traffic across three backend web servers:


```
global
    log /var/run/log local0
    maxconn 4096
    user nobody
    group nobody
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_front
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server web1 10.0.1.10:8080 check inter 5s fall 3 rise 2
    server web2 10.0.1.11:8080 check inter 5s fall 3 rise 2
    server web3 10.0.1.12:8080 check inter 5s fall 3 rise 2
```

Each server line defines a backend server with health checking enabled. The check keyword activates health checks, inter 5s sets the check interval to 5 seconds, fall 3 means a server is marked down after 3 consecutive failures, and rise 2 means it is marked up again after 2 consecutive successes.
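The fall/rise logic is a small state machine. This Python sketch mirrors the semantics (it is a model of the behavior, not HAProxy's actual code):

```python
class HealthState:
    """Tracks a backend's UP/DOWN state with fall/rise thresholds,
    modeling HAProxy's `check ... fall 3 rise 2` semantics."""
    def __init__(self, fall=3, rise=2):
        self.fall, self.rise = fall, rise
        self.up = True
        self.streak = 0  # consecutive results contradicting the current state

    def record(self, check_ok: bool) -> bool:
        if check_ok == self.up:
            self.streak = 0          # result agrees with current state
            return self.up
        self.streak += 1
        threshold = self.fall if self.up else self.rise
        if self.streak >= threshold:
            self.up = not self.up    # flip after enough contrary results
            self.streak = 0
        return self.up

s = HealthState(fall=3, rise=2)
results = [s.record(ok) for ok in [False, False, False, True, True]]
print(results)  # [True, True, False, False, True]
```

Two failures in a row leave the server UP; the third marks it DOWN, and two successes bring it back -- exactly what `fall 3 rise 2` configures.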

### HAProxy Stats Page

HAProxy includes a built-in statistics dashboard that shows real-time connection counts, error rates, and backend health. Add this to your configuration:


```
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
    stats auth admin:your_secure_password
```

Access it at http://your-server:8404/stats. In production, restrict access to this page using your [PF firewall](/blog/pf-firewall-freebsd/) rules or limit it to trusted networks.

### Starting HAProxy

Validate the configuration and start the service:

```sh
haproxy -c -f /usr/local/etc/haproxy.conf
service haproxy start
```

The -c flag performs a syntax check without starting the process. Always validate before restarting in production.

---

## NGINX as a Load Balancer

NGINX is primarily known as a web server, but its reverse proxy and load balancing capabilities are production-grade. If you already run [NGINX on FreeBSD](/blog/nginx-freebsd-production-setup/) as your web server, using it as a load balancer avoids adding another moving part to your infrastructure.

### Installation

```sh
pkg install nginx
sysrc nginx_enable=YES
```

### Upstream Blocks and Load Balancing Methods

NGINX defines backend server pools using upstream blocks. The load balancing method is set within the block:

```nginx
upstream web_backend {
    # Method: least_conn, ip_hash, or round-robin (default)
    least_conn;
    server 10.0.1.10:8080 weight=3;
    server 10.0.1.11:8080 weight=2;
    server 10.0.1.12:8080 weight=1;
    server 10.0.1.13:8080 backup;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
    }
}
```

The backup directive marks a server that only receives traffic when all primary servers are down. The weight directive controls traffic distribution -- server 10.0.1.10 receives three times as much traffic as 10.0.1.12. The proxy_next_upstream directive tells NGINX to retry the request on another backend if the first one returns an error or times out.
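NGINX's weighted round-robin is a "smooth" variant: instead of sending a burst of three requests to the weight-3 server, it interleaves picks so the heaviest server never monopolizes consecutive requests. A Python sketch of the idea (server names are illustrative):

```python
def smooth_wrr(weights: dict, n: int) -> list:
    """Smooth weighted round robin: each turn, add every server's weight
    to its running score, pick the highest score, then subtract the total
    weight from the winner. Over any full cycle, picks are exactly
    proportional to the weights, but interleaved."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s, w in weights.items():
            current[s] += w
        winner = max(current, key=current.get)
        current[winner] -= total
        picks.append(winner)
    return picks

picks = smooth_wrr({"web1": 3, "web2": 2, "web3": 1}, 6)
print(picks)
print({s: picks.count(s) for s in ("web1", "web2", "web3")})  # 3:2:1, interleaved
```

Over one full cycle of six requests, the 3:2:1 weights are honored exactly, but web1 never receives three requests in a row.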

### NGINX Health Checks

The open-source version of NGINX performs passive health checks by monitoring responses from backend servers. If a server returns errors or times out, NGINX temporarily stops sending it traffic:

```nginx
upstream web_backend {
    server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 max_fails=3 fail_timeout=30s;
}
```

After 3 failures within 30 seconds, the server is marked as unavailable for the next 30 seconds. NGINX Plus (the commercial version) adds active health checks that proactively poll backends, similar to HAProxy.
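The max_fails/fail_timeout behavior amounts to a sliding failure window plus a penalty window. A hypothetical Python model of those semantics:

```python
class PassiveHealth:
    """Models NGINX's max_fails/fail_timeout: if max_fails failures land
    within fail_timeout seconds, the server is skipped for fail_timeout
    seconds. Times are plain floats for the sketch."""
    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.failures = []      # timestamps of recent failures
        self.down_until = 0.0

    def available(self, now: float) -> bool:
        return now >= self.down_until

    def report_failure(self, now: float):
        # Keep only failures still inside the window, then record this one.
        self.failures = [t for t in self.failures if now - t < self.fail_timeout]
        self.failures.append(now)
        if len(self.failures) >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.failures.clear()

srv = PassiveHealth()
for t in (0, 5, 10):          # three failures within 30 seconds
    srv.report_failure(t)
print(srv.available(20))      # False: marked down at t=10 until t=40
print(srv.available(45))      # True: the penalty window has expired
```

The key limitation is visible in the model: a failure is only ever *reported* when a real client request hits the bad backend, which is why passive checks always let some errors through.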

---

## relayd: The Native BSD Load Balancer

relayd is a load balancer and application layer gateway that ships with OpenBSD and is available on FreeBSD through ports. It was designed specifically for BSD systems and integrates tightly with [PF](/blog/pf-firewall-freebsd/) for transparent redirection. If you prefer a BSD-native tool and your load balancing needs are straightforward, relayd is worth considering.

### Installation

```sh
pkg install relayd
sysrc relayd_enable=YES
```

### relayd.conf Configuration

The relayd configuration file uses a clean, readable syntax. Here is an example that load balances HTTP traffic across three backends with health checking:


```
# /usr/local/etc/relayd.conf

# Macros
web1 = "10.0.1.10"
web2 = "10.0.1.11"
web3 = "10.0.1.12"

# Health check interval (seconds), timeout (milliseconds), worker processes
interval 10
timeout 1000
prefork 5

# Define the backend table
table <webhosts> { $web1 $web2 $web3 }

# Layer 4: redirect incoming traffic to the backend pool (via PF)
redirect "web_traffic" {
    listen on 0.0.0.0 port 80
    forward to <webhosts> port 8080 mode roundrobin check http "/health" code 200
}

# Layer 7: relay with header manipulation (use instead of the redirect
# above -- both cannot listen on the same address and port)
http protocol "http_proto" {
    match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
}

relay "web_relay" {
    listen on 0.0.0.0 port 80
    protocol "http_proto"
    forward to <webhosts> port 8080 mode loadbalance check http "/health" code 200
}
```

The redirect block operates at Layer 4 (relayd installs PF rules under the hood), while the relay block operates at Layer 7 and can inspect and modify HTTP traffic through its protocol definition. The mode keyword sets the balancing algorithm: roundrobin, loadbalance, hash, source-hash, or random.

Start relayd:

```sh
service relayd start
```

relayd is simpler than HAProxy or NGINX, with fewer features, but its tight integration with PF and the BSD network stack makes it efficient for straightforward deployments.

---

## SSL/TLS Termination at the Load Balancer

SSL termination means the load balancer handles TLS encryption/decryption so that backend servers receive plain HTTP. This simplifies certificate management (one certificate on the load balancer instead of one per backend) and offloads CPU-intensive cryptographic operations.

### HAProxy SSL Termination


```
frontend https_front
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/example.com.pem
    http-request set-header X-Forwarded-Proto https
    # Redirect HTTP to HTTPS
    bind *:80
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 10.0.1.10:8080 check
    server web2 10.0.1.11:8080 check
    server web3 10.0.1.12:8080 check
```

The .pem file must contain the certificate, any intermediate certificates, and the private key concatenated together. HAProxy reads them from a single file.

### NGINX SSL Termination

```nginx
upstream web_backend {
    least_conn;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /usr/local/etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /usr/local/etc/nginx/ssl/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

For more details on NGINX SSL configuration, see our [NGINX production setup guide](/blog/nginx-freebsd-production-setup/).

---

## Session Persistence

Some applications require that a user's requests consistently reach the same backend server, typically because session state is stored locally. There are several approaches to session persistence.

### Source IP Affinity

The simplest method. The client's IP address determines which backend receives the request.

**HAProxy:**


```
backend web_servers
    balance source
    hash-type consistent
    server web1 10.0.1.10:8080 check
    server web2 10.0.1.11:8080 check
```

**NGINX:**

```nginx
upstream web_backend {
    ip_hash;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
```

Source IP affinity breaks when clients are behind a shared NAT or proxy, because many users share the same source IP. It also rebalances poorly when backends are added or removed.
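The rebalancing problem is why HAProxy's hash-type consistent matters: with naive modulo hashing, removing one backend remaps most clients, while a consistent hash ring moves only the keys that belonged to the removed node. A small Python comparison (backend names and client IP range are illustrative):

```python
import hashlib
from bisect import bisect

def h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def modulo_pick(servers, key):
    return servers[h(key) % len(servers)]

def ring(servers, vnodes=100):
    """Consistent hash ring; virtual nodes smooth out the distribution."""
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def ring_pick(points, key):
    keys = [p for p, _ in points]
    idx = bisect(keys, h(key)) % len(points)
    return points[idx][1]

clients = [f"10.1.{i // 256}.{i % 256}" for i in range(2000)]
full = ["web1", "web2", "web3"]
less = ["web1", "web2"]          # web3 removed

moved_mod = sum(modulo_pick(full, c) != modulo_pick(less, c) for c in clients)
r_full, r_less = ring(full), ring(less)
moved_ring = sum(ring_pick(r_full, c) != ring_pick(r_less, c) for c in clients)

print(f"modulo: {moved_mod / len(clients):.0%} of clients remapped")
print(f"consistent: {moved_ring / len(clients):.0%} of clients remapped")
```

With modulo hashing roughly two thirds of clients land on a different backend after the removal; with the ring, only the share that belonged to the removed node (about one third) moves.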

### Cookie-Based Persistence

The load balancer inserts a cookie that identifies the backend server. This is more reliable than source IP because it works regardless of NAT.

**HAProxy:**


```
backend web_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 10.0.1.10:8080 check cookie s1
    server web2 10.0.1.11:8080 check cookie s2
    server web3 10.0.1.12:8080 check cookie s3
```

HAProxy will insert a SERVERID cookie with value s1, s2, or s3. Subsequent requests from the same client are routed to the server whose cookie value matches.

**NGINX:**

The open-source version of NGINX does not support cookie-based sticky sessions natively. NGINX Plus includes the sticky cookie directive. For open-source NGINX, use ip_hash or an external session store (Redis, Memcached) shared by all backends.

### External Session Store

The most scalable approach is to avoid server-affinity entirely. Store sessions in a shared backend like Redis or a database. All application servers read and write session data from the same store. This lets the load balancer use any algorithm without worrying about session state, and simplifies scaling because you can add or remove backends freely.
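The pattern is simple: every application server talks to the same store, so any backend can serve any request. A minimal Python sketch, using a plain dict where production code would use a Redis or database client:

```python
import secrets

class SessionStore:
    """Shared session store. The dict stands in for Redis here; a real
    deployment would swap in a Redis client with a TTL on each key."""
    def __init__(self):
        self._data = {}

    def create(self, user: str) -> str:
        sid = secrets.token_hex(16)       # opaque session ID for the cookie
        self._data[sid] = {"user": user}
        return sid

    def get(self, sid: str):
        return self._data.get(sid)

# Any backend holding a connection to the same store can resolve the
# session, so the load balancer is free to use any algorithm.
store = SessionStore()
sid = store.create("alice")
print(store.get(sid)["user"])  # alice
```

The session ID travels in a cookie as before, but it now identifies a record in the shared store rather than a particular backend server.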

---

## Health Checks

Health checks are what separate a load balancer from a simple reverse proxy. Without health checks, failed backends continue receiving traffic and users see errors. There are three types.

### TCP Health Checks

The load balancer opens a TCP connection to the backend. If the connection succeeds, the server is considered healthy. This is the simplest check and works for any TCP service.

**HAProxy:**


```
server web1 10.0.1.10:8080 check inter 5s fall 3 rise 2
```

**relayd:**


```
forward to <webhosts> port 8080 check tcp
```

### HTTP Health Checks

The load balancer sends an HTTP request and checks the response code. This verifies that the application is actually working, not just that the port is open.

**HAProxy:**


```
backend web_servers
    option httpchk GET /health
    http-check expect status 200
    server web1 10.0.1.10:8080 check
```

**relayd:**


```
forward to <webhosts> port 8080 check http "/health" code 200
```

### Custom Health Checks in HAProxy

HAProxy supports advanced health check logic with multiple expectations:


```
backend web_servers
    option httpchk
    http-check send meth GET uri /health ver HTTP/1.1 hdr Host example.com
    http-check expect status 200
    http-check expect hdr name "content-type" value "application/json"
    server web1 10.0.1.10:8080 check inter 10s fall 3 rise 2
```

Your /health endpoint should check dependencies -- database connectivity, disk space, cache availability -- and return a non-200 status if any are failing.
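A /health endpoint along those lines might look like this Python sketch -- the dependency checks are hypothetical placeholders, and real code would query the actual database and cache:

```python
import json
import shutil

def check_database() -> bool:
    # Placeholder: a real check would run e.g. SELECT 1 against the database.
    return True

def check_disk(min_free_bytes: int = 1 << 20) -> bool:
    # Require at least 1 MB free on / (a deliberately low floor for the sketch).
    return shutil.disk_usage("/").free >= min_free_bytes

def health():
    """Return (status_code, body). 200 only when every dependency passes,
    so the load balancer's `http-check expect status 200` does the rest."""
    checks = {"database": check_database(), "disk": check_disk()}
    status = 200 if all(checks.values()) else 503
    return status, json.dumps(checks)

status, body = health()
print(status, body)
```

Returning the per-dependency results in the body costs nothing and makes a failing health check immediately diagnosable from the load balancer's logs.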

---

## High Availability for the Load Balancer Itself

A single load balancer is a single point of failure. FreeBSD provides two mechanisms to make the load balancer itself highly available: CARP and pfsync.

CARP (Common Address Redundancy Protocol)

CARP allows multiple FreeBSD hosts to share a virtual IP address. One host is the master and responds to traffic on the virtual IP. If the master fails, a backup host takes over within seconds. This is how you build an active/passive load balancer pair.

On modern FreeBSD (10.0 and later), CARP is configured as an option on an address assigned to the physical interface rather than on a separate carp(4) clone device. On the primary load balancer (assuming the shared subnet lives on em0):

```sh
kldload carp
ifconfig em0 vhid 1 advskew 0 pass secretpassword alias 10.0.1.1/32
```

On the secondary load balancer:

```sh
kldload carp
ifconfig em0 vhid 1 advskew 100 pass secretpassword alias 10.0.1.1/32
```

Both machines share the virtual IP 10.0.1.1. The advskew value determines priority -- the lower value becomes master. When the primary goes down, the secondary detects the absence of CARP advertisements and takes over the virtual IP.

To persist this across reboots, load the carp module from /boot/loader.conf and configure the alias in /etc/rc.conf:

```sh
# /boot/loader.conf
carp_load="YES"

# /etc/rc.conf
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass secretpassword alias 10.0.1.1/32"
```

### pfsync for State Synchronization

If you use PF for connection tracking (which relayd relies on), pfsync synchronizes the state table between the primary and secondary machines. This means active connections survive a failover without being dropped.

```sh
ifconfig pfsync0 syncdev em1 syncpeer 10.0.2.2 up
```

Where em1 is a dedicated sync interface between the two load balancers. For a deeper dive into CARP, pfsync, and failover configurations, see our [FreeBSD high availability guide](/blog/freebsd-high-availability/).

---

## Comparison: HAProxy vs NGINX vs relayd

| Feature | HAProxy | NGINX (open-source) | relayd |
|---|---|---|---|
| **Layer 4 load balancing** | Yes | Yes (stream module) | Yes |
| **Layer 7 load balancing** | Yes | Yes | Limited |
| **Active health checks** | Yes | Passive only (Plus has active) | Yes |
| **SSL termination** | Yes | Yes | Yes |
| **Cookie-based persistence** | Yes | Plus only | No |
| **Stats/monitoring dashboard** | Built-in | Stub status only | relayctl |
| **Configuration complexity** | Medium | Low-medium | Low |
| **Throughput (high concurrency)** | Excellent | Excellent | Good |
| **WebSocket support** | Yes | Yes | Limited |
| **HTTP/2 to backends** | Yes | gRPC only (grpc_pass) | No |
| **Native BSD integration** | No | No | Yes (PF, CARP) |
| **Community/docs** | Very large | Very large | Small (BSD-focused) |
| **FreeBSD pkg available** | Yes | Yes | Yes |

**When to use which:**

- **HAProxy** -- Best for dedicated load balancing. Superior health checking, real-time stats dashboard, cookie-based persistence, and the most granular traffic management. The right choice when load balancing is the primary job.

- **NGINX** -- Best when you need a load balancer and a web server in one process, or when your team already knows NGINX. If you follow our [guide to choosing a web server](/blog/best-web-server-freebsd/), you may already have NGINX deployed.

- **relayd** -- Best for simple setups on BSD systems where you want minimal dependencies and tight PF integration. If you already manage your network with PF and CARP, relayd fits naturally.

---

## Complete HAProxy Configuration Example

This is a production-ready HAProxy configuration for a web application with multiple backend pools, SSL termination, health checks, session persistence, rate limiting, and a stats page.

```
# /usr/local/etc/haproxy.conf

global
    log /var/run/log local0 info
    maxconn 10000
    user nobody
    group nobody
    daemon
    tune.ssl.default-dh-param 2048
    ssl-default-bind-options ssl-min-ver TLSv1.2
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout http-request 10s
    timeout queue 30s
    default-server inter 5s fall 3 rise 2

# Stats dashboard
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
    stats auth admin:change_this_password
    stats show-legends

# HTTP to HTTPS redirect
frontend http_redirect
    bind *:80
    http-request redirect scheme https code 301

# Main HTTPS frontend
frontend https_front
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/example.com.pem

    # Security headers
    http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-Frame-Options DENY

    # Rate limiting: track requests per source IP
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }

    # Route based on path
    acl is_api path_beg /api/
    acl is_static path_beg /static/ /images/ /css/ /js/
    acl is_websocket hdr(Upgrade) -i websocket

    use_backend api_servers if is_api
    use_backend static_servers if is_static
    use_backend websocket_servers if is_websocket
    default_backend app_servers

# Application servers with cookie-based persistence
backend app_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache httponly secure
    option httpchk GET /health
    http-check expect status 200
    server app1 10.0.1.10:8080 check cookie app1 weight 3
    server app2 10.0.1.11:8080 check cookie app2 weight 3
    server app3 10.0.1.12:8080 check cookie app3 weight 2
    server app4 10.0.1.13:8080 check cookie app4 backup

# API servers with least-connections balancing
backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200
    http-request set-header X-Forwarded-Proto https
    server api1 10.0.2.10:9090 check
    server api2 10.0.2.11:9090 check
    server api3 10.0.2.12:9090 check

# Static content servers
backend static_servers
    balance roundrobin
    option httpchk GET /static/health.txt
    http-check expect status 200
    server static1 10.0.3.10:8080 check
    server static2 10.0.3.11:8080 check

# WebSocket servers
backend websocket_servers
    balance source
    option httpchk GET /ws/health
    timeout server 3600s
    timeout tunnel 3600s
    server ws1 10.0.4.10:8080 check
    server ws2 10.0.4.11:8080 check
```

This configuration demonstrates content-based routing (API, static files, WebSockets each go to different backend pools), cookie-based session persistence for the application tier, rate limiting per source IP, and extended timeouts for WebSocket connections.

---

## Complete NGINX Load Balancer Configuration Example

Here is the equivalent NGINX configuration for a multi-pool load balancer with SSL termination:

```nginx
# /usr/local/etc/nginx/nginx.conf

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    use kqueue;
    multi_accept on;
}

http {
    # Logging
    log_format lb '$remote_addr - $upstream_addr [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" '
                  'upstream_response_time=$upstream_response_time';

    access_log /var/log/nginx/lb_access.log lb;
    error_log /var/log/nginx/lb_error.log warn;

    # Timeouts
    proxy_connect_timeout 5s;
    proxy_read_timeout 30s;
    proxy_send_timeout 30s;

    # Buffers
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    # Backend pools
    upstream app_backend {
        least_conn;
        server 10.0.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
        server 10.0.1.12:8080 weight=2 max_fails=3 fail_timeout=30s;
        server 10.0.1.13:8080 backup;
    }

    upstream api_backend {
        least_conn;
        server 10.0.2.10:9090 max_fails=3 fail_timeout=30s;
        server 10.0.2.11:9090 max_fails=3 fail_timeout=30s;
        server 10.0.2.12:9090 max_fails=3 fail_timeout=30s;
    }

    upstream static_backend {
        server 10.0.3.10:8080 max_fails=3 fail_timeout=30s;
        server 10.0.3.11:8080 max_fails=3 fail_timeout=30s;
    }

    upstream websocket_backend {
        ip_hash;
        server 10.0.4.10:8080;
        server 10.0.4.11:8080;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=global:10m rate=10r/s;

    # HTTP to HTTPS redirect
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    # Main HTTPS server
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate /usr/local/etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /usr/local/etc/nginx/ssl/example.com.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        ssl_prefer_server_ciphers on;

        # Security headers
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        add_header X-Content-Type-Options nosniff always;
        add_header X-Frame-Options DENY always;

        # Rate limiting
        limit_req zone=global burst=20 nodelay;

        # API traffic
        location /api/ {
            proxy_pass http://api_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_next_upstream error timeout http_502 http_503 http_504;
        }

        # Static content
        location ~* ^/(static|images|css|js)/ {
            proxy_pass http://static_backend;
            proxy_set_header Host $host;
            # Note: proxy_cache_valid only takes effect once a
            # proxy_cache_path zone is defined and enabled with proxy_cache.
            proxy_cache_valid 200 1h;
            expires 1d;
        }

        # WebSocket connections
        location /ws/ {
            proxy_pass http://websocket_backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 3600s;
        }

        # Default: application servers
        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_next_upstream error timeout http_502 http_503 http_504;
        }
    }

    # Stub status for monitoring
    server {
        listen 127.0.0.1:8080;

        location /nginx_status {
            stub_status;
            allow 127.0.0.1;
            deny all;
        }
    }
}
```

Test the configuration before reloading:

```sh
nginx -t
service nginx reload
```

---

## Monitoring Load Balancers

A load balancer that you cannot observe is a liability. At minimum, you should monitor:

- **Backend health** -- How many backends are up vs down at any given time.

- **Request rate** -- Requests per second hitting the load balancer.

- **Error rate** -- Percentage of responses that are 4xx or 5xx.

- **Latency** -- Time from receiving the client request to delivering the backend response.

- **Connection counts** -- Current active connections and connection queue depth.

### HAProxy Monitoring

HAProxy's built-in stats page provides all of these metrics visually. For programmatic access, enable the stats socket:

```
global
    stats socket /var/run/haproxy.sock mode 660 level admin
```

Then query it:

```sh
# socat is available via: pkg install socat
echo "show stat" | socat stdio /var/run/haproxy.sock
echo "show info" | socat stdio /var/run/haproxy.sock
```

HAProxy also exposes a Prometheus-compatible metrics endpoint when built with the Prometheus exporter (compiled in by default in HAProxy 2.4 and later):


```
frontend prometheus
    bind *:8405
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```

### NGINX Monitoring

NGINX's stub_status module (shown in the configuration above) provides basic metrics. For richer data, use the nginx-module-vts third-party module or export metrics to Prometheus using the nginx-prometheus-exporter.

```sh
pkg install nginx-prometheus-exporter
```

### relayd Monitoring

Use relayctl to inspect relayd's state:

```sh
relayctl show summary
relayctl show redirects
relayctl show relays
relayctl show hosts
```

These commands show which backends are up, current connection counts, and relay statistics. For alerting, script these commands with a cron job and trigger alerts when backends go down.

---

## Frequently Asked Questions

### Which load balancer should I use on FreeBSD?

Use **HAProxy** if load balancing is the primary function of the server and you need advanced features like cookie-based persistence, detailed statistics, or content-based routing across many backend pools. Use **NGINX** if you want a combined web server and load balancer, or your team already manages NGINX. Use **relayd** for simple, BSD-native setups that integrate with PF and CARP without external dependencies.

### Can I load balance non-HTTP traffic on FreeBSD?

Yes. Both HAProxy and NGINX support Layer 4 (TCP/UDP) load balancing for any protocol. In HAProxy, set mode tcp in your frontend and backend. In NGINX, use the stream module. relayd also supports generic TCP relay. You can load balance database connections, mail servers, DNS, or any other TCP/UDP service.

### How do I handle SSL certificates for multiple domains?

HAProxy supports SNI (Server Name Indication). Place certificate files in a directory and point HAProxy to it:


```
bind *:443 ssl crt /usr/local/etc/haproxy/certs/
```

HAProxy will automatically select the correct certificate based on the requested hostname. NGINX uses separate server blocks with different ssl_certificate directives for each domain.

### What is the performance difference between HAProxy and NGINX for load balancing?

For most deployments, the performance difference is negligible. Both handle tens of thousands of concurrent connections efficiently on FreeBSD using kqueue. HAProxy is marginally more efficient for pure proxying workloads because it was purpose-built for that role. NGINX may use slightly more memory per connection because of its more general-purpose architecture. In practice, your backend application will be the bottleneck long before the load balancer is.

### How do I test my load balancer configuration?

Start with haproxy -c -f /usr/local/etc/haproxy.conf or nginx -t to validate syntax. Then use curl to verify routing:

```sh
# Check which backend handled the request
curl -v http://example.com/

# Verify sticky sessions
curl -c cookies.txt -b cookies.txt http://example.com/
curl -c cookies.txt -b cookies.txt http://example.com/

# Load test
pkg install vegeta
echo "GET http://example.com/" | vegeta attack -duration=30s -rate=100 | vegeta report
```

Monitor the stats page (HAProxy) or access logs (NGINX) during testing to confirm traffic is distributed as expected.

### How many backend servers can a FreeBSD load balancer handle?

There is no hard limit. HAProxy and NGINX both support hundreds of backend servers per pool. The practical limits are the number of file descriptors (increase kern.maxfiles and kern.maxfilesperproc in sysctl.conf), available memory for connection tracking, and network bandwidth. A single FreeBSD load balancer can realistically handle dozens of backend pools with hundreds of servers each.
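For reference, those limits can be raised in /etc/sysctl.conf. The values below are illustrative starting points, not tuned recommendations; size them to roughly twice your expected peak connection count, since a proxy holds two file descriptors per proxied connection (one client-side, one server-side):

```
# /etc/sysctl.conf
kern.maxfiles=200000
kern.maxfilesperproc=100000
```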

### Should I use active or passive health checks?

Active health checks (HAProxy, relayd) are preferred because they detect failures proactively -- before a user request hits a dead backend. Passive checks (NGINX open-source) only detect failures after a real user request fails. If you use NGINX and need active checks, either upgrade to NGINX Plus or add an external health checker like Consul or custom scripts that remove unhealthy backends from the upstream configuration.

---

## Summary

FreeBSD provides a strong foundation for load balancing thanks to its kqueue event system, efficient network stack, and native support for CARP failover. The three main tools -- HAProxy, NGINX, and relayd -- cover the full spectrum from simple to complex deployments.

Start with a single load balancer and basic round-robin distribution. Add health checks immediately -- they are non-negotiable for production. Add SSL termination at the load balancer to simplify certificate management. When your traffic justifies it, deploy a second load balancer with CARP for [high availability](/blog/freebsd-high-availability/).

The configurations in this guide are production-ready starting points. Adjust backend counts, health check intervals, timeouts, and balancing algorithms based on your actual traffic patterns and application behavior.