review·2026-04-09·13 min read

HAProxy on FreeBSD: Load Balancer Review

In-depth review of HAProxy on FreeBSD: installation, Layer 4/7 balancing, health checks, SSL termination, performance, and comparison with NGINX and relayd.


HAProxy is the most widely deployed open-source load balancer in production today. It sits in front of infrastructure at GitHub, Stack Overflow, Reddit, Airbnb, and thousands of smaller operations that need reliable traffic distribution without commercial licensing costs. On FreeBSD, HAProxy benefits from kqueue-based event notification, a mature TCP/IP stack, and the same networking foundation that powers Netflix's CDN edge nodes. This review covers HAProxy's capabilities, FreeBSD-specific installation and tuning, Layer 4 and Layer 7 balancing configurations, health checks, SSL termination, and how it compares with NGINX and relayd for load balancing duties.

What HAProxy Does

HAProxy is a TCP/HTTP reverse proxy and load balancer. It accepts incoming connections, evaluates routing rules, and forwards traffic to backend servers. It does not serve static files, execute CGI scripts, or cache content. Its single purpose is proxying and balancing, and it executes that purpose exceptionally well.

Key capabilities:

  • Layer 4 (TCP) load balancing -- route any TCP traffic (databases, mail, custom protocols) based on IP and port without protocol inspection.
  • Layer 7 (HTTP) load balancing -- inspect HTTP headers, cookies, URL paths, and query strings to make routing decisions.
  • Health checks -- active checks against backends using TCP connect, HTTP request/response, or custom scripts. Unhealthy backends are removed from rotation automatically.
  • SSL/TLS termination -- terminate TLS at the proxy, offloading cryptographic work from backend servers.
  • Stick tables -- in-memory key-value tables for session persistence, rate limiting, and connection tracking.
  • ACLs and content switching -- route requests to different backend pools based on any combination of request attributes.
  • Runtime API -- a Unix socket or TCP interface for querying stats, draining servers, changing weights, and managing backends without restarts.
  • Multithreading -- HAProxy 2.x and later use multiple threads within a single process, scaling across CPU cores on FreeBSD.
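
The runtime API above is only reachable once a stats socket is declared in the global section; a minimal sketch (the socket path and permissions are assumptions, chosen to match the drain example later in this review):

```shell
global
    # Admin-level runtime socket; "level admin" permits state changes, not just reads
    stats socket /var/run/haproxy/admin.sock mode 600 level admin
```

Commands such as show stat or set server can then be piped to this socket with socat or nc.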

HAProxy does not support general-purpose UDP load balancing. Community releases handle UDP only in narrow cases, such as syslog forwarding (added in 2.3) and QUIC, which runs over UDP (added in 2.6). For pure UDP balancing, consider relayd or a dedicated DNS load balancer.

Installation on FreeBSD

HAProxy is available as a binary package and through the ports tree. The binary package is the fastest path to a working installation.

Binary Package Installation

sh
pkg install haproxy

This installs HAProxy at /usr/local/sbin/haproxy, with its configuration read from /usr/local/etc/haproxy.conf. As of early 2026, the FreeBSD package repository ships HAProxy 2.9.x for FreeBSD 14.x.

Enable HAProxy in /etc/rc.conf:

sh
sysrc haproxy_enable="YES"

Ports Installation (Custom Build Options)

If you need specific build options (Lua scripting, Prometheus exporter, QUIC support), build from ports:

sh
cd /usr/ports/net/haproxy
make config
make install clean

The ports build lets you enable or disable OpenSSL, Lua 5.4, Prometheus exporter, and device-atlas integration. For most deployments, the binary package includes everything you need.

Verify the Installation

sh
haproxy -vv

This prints the version, build options, and available features. Confirm that USE_OPENSSL=1 appears if you plan to terminate TLS at HAProxy.

Configuration Fundamentals

HAProxy's configuration file lives at /usr/local/etc/haproxy.conf by default. The configuration is divided into four sections: global, defaults, frontend, and backend.

Minimal Working Configuration

sh
cat > /usr/local/etc/haproxy.conf << 'EOF'
global
    log /var/run/log local0
    maxconn 10000
    nbthread 4
    user nobody
    group nobody
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    retries 3

frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin
    option httpchk GET /health
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
    server web3 10.0.0.13:8080 check
EOF

Validate the configuration before starting:

sh
haproxy -c -f /usr/local/etc/haproxy.conf

Start HAProxy:

sh
service haproxy start

Global Section Tuning for FreeBSD

The global section controls process-level behavior. On FreeBSD, the key tunables are:

  • nbthread -- set this to the number of CPU cores available. HAProxy uses kqueue on FreeBSD for event-driven I/O, and each thread handles its own kqueue instance.
  • maxconn -- the maximum number of concurrent connections. Each connection consumes approximately 17 KB of memory for HTTP mode, more with SSL. A machine with 4 GB of RAM can comfortably handle 100,000+ connections.
  • tune.ssl.default-dh-param 2048 -- set the DH parameter size for SSL. Use 2048 or higher.
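
Taken together, a global section for a 4-core machine with 4 GB of RAM might look like this (the values are illustrative and should be sized to your hardware):

```shell
global
    log /var/run/log local0
    nbthread 4                        # one thread per CPU core
    maxconn 100000                    # roughly 1.7 GB at 17 KB per HTTP connection
    tune.ssl.default-dh-param 2048
    daemon
```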

FreeBSD kernel tuning for high connection counts:

sh
sysctl kern.ipc.somaxconn=65535
sysctl net.inet.tcp.msl=3000

Add these to /etc/sysctl.conf for persistence.

Layer 4 vs Layer 7 Balancing

Layer 4 (TCP Mode)

Use TCP mode when you need to load-balance non-HTTP protocols or when you want maximum throughput with minimal overhead. In TCP mode, HAProxy does not inspect the payload.

shell
defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend postgres_front
    bind *:5432
    default_backend postgres_back

backend postgres_back
    balance leastconn
    option tcp-check
    server pg1 10.0.0.21:5432 check
    server pg2 10.0.0.22:5432 check

TCP mode is appropriate for PostgreSQL, MySQL, Redis, SMTP, IMAP, and any protocol where HAProxy does not need to understand the content. The leastconn algorithm works well for database connections, where some queries take milliseconds and others take seconds.

Layer 7 (HTTP Mode)

HTTP mode gives you content-aware routing. You can split traffic by URL path, hostname, headers, or cookies.

shell
frontend http_front
    bind *:80
    acl is_api path_beg /api
    acl is_static path_beg /static
    acl is_admin hdr(host) -i admin.example.com
    use_backend api_servers if is_api
    use_backend static_servers if is_static
    use_backend admin_servers if is_admin
    default_backend web_servers

backend api_servers
    balance roundrobin
    server api1 10.0.0.31:8080 check
    server api2 10.0.0.32:8080 check

backend static_servers
    balance uri
    server static1 10.0.0.41:8080 check
    server static2 10.0.0.42:8080 check

backend admin_servers
    balance roundrobin
    # admin pool (address is illustrative); required so use_backend resolves
    server admin1 10.0.0.51:8080 check

backend web_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.11:8080 check cookie s1
    server web2 10.0.0.12:8080 check cookie s2

The cookie SERVERID insert indirect nocache line implements cookie-based session persistence. HAProxy inserts a SERVERID cookie in the response, and subsequent requests from the same client are routed to the same backend server. This is essential for applications that store session state on the server rather than in a shared store.

Health Checks

HAProxy's health checking is one of its strongest features. Out of the box, it supports several check types.

TCP Connect Check

The simplest check. HAProxy opens a TCP connection to the backend port. If the connection succeeds, the server is healthy:

shell
server web1 10.0.0.11:8080 check inter 5s fall 3 rise 2

  • inter 5s -- check every 5 seconds
  • fall 3 -- mark as down after 3 consecutive failures
  • rise 2 -- mark as up after 2 consecutive successes

With these values, a dead server leaves rotation after at most 15 seconds (3 failed checks at a 5-second interval) and returns after 10 seconds of passing checks.

HTTP Health Check

More reliable for web services. HAProxy sends an HTTP request and validates the response:

shell
backend http_back
    option httpchk GET /health HTTP/1.1\r\nHost:\ example.com
    http-check expect status 200
    server web1 10.0.0.11:8080 check

You can also check response body content:

shell
http-check expect string "status":"ok"

Agent Checks

HAProxy can query an external agent running on the backend server. The agent returns a weight, status, or drain command. This is useful for application-aware health reporting:

shell
server web1 10.0.0.11:8080 check agent-check agent-port 9999 agent-inter 10s

The agent is a simple TCP service that returns a string like up 75% (set weight to 75%) or drain (stop sending new connections but finish existing ones).
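
The protocol is simple enough that an agent can be a few lines of shell served by inetd(8) on the agent port. The sketch below is a hypothetical responder that derates the reported weight as the load average approaches the core count; the agent_weight function and its thresholds are assumptions, not part of HAProxy:

```sh
#!/bin/sh
# Hypothetical agent-check responder: prints one line such as "up 87%".
# Wire it to agent-port 9999 via inetd(8) so HAProxy can poll it.

# Map (load average, core count) to a weight: 100% when idle, derated
# toward a 10% floor as load approaches the number of cores.
agent_weight() {
    awk -v l="$1" -v n="$2" 'BEGIN {
        w = int(100 - (l / n) * 100)
        if (w < 10) w = 10
        print w
    }'
}

load=$(sysctl -n vm.loadavg 2>/dev/null | tr -d '{}' | awk '{print $1}')
ncpu=$(sysctl -n hw.ncpu 2>/dev/null || echo 1)
echo "up $(agent_weight "${load:-0}" "$ncpu")%"
```

HAProxy applies the reported percentage to the server's configured weight, so a loaded box automatically receives fewer new connections.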

SSL/TLS Termination

HAProxy handles TLS termination efficiently with OpenSSL. This offloads cryptographic operations from backend servers, simplifies certificate management, and allows HAProxy to inspect HTTP content for routing.

Basic SSL Frontend

shell
frontend https_front
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/example.com.pem
    bind *:80
    http-request redirect scheme https unless { ssl_fc }
    default_backend http_back

The .pem file must contain the certificate, any intermediate certificates, and the private key, concatenated in that order:

sh
cat example.com.crt intermediate.crt example.com.key > /usr/local/etc/haproxy/certs/example.com.pem
chmod 600 /usr/local/etc/haproxy/certs/example.com.pem

Modern TLS Configuration

Disable weak ciphers and enforce TLS 1.2+:

shell
global
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    tune.ssl.default-dh-param 2048

Let's Encrypt Integration

Use acme.sh or certbot on FreeBSD, then reload HAProxy after certificate renewal:

sh
pkg install acme.sh
acme.sh --issue -d example.com --webroot /usr/local/www/acme
acme.sh --install-cert -d example.com \
    --fullchain-file /usr/local/etc/haproxy/certs/example.com.pem \
    --key-file /usr/local/etc/haproxy/certs/example.com.key \
    --reloadcmd "cat /usr/local/etc/haproxy/certs/example.com.pem /usr/local/etc/haproxy/certs/example.com.key > /usr/local/etc/haproxy/certs/combined.pem && service haproxy reload"
# Note: point HAProxy's "crt" at combined.pem in this scheme, since the
# fullchain file alone does not contain the private key.

HAProxy supports hitless reloads. Running service haproxy reload starts a new process that takes over listening sockets from the old process without dropping connections. This is one of HAProxy's most operationally valuable features.

Statistics and Monitoring

Enable the built-in stats page for real-time monitoring:

shell
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE
    stats auth admin:your_secure_password

Access http://your-server:8404/stats for a dashboard showing per-backend and per-server connection counts, request rates, error rates, response times, and health check status.

For Prometheus integration, enable the built-in Prometheus exporter:

shell
frontend prometheus
    bind *:8405
    http-request use-service prometheus-exporter if { path /metrics }
    no log

This exposes metrics at /metrics in Prometheus exposition format, ready for scraping. See the FreeBSD server monitoring guide for how to connect this to a full Prometheus and Grafana stack.

Performance on FreeBSD

HAProxy on FreeBSD leverages kqueue for event-driven I/O. In benchmarks on a 4-core FreeBSD 14.x system with HAProxy 2.9.x:

  • HTTP request rate: 200,000+ requests/sec in HTTP mode (small responses, keep-alive enabled)
  • TCP connection rate: 80,000+ new connections/sec in TCP mode
  • SSL handshakes: 15,000-25,000 TLS 1.3 handshakes/sec depending on cipher and key type (ECDSA is faster than RSA)
  • Memory usage: approximately 17 KB per connection in HTTP mode, 34 KB with SSL
  • Latency overhead: sub-millisecond added latency for proxied requests on a local network
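
The per-connection memory figure is the practical sizing input; a quick back-of-the-envelope check of the memory numbers above:

```sh
# How many plain-HTTP connections fit in 4 GB of RAM at ~17 KB each?
awk 'BEGIN { printf "%d connections\n", (4 * 1024 * 1024) / 17 }'
# prints: 246723 connections

# And with SSL at ~34 KB each?
awk 'BEGIN { printf "%d connections\n", (4 * 1024 * 1024) / 34 }'
# prints: 123361 connections
```

Both figures leave the "100,000+ connections on 4 GB" guidance from the tuning section with comfortable headroom for stick tables, buffers, and the kernel itself.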

The nbthread setting should match your CPU core count. HAProxy distributes connections across threads using kqueue, and each thread maintains its own connection table. On FreeBSD, this scales nearly linearly up to 8-16 cores for HTTP workloads.

For maximum performance, ensure the kernel is tuned:

sh
# /etc/sysctl.conf
kern.ipc.somaxconn=65535
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.finwait2_timeout=5000
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536

HAProxy vs NGINX Load Balancing

Both HAProxy and NGINX can load-balance HTTP and TCP traffic. The differences are practical rather than fundamental.

Configuration model. HAProxy's configuration is purpose-built for proxying. Every directive relates to connection handling, routing, or health checking. NGINX's configuration serves double duty as a web server and proxy, which means load balancing config is embedded within server and location blocks designed for content serving. For complex routing with many backends, HAProxy's configuration is cleaner and more readable.

Health checks. HAProxy's health checks are more flexible out of the box. It supports TCP, HTTP, agent-based, and custom scripted checks with fine-grained timing controls (inter, fall, rise, fastinter, downinter). NGINX open-source edition only supports passive health checks (detecting failures from real traffic). Active health checks require NGINX Plus (commercial) or the third-party nginx_upstream_check_module.

Session persistence. HAProxy offers cookie-based, stick-table-based, and source-IP-based persistence natively. NGINX open-source supports ip_hash and hash directives; cookie-based sticky sessions require NGINX Plus.

Runtime management. HAProxy's runtime API lets you drain servers, change weights, enable/disable backends, and query detailed statistics through a socket or TCP connection without restarting or reloading. NGINX requires a reload for most configuration changes.

SSL performance. Both use OpenSSL. Performance is comparable. HAProxy has a slight edge in TLS session resumption and ticket handling due to its dedicated proxy architecture.

When to choose HAProxy over NGINX: dedicated load balancing role, complex health checks, runtime server management, high-concurrency TCP proxying. When to choose NGINX: you need a web server and a load balancer on the same machine, or you are already using NGINX and the load balancing requirements are simple.

For a full NGINX production setup, see the NGINX on FreeBSD guide.

HAProxy vs relayd

relayd is the native relay daemon in OpenBSD, ported to FreeBSD. It provides Layer 4 and Layer 7 load balancing with a simpler configuration syntax than HAProxy.

Strengths of relayd: it is part of the base system on OpenBSD (and available as a port on FreeBSD), has a minimal attack surface, integrates tightly with PF for Layer 4 redirections, and handles basic HTTP and TCP load balancing with straightforward config.

Limitations of relayd compared to HAProxy: no cookie-based session persistence, limited ACL and content switching, no built-in stats dashboard, no runtime API for live management, fewer health check options, no Lua scripting, and lower maximum throughput under heavy load. relayd's multithreading and connection handling are less mature than HAProxy's.

When to choose relayd: you need a minimal, base-system-only load balancer on OpenBSD, or a simple TCP relay with PF integration on FreeBSD. For anything beyond basic round-robin or failover, HAProxy is the better tool.

For a broader comparison of load balancing approaches on FreeBSD, see the FreeBSD load balancing guide.

Operational Tips

Hitless reloads. service haproxy reload performs a seamless configuration reload. The new process inherits listening sockets from the old process. In-flight connections finish on the old process while new connections are handled by the new process. Test this before you need it in production.

Log to syslog. HAProxy logs to syslog by default on FreeBSD. Configure /etc/syslog.conf to route HAProxy logs:

sh
# /etc/syslog.conf
local0.*    /var/log/haproxy.log

Then restart syslog:

sh
service syslogd restart

Rate limiting with stick tables. Protect backends from abuse:

shell
frontend http_front
    bind *:80
    stick-table type ip size 200k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    default_backend http_back

This denies requests from any IP that exceeds 100 requests per 10-second window.

Graceful server drain. Before taking a backend server offline for maintenance:

sh
echo "set server http_back/web1 state drain" | socat stdio /var/run/haproxy/admin.sock

This stops new connections to web1 but lets existing connections finish. Once the connection count reaches zero, the server is safe to shut down.

FAQ

Q: Does HAProxy work with FreeBSD jails?

A: Yes. HAProxy runs well inside a jail. Bind it to the jail's IP address and ensure the jail has network access to backend servers. No special configuration is needed.

Q: Can HAProxy load-balance UDP traffic on FreeBSD?

A: Not for general-purpose traffic. Community HAProxy handles UDP only in narrow cases, such as syslog forwarding (added in 2.3) and QUIC, which runs over UDP (added in 2.6). For DNS or other UDP load balancing on FreeBSD, consider relayd or dedicated tools like dnsdist.

Q: How do I monitor HAProxy with Prometheus?

A: Enable the built-in Prometheus exporter as shown above. It exposes per-frontend, per-backend, and per-server metrics at /metrics. No additional exporters are needed.

Q: What is the maximum number of connections HAProxy can handle on FreeBSD?

A: With proper kernel tuning (kern.maxfiles, kern.ipc.somaxconn), HAProxy on FreeBSD can handle 500,000+ concurrent connections on modern hardware. Memory is typically the limiting factor at approximately 17 KB per HTTP connection.
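
The descriptor limits deserve explicit values, since each proxied connection consumes two file descriptors (one client-side, one server-side). An illustrative /etc/sysctl.conf fragment for that scale (the numbers are assumptions to be sized against your RAM):

```sh
# /etc/sysctl.conf -- descriptor headroom for ~500,000 proxied connections
kern.maxfiles=2000000
kern.maxfilesperproc=1500000
kern.ipc.somaxconn=65535
```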

Q: Should I run HAProxy in a jail or on the host?

A: For dedicated load balancers, running on the host avoids an extra layer of network indirection. For multi-service machines, a jail provides isolation. The performance difference is negligible for most workloads.

Q: How do I use Let's Encrypt with HAProxy?

A: Use acme.sh with the standalone or webroot method, then concatenate the fullchain and private key into a single PEM file. Configure HAProxy to reload after renewal. See the SSL termination section above.

Q: Can I use HAProxy and PF together?

A: Yes. PF handles firewall rules and HAProxy handles load balancing. They operate at different layers and complement each other. Use PF to restrict access to HAProxy's management ports and stats page.
