FreeBSD.software
tutorial·2026-03-29·18 min read

How to Set Up High Availability on FreeBSD with CARP

Complete guide to high availability on FreeBSD with CARP. Covers virtual IPs, failover configuration, pfsync for firewall state, multi-service HA, and testing failover scenarios.

Downtime costs money. Whether you run a public-facing web application, a critical database, or a border firewall, a single point of failure is an unacceptable risk. FreeBSD ships with CARP (Common Address Redundancy Protocol) in the base system, giving you automatic IP failover without any third-party software. Combined with pfsync for stateful firewall synchronization, you can build production-grade high availability clusters entirely with tools already on your FreeBSD servers.

This guide walks through every step: enabling CARP, configuring virtual IPs, synchronizing firewall state, and building real HA topologies for web servers, databases, and firewalls. Every configuration snippet is tested against FreeBSD 14.x.


What Is CARP and How Does It Work?

CARP is an IP-level redundancy protocol originally developed by the OpenBSD project and ported to FreeBSD. It allows multiple hosts on the same network segment to share a virtual IP address. One host acts as master and responds to traffic on that IP. The remaining hosts are backups that monitor the master via multicast advertisements. When the master stops advertising -- because it crashed, lost network connectivity, or was deliberately shut down -- the backup with the lowest advskew value takes over, typically within one to three seconds.

Key concepts:

  • Virtual Host ID (vhid): A numeric identifier (1--255) that groups CARP interfaces across hosts. Both the master and backup must use the same vhid for a given virtual IP.
  • Advertisement interval (advbase/advskew): The master sends a multicast advertisement every advbase seconds (default 1). The advskew value (0--254) adds a fractional delay. A lower total interval means higher priority.
  • Preemption: When enabled, a host with a lower advskew will take the master role back from a higher-advskew host once it comes online again.
  • Shared password: CARP advertisements are authenticated with a shared password per vhid, preventing rogue hosts from joining the group.

CARP operates at layer 3 using IP protocol number 112. It does not require any special switch configuration, but both hosts must be on the same broadcast domain (same VLAN or physical network).
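The takeover delay can be estimated directly from these parameters. As a rough sketch, assuming the backup declares the master dead after three missed advertisements, the detection time is 3 * (advbase + advskew/256) seconds:

```shell
# Estimate CARP master-down detection time from advbase/advskew.
# Assumption: the backup waits for three missed advertisements.
advbase=1
advskew=100
awk -v b="$advbase" -v s="$advskew" \
    'BEGIN { printf "detection time: %.2f seconds\n", 3 * (b + s/256) }'
# detection time: 4.17 seconds
```

With the defaults (advbase 1, advskew 0) this works out to about 3 seconds, matching the one-to-three-second failover window described above.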


Prerequisites

Before you begin, you need:

  • Two FreeBSD 14.x servers (physical or virtual) on the same subnet.
  • A dedicated IP address for each server's real interface (for management and inter-node communication).
  • One or more virtual IP addresses that will float between the two nodes.
  • Root access on both servers.
  • A working PF firewall configuration if you plan to synchronize firewall state.

Example network layout used throughout this guide:

| Host  | Role             | Real IP         | Virtual IP    |
|-------|------------------|-----------------|---------------|
| node1 | Primary (master) | 192.168.1.10/24 | 192.168.1.100 |
| node2 | Backup           | 192.168.1.11/24 | 192.168.1.100 |

The virtual IP 192.168.1.100 is the address clients connect to. It will automatically move to whichever node is currently the CARP master.


Enabling CARP in the Kernel

CARP ships in the FreeBSD base system as a loadable kernel module (carp.ko), so no custom kernel build is required. You do need to load the module and enable the relevant sysctl tunables.

Step 1: Load the CARP Module

On both nodes, verify the module loads:

bash
kldload carp

To load it automatically at boot, add to /boot/loader.conf:

conf
# /boot/loader.conf
carp_load="YES"

Step 2: Enable CARP via sysctl

Allow CARP to function and optionally enable preemption:

bash
sysctl net.inet.carp.allow=1
sysctl net.inet.carp.preempt=1

  • net.inet.carp.allow=1 -- permits CARP on this host (default is 1 on FreeBSD 14).
  • net.inet.carp.preempt=1 -- enables preemption so the primary reclaims the master role after recovery.

Make these persistent in /etc/sysctl.conf on both nodes:

conf
# /etc/sysctl.conf
net.inet.carp.allow=1
net.inet.carp.preempt=1

When net.inet.carp.preempt is set to 1, FreeBSD also links the fates of all CARP vhids on the host: if any interface carrying a CARP address goes down, every vhid on the host is demoted to backup. This prevents split-service scenarios where a node is master for some virtual IPs but has lost upstream connectivity.


Configuring Virtual IPs on Primary and Backup

On node1 (Primary)

Add the virtual IP as a CARP address directly on the production interface. Since FreeBSD 10, CARP is configured as an option on an interface address rather than on a separate cloned carp device:

bash
ifconfig em0 vhid 1 advskew 0 pass secretpass alias 192.168.1.100/32

This tells node1 to join CARP group vhid 1 with advskew 0 (highest priority). Because the advskew is the lowest, node1 will be the master when both nodes are healthy.

To make this persistent across reboots, add to /etc/rc.conf on node1:

conf
# /etc/rc.conf on node1
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass secretpass alias 192.168.1.100/32"

On node2 (Backup)

bash
ifconfig em0 vhid 1 advskew 100 pass secretpass alias 192.168.1.100/32

The higher advskew value of 100 ensures node2 remains backup as long as node1 is alive. Persistent configuration on node2:

conf
# /etc/rc.conf on node2
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass secretpass alias 192.168.1.100/32"

Verifying CARP Status

On node1, you should see:

bash
ifconfig em0
shell
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        inet 192.168.1.10 netmask 0xffffff00 broadcast 192.168.1.255
        inet 192.168.1.100 netmask 0xffffffff broadcast 192.168.1.100 vhid 1
        carp: MASTER vhid 1 advbase 1 advskew 0

On node2:

shell
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        inet 192.168.1.11 netmask 0xffffff00 broadcast 192.168.1.255
        inet 192.168.1.100 netmask 0xffffffff broadcast 192.168.1.100 vhid 1
        carp: BACKUP vhid 1 advbase 1 advskew 100

Preemption Settings and Demotion

Preemption controls whether a recovered node automatically reclaims the master role. With net.inet.carp.preempt=1, the node with the lowest advskew always becomes master when it is available. Without preemption, whichever node became master during a failover stays master until it fails.

For most production environments, enable preemption. It ensures your preferred primary always handles traffic when healthy, giving you predictable behavior.

Demotion Counter

FreeBSD maintains a demotion counter (net.inet.carp.demotion) that raises the effective advskew of every vhid on the host. Writes to this sysctl are deltas -- the value you write is added to the current counter -- so scripts can increment it to gracefully move traffic away before maintenance and decrement it afterwards:

bash
# Gracefully demote this node (adds 240 to the counter)
sysctl net.inet.carp.demotion=240
# After maintenance, subtract the same amount to restore
sysctl net.inet.carp.demotion=-240

This is far cleaner than shutting down interfaces and lets you do rolling upgrades with zero downtime.
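Because the sysctl takes a delta, automation should track what it has already applied instead of blindly re-writing values. A minimal sketch of that decision logic (the demotion_delta helper name is ours, not part of FreeBSD):

```shell
# demotion_delta CURRENT STATE AMOUNT
# Prints the value to write into net.inet.carp.demotion, which adds
# the written value to the current counter rather than replacing it.
demotion_delta() {
    current=$1; state=$2; amount=$3
    if [ "$state" = healthy ] && [ "$current" -ge "$amount" ]; then
        echo "-$amount"          # service recovered: clear our earlier demotion
    elif [ "$state" = down ] && [ "$current" -lt "$amount" ]; then
        echo "$amount"           # demote once, never stack repeated writes
    else
        echo 0                   # nothing to change
    fi
}

demotion_delta 0 down 240        # -> 240
demotion_delta 240 down 240      # -> 0
demotion_delta 240 healthy 240   # -> -240
```

On a real node, the first argument would come from `sysctl -n net.inet.carp.demotion` and the result would be applied with `sysctl net.inet.carp.demotion=<delta>`.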


Synchronizing Firewall State with pfsync

If your HA nodes run the PF firewall, active connections will break during failover unless you synchronize PF state tables between nodes. That is exactly what pfsync does.

pfsync replicates PF state entries over a dedicated network link. When the backup takes over, it already knows about every established TCP connection, UDP flow, and NAT mapping. Clients experience no interruption for existing sessions.

Dedicated Sync Interface

Best practice is to use a separate physical interface (or VLAN) for pfsync traffic, keeping it off the production network. Assume both nodes have a second NIC (em1) on subnet 10.0.0.0/30:

| Host  | Sync IP     |
|-------|-------------|
| node1 | 10.0.0.1/30 |
| node2 | 10.0.0.2/30 |

Configuring pfsync on node1

bash
ifconfig pfsync0 create
ifconfig pfsync0 syncdev em1 syncpeer 10.0.0.2 up

Persistent configuration:

conf
# /etc/rc.conf on node1
cloned_interfaces="pfsync0"
ifconfig_em1="inet 10.0.0.1/30"
ifconfig_pfsync0="syncdev em1 syncpeer 10.0.0.2 up"

Configuring pfsync on node2

conf
# /etc/rc.conf on node2
cloned_interfaces="pfsync0"
ifconfig_em1="inet 10.0.0.2/30"
ifconfig_pfsync0="syncdev em1 syncpeer 10.0.0.1 up"

PF Rules for pfsync

Allow pfsync traffic on the sync interface in your /etc/pf.conf:

shell
pass quick on em1 proto pfsync
pass quick proto carp

Reload PF:

bash
pfctl -f /etc/pf.conf

Verify state synchronization:

bash
systat pfsync

You should see state insertions and updates flowing between nodes in real time.
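To compare state-table sizes on the two nodes, you can also parse `pfctl -si`. A sketch of the parsing step, fed with a captured sample so the logic is visible without a live firewall (the counts are made up):

```shell
# Extract the "current entries" count from pfctl -si output.
# On a live node you would run: pfctl -si | awk '/current entries/ {print $3}'
sample='State Table                          total             rate
  current entries                     1234
  searches                         9876543            120.0/s'
echo "$sample" | awk '/current entries/ {print $3}'
# -> 1234
```

Running this on both nodes should yield counts that track each other closely; a large, persistent gap suggests pfsync is not replicating.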


Testing Failover

Before trusting CARP in production, test every failure scenario.

Test 1: Unplug the Primary Network Cable

Physically disconnect node1's production NIC (or administratively down it):

bash
# On node1
ifconfig em0 down

Within 1--3 seconds, node2 should promote to master. Verify:

bash
# On node2
ifconfig em0 | grep carp:

Expected output: carp: MASTER vhid 1 advbase 1 advskew 100.

Bring node1 back:

bash
ifconfig em0 up

With preemption enabled, node1 reclaims master within seconds.

Test 2: Kill a Critical Process

If you are running CARP to protect a web server, stop the service on the master:

bash
service nginx stop

CARP itself will not detect application-level failures -- it only monitors network availability. You need a health-check script that increments the demotion counter when the service goes down. More on this below.

Test 3: Full Node Reboot

Reboot node1:

bash
shutdown -r now

Node2 becomes master almost instantly. After node1 finishes booting, preemption brings the virtual IP back.

Test 4: Verify Existing Connections Survive (pfsync)

From a client, open a long-lived SSH or HTTP connection through the virtual IP. Then fail over. If pfsync is working correctly, the session continues without interruption on the new master.


HA for Web Servers: CARP + NGINX

A common production setup pairs CARP with NGINX on both nodes. The virtual IP floats, and both nodes serve identical content.

Architecture

shell
Client --> 192.168.1.100 (CARP VIP) --> NGINX on current master node
                                        (identical NGINX on backup node, standby)

Install and configure NGINX identically on both nodes. Use shared storage (NFS, ZFS replication, or rsync) to keep web content in sync.

Application-Aware Failover Script

CARP does not know if NGINX is running. Create a health-check script at /usr/local/sbin/carp-health.sh:

bash
#!/bin/sh
# /usr/local/sbin/carp-health.sh
# Demote CARP if NGINX is not responding

DEMOTE_AMOUNT=240

check_nginx() {
    /usr/bin/fetch -qo /dev/null -T 3 http://127.0.0.1/ 2>/dev/null
}

current=$(sysctl -n net.inet.carp.demotion)

if check_nginx; then
    # Service is healthy; clear the demotion we applied earlier, if any
    if [ "$current" -ge "$DEMOTE_AMOUNT" ]; then
        sysctl net.inet.carp.demotion=-$DEMOTE_AMOUNT
    fi
else
    # Service is down; demote this node once (the sysctl adds the
    # written value to the counter, so do not re-apply on every run)
    if [ "$current" -lt "$DEMOTE_AMOUNT" ]; then
        sysctl net.inet.carp.demotion=$DEMOTE_AMOUNT
        logger -t carp-health "NGINX health check failed, CARP demoted"
    fi
fi

Make it executable and run it every 5 seconds via cron:

bash
chmod +x /usr/local/sbin/carp-health.sh

Add to /etc/crontab:

shell
* * * * * root /usr/local/sbin/carp-health.sh
* * * * * root sleep 5  && /usr/local/sbin/carp-health.sh
* * * * * root sleep 10 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 15 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 20 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 25 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 30 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 35 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 40 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 45 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 50 && /usr/local/sbin/carp-health.sh
* * * * * root sleep 55 && /usr/local/sbin/carp-health.sh

This gives you health checks every 5 seconds. For tighter intervals, use a daemon loop instead of cron.
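Rather than typing the twelve staggered entries by hand, you can generate them with a one-liner (a sketch; `sleep 0` on the first entry is equivalent to no sleep at all):

```shell
# Emit one crontab entry per 5-second offset within the minute
awk 'BEGIN {
    for (i = 0; i < 60; i += 5)
        printf "* * * * * root sleep %d && /usr/local/sbin/carp-health.sh\n", i
}'
```

Redirect the output into a file and append it to /etc/crontab after reviewing it.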

For more advanced load distribution across both nodes, see our guide on load balancing.


HA for Databases: CARP + PostgreSQL Streaming Replication

Floating a virtual IP in front of a PostgreSQL cluster gives clients a single connection endpoint that automatically follows the primary database.

Architecture

shell
App Server --> 192.168.1.100 (CARP VIP) --> PostgreSQL primary (node1)
                                            PostgreSQL hot standby (node2)

Node1 runs PostgreSQL as the streaming replication primary. Node2 runs a hot standby. The CARP VIP points to whichever node currently holds the primary database role.

Key Configuration Points

  1. Set up PostgreSQL streaming replication between node1 and node2 following the PostgreSQL setup guide.
  2. Bind PostgreSQL to all interfaces (or specifically to the CARP VIP) so it accepts connections on the virtual IP:
shell
# postgresql.conf
listen_addresses = '*'
  3. Write a promotion script (/usr/local/sbin/promote-pg.sh) on the standby:
bash
#!/bin/sh
# Promote PostgreSQL standby to primary
/usr/local/bin/pg_ctl promote -D /var/db/postgres/data16
logger -t pg-promote "PostgreSQL promoted to primary on $(hostname)"
  4. Tie promotion to CARP state changes. devd(8) receives an event whenever a CARP vhid changes state, and you can attach a shell action to it. Create /etc/devd/carp.conf (the event subsystem has the form vhid@interface):
shell
notify 100 {
    match "system"    "CARP";
    match "subsystem" "1@em0";
    match "type"      "MASTER";
    action "/usr/local/sbin/promote-pg.sh";
};

Restart devd:

bash
service devd restart

When node2 becomes the CARP master (because node1 failed), devd fires the promotion script, and PostgreSQL on node2 becomes the new primary -- all automatically.

Important: Database failover is a one-way operation. After promotion, the old primary must be rebuilt as a standby before it can rejoin the cluster. Automated re-sync is beyond CARP's scope; tools like Patroni or custom scripts handle that layer.


HA for Routers and Firewalls: CARP + pfsync + NAT

CARP was purpose-built for this use case. Two FreeBSD routers share a virtual gateway IP. Clients use the VIP as their default gateway. When the active router fails, the backup takes over with full state -- including NAT translations and established connections.

Network Layout

| Interface  | node1        | node2        | VIP         |
|------------|--------------|--------------|-------------|
| WAN (em0)  | 203.0.113.10 | 203.0.113.11 | 203.0.113.1 |
| LAN (em1)  | 192.168.1.10 | 192.168.1.11 | 192.168.1.1 |
| Sync (em2) | 10.0.0.1/30  | 10.0.0.2/30  | --          |

rc.conf for node1 (Primary Router)

conf
# /etc/rc.conf on node1
gateway_enable="YES"
pf_enable="YES"
cloned_interfaces="pfsync0"
# WAN real IP
ifconfig_em0="inet 203.0.113.10/24"
# LAN real IP
ifconfig_em1="inet 192.168.1.10/24"
# Sync link
ifconfig_em2="inet 10.0.0.1/30"
# WAN CARP VIP
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass wansecret alias 203.0.113.1/32"
# LAN CARP VIP
ifconfig_em1_alias0="inet vhid 2 advskew 0 pass lansecret alias 192.168.1.1/32"
# pfsync
ifconfig_pfsync0="syncdev em2 syncpeer 10.0.0.2 up"

rc.conf for node2 (Backup Router)

conf
# /etc/rc.conf on node2
gateway_enable="YES"
pf_enable="YES"
cloned_interfaces="pfsync0"
ifconfig_em0="inet 203.0.113.11/24"
ifconfig_em1="inet 192.168.1.11/24"
ifconfig_em2="inet 10.0.0.2/30"
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass wansecret alias 203.0.113.1/32"
ifconfig_em1_alias0="inet vhid 2 advskew 100 pass lansecret alias 192.168.1.1/32"
ifconfig_pfsync0="syncdev em2 syncpeer 10.0.0.1 up"

PF Configuration with NAT

On both nodes, use an identical /etc/pf.conf:

shell
# /etc/pf.conf
ext_if  = "em0"
int_if  = "em1"
sync_if = "em2"

set skip on lo0
set skip on $sync_if

# NAT for LAN clients -- translate to the shared VIP so NAT state
# remains valid after a failover
nat on $ext_if from 192.168.1.0/24 to any -> 203.0.113.1

# Allow pfsync on the sync interface and CARP advertisements
pass quick on $sync_if proto pfsync
pass quick proto carp

# Default deny with stateful filtering
block in all
pass out all keep state

# Allow LAN traffic
pass in on $int_if from 192.168.1.0/24

With pfsync active, NAT state tables replicate continuously. When the backup takes over, clients behind the NAT do not lose their connections. Downloads continue. SSH sessions stay alive. This is the gold standard for FreeBSD firewall HA.


Monitoring CARP Status

Using ifconfig

The simplest check:

bash
ifconfig em0 | grep carp:

Using sysctl

bash
sysctl net.inet.carp.demotion

A demotion value of 0 means healthy. Any positive value means the node has been demoted.
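That counter maps naturally onto a Nagios-style check. A minimal sketch (check_demotion is our name; on a real node the argument would come from `sysctl -n net.inet.carp.demotion`):

```shell
# check_demotion VALUE -> status line plus exit code for a monitoring agent
check_demotion() {
    if [ "$1" -eq 0 ]; then
        echo "OK: demotion counter is 0"
        return 0
    else
        echo "WARNING: node demoted by $1"
        return 1
    fi
}

check_demotion 240   # -> WARNING: node demoted by 240
check_demotion 0     # -> OK: demotion counter is 0
```

Wire it into your monitoring agent as a passive or active check alongside the CARP state check below.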

Logging CARP Events

CARP state changes are logged to syslog. Watch them in real time:

bash
tail -f /var/log/messages | grep -i carp

Script-Based Monitoring

For Nagios, Zabbix, or Prometheus-based monitoring, query the CARP status via a simple script:

bash
#!/bin/sh
# /usr/local/sbin/check-carp.sh
STATUS=$(ifconfig em0 | awk '/carp:/ {print $2}')
case "$STATUS" in
    MASTER) echo "OK: CARP master";        exit 0 ;;
    BACKUP) echo "OK: CARP backup";        exit 0 ;;
    INIT)   echo "WARNING: CARP init";     exit 1 ;;
    *)      echo "CRITICAL: CARP unknown"; exit 2 ;;
esac

Preventing Split-Brain

Split-brain occurs when both nodes simultaneously believe they are master. This typically happens when the network link between them fails but both still have upstream connectivity.

Mitigation Strategies

  1. Dedicated sync link. Use a direct crossover cable or dedicated VLAN between the two nodes for CARP advertisements and pfsync. This eliminates switch failures as a cause of split-brain.
  2. Multiple CARP groups with preemption. When net.inet.carp.preempt=1 is set, losing any interface demotes all CARP groups on that host. This prevents partial-master states.
  3. Fencing (STONITH). For critical workloads like databases, implement a fencing mechanism. If a node suspects split-brain, it can power-cycle the other node via IPMI before assuming the master role. This guarantees only one master exists.
  4. Avoid asymmetric routing. Ensure both nodes use the same upstream gateway and that the virtual IP is the only address clients connect to.
  5. Short advertisement intervals. The default advbase=1 is fine for most setups. Avoid setting it higher than 2 seconds, as that increases the detection window where split-brain could occur.

Complete Example: Two-Node HA Web Cluster

This section ties everything together into a complete, copy-paste-ready deployment.

Goal

Two FreeBSD servers running NGINX behind a floating CARP VIP with PF firewall state synchronization. If the primary dies, the backup takes over within seconds with no dropped connections.

node1 /etc/rc.conf

conf
hostname="web1.example.com"

# Network
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
ifconfig_em1="inet 10.0.0.1 netmask 255.255.255.252"
defaultrouter="192.168.1.1"

# CARP and pfsync
cloned_interfaces="pfsync0"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass MyS3cur3P@ss alias 192.168.1.100/32"
ifconfig_pfsync0="syncdev em1 syncpeer 10.0.0.2 up"

# Services
nginx_enable="YES"
pf_enable="YES"
sshd_enable="YES"

node2 /etc/rc.conf

conf
hostname="web2.example.com"

# Network
ifconfig_em0="inet 192.168.1.11 netmask 255.255.255.0"
ifconfig_em1="inet 10.0.0.2 netmask 255.255.255.252"
defaultrouter="192.168.1.1"

# CARP and pfsync
cloned_interfaces="pfsync0"
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass MyS3cur3P@ss alias 192.168.1.100/32"
ifconfig_pfsync0="syncdev em1 syncpeer 10.0.0.1 up"

# Services
nginx_enable="YES"
pf_enable="YES"
sshd_enable="YES"

/etc/sysctl.conf (both nodes)

conf
net.inet.carp.allow=1
net.inet.carp.preempt=1
net.inet.ip.forwarding=1

/boot/loader.conf (both nodes)

conf
carp_load="YES"
pf_load="YES"
pflog_load="YES"

/etc/pf.conf (both nodes)

shell
ext_if  = "em0"
sync_if = "em1"
vip     = "192.168.1.100"

set skip on lo0
set skip on $sync_if

# Allow CARP and pfsync
pass quick proto carp
pass quick on $sync_if proto pfsync

# Default deny inbound
block in on $ext_if

# Allow established connections
pass out all keep state

# Allow HTTP/HTTPS to VIP
pass in on $ext_if proto tcp to $vip port { 80, 443 } keep state

# Allow SSH to real IPs (management)
pass in on $ext_if proto tcp to ($ext_if) port 22 keep state

NGINX Configuration (both nodes)

Configure NGINX to listen on all addresses (it will respond when the CARP VIP is active on that node):

nginx
# /usr/local/etc/nginx/nginx.conf
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    root /usr/local/www/example.com;
    index index.html;

    ssl_certificate     /usr/local/etc/ssl/example.com.crt;
    ssl_certificate_key /usr/local/etc/ssl/example.com.key;
}

Content Synchronization

Keep web content identical on both nodes using rsync over the sync interface:

bash
# Run on node1 every minute via cron
rsync -az --delete /usr/local/www/example.com/ 10.0.0.2:/usr/local/www/example.com/

Deploy and Verify

  1. Apply the configurations on both nodes and reboot.
  2. After boot, verify on node1: ifconfig em0 shows carp: MASTER.
  3. Verify on node2: ifconfig em0 shows carp: BACKUP.
  4. From a client, curl http://192.168.1.100 should return your website.
  5. On node1, run ifconfig em0 down. Within seconds, node2 becomes master and the same curl command works.
  6. Bring node1 back: ifconfig em0 up. Preemption restores node1 as master.

Troubleshooting Common Issues

CARP stays in INIT state. Check that the CARP password and vhid match on both nodes. Verify that PF is not blocking protocol 112 (CARP). Add pass quick proto carp to your PF rules.

Both nodes are MASTER. This is split-brain. Check network connectivity between nodes. Verify the sync link is up. Ensure both nodes use identical vhid and password values.

Failover is slow (more than 5 seconds). Check advbase and advskew values. The backup detects master failure after roughly 3 * (advbase + advskew/256) seconds. With defaults, this is about 3 seconds.

Connections drop on failover. You are not running pfsync, or pfsync is not synchronizing. Verify with systat pfsync and check that the sync interface is passing traffic.

CARP VIP does not respond to ARP. Ensure the CARP interface is UP and in MASTER state. Some virtualization platforms (VMware, VirtualBox) require promiscuous mode on the virtual switch.


Frequently Asked Questions

Does CARP require identical hardware on both nodes?

No. CARP is a software protocol that only requires both nodes to be on the same network segment and run compatible FreeBSD versions. The nodes can have different CPU, RAM, or disk configurations. However, for a web or database cluster, the backup should have enough resources to handle the full production load during failover.

Can I run more than two nodes in a CARP group?

Yes. You can have multiple backups with different advskew values. The node with the lowest advskew becomes master. The others remain backup in priority order. In practice, two-node setups cover most use cases.
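The election order across several backups is simply ascending advskew. A toy illustration (the hostnames and skew values are hypothetical):

```shell
# Lowest advskew wins the master election; sort candidates by skew
printf 'node1 0\nnode2 100\nnode3 200\n' | sort -n -k2 | head -1
# -> node1 0
```

If node1 dies, node2 (the next-lowest skew) takes over, and so on down the list.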

How fast is CARP failover?

Typically 1--3 seconds. The backup detects master failure after missing three consecutive advertisements (default advbase=1, so about 3 seconds). With tuned settings, sub-second failover is possible but increases network overhead.

Does CARP work with IPv6?

Yes. FreeBSD's CARP implementation supports both IPv4 and IPv6 virtual addresses. The configuration syntax is the same -- just use an IPv6 address in the ifconfig command.

Can I use CARP across different subnets or VLANs?

No. CARP relies on multicast advertisements on the local broadcast domain. Both nodes must be on the same subnet. For cross-site failover, you need a different approach such as BGP-based anycast or DNS failover.

How does CARP compare to keepalived or VRRP?

CARP is the BSD-native answer to VRRP (which is patented). It serves the same purpose -- virtual IP failover -- but is built into the FreeBSD base system with no external dependencies. Keepalived is a Linux tool that implements VRRP. If you are already on FreeBSD, CARP is the natural and well-integrated choice.

Is CARP suitable for cloud environments?

Most cloud providers (AWS, GCP, Azure) do not support multicast, which CARP requires. For cloud HA, use the provider's native load balancer or floating IP API instead. CARP works well in bare-metal hosting, colocation, and on-premises environments.


Summary

CARP gives FreeBSD administrators a robust, zero-dependency high availability solution for any network service. Combined with pfsync for stateful firewall synchronization, it delivers seamless failover that preserves active connections. Whether you are protecting a web server with NGINX, a PostgreSQL database, or a border firewall running PF, the pattern is the same: configure matching CARP interfaces on two nodes, tune advskew for priority, enable preemption, and synchronize state.

The key decisions are straightforward: use a dedicated sync link, enable preemption, add application-aware health checks, and test every failure scenario before going to production. With those pieces in place, your FreeBSD infrastructure handles hardware failures, network outages, and maintenance windows without dropping a single connection.
