How to Monitor FreeBSD Servers: Complete Guide 2026
Monitoring is not optional for production FreeBSD servers. Without monitoring, you discover problems when users complain or services crash. With proper monitoring, you detect degradation before it becomes an outage, plan capacity based on real data, and maintain an audit trail of system behavior over time.
This guide covers the complete monitoring stack for FreeBSD servers in 2026: built-in command-line tools for immediate diagnostics, choosing the right monitoring platform, quick-start instructions for the major options (Prometheus, Zabbix, Netdata, Grafana), building effective dashboards, and alerting best practices that reduce noise and catch real problems.
Built-in Tools Checklist
Before installing any monitoring platform, FreeBSD provides powerful built-in tools. Every administrator should know these for immediate diagnostics and troubleshooting.
System Overview
```sh
# Uptime and load average
uptime

# Process listing sorted by CPU
top -o cpu

# Process listing sorted by memory
top -o res

# System resource usage summary
vmstat 1

# Interrupt rates
vmstat -i
```
CPU Monitoring
```sh
# Per-CPU utilization
top -P

# CPU time breakdown (user/system/idle/interrupt)
vmstat 1

# Top processes by CPU (FreeBSD ps sorts by CPU with -r; GNU --sort is not available)
ps aux -r | head -20

# System call tracing for a process
truss -p <pid>
```
Memory Monitoring
```sh
# Memory summary
sysctl hw.physmem hw.usermem
sysctl vm.stats.vm.v_free_count vm.stats.vm.v_inactive_count

# Swap usage
swapinfo -h

# Top processes by memory (FreeBSD ps sorts by memory with -m)
ps aux -m | head -20

# ZFS ARC memory usage
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max
```
Disk and Storage
```sh
# Filesystem usage
df -h

# ZFS pool status
zpool status
zpool list

# ZFS dataset usage
zfs list -o name,used,avail,refer,compressratio

# Disk I/O statistics
iostat -x 2

# GEOM-level disk stats
gstat

# ZFS I/O stats per pool
zpool iostat -v 2
```
Network Monitoring
```sh
# Interface statistics
netstat -i

# Active connections
netstat -an

# Listening services
sockstat -4 -l

# Network throughput per interface
systat -ifstat

# Routing table
netstat -rn

# Firewall state (PF)
pfctl -s info
pfctl -s state
```
Log Monitoring
```sh
# System messages
tail -f /var/log/messages

# Authentication log
tail -f /var/log/auth.log

# Mail log
tail -f /var/log/maillog

# All logs combined
tail -f /var/log/messages /var/log/auth.log /var/log/security
```
Health Check Script
A quick health check script you can run manually or via cron:
```sh
#!/bin/sh
# /usr/local/bin/freebsd-healthcheck.sh

echo "=== System Health Check ==="
echo "Hostname: $(hostname)"
echo "Uptime: $(uptime | awk '{print $3,$4}' | sed 's/,//')"
echo ""

echo "=== Load Average ==="
uptime | awk -F'load averages:' '{print $2}'
echo ""

echo "=== Memory ==="
echo "Physical: $(sysctl -n hw.physmem | awk '{printf "%.1f GB", $1/1073741824}')"
echo "ARC Size: $(sysctl -n kstat.zfs.misc.arcstats.size 2>/dev/null | awk '{printf "%.1f GB", $1/1073741824}')"
echo "Swap Used: $(swapinfo -h 2>/dev/null | tail -1 | awk '{print $3}')"
echo ""

echo "=== ZFS Pools ==="
zpool list 2>/dev/null || echo "No ZFS pools"
echo ""

echo "=== Disk Usage (>80%) ==="
df -h | awk 'NR>1 && int($5) > 80 {print $0}'
echo ""

echo "=== Failed Services ==="
service -e | while read svc; do
    service "$(basename "$svc")" status > /dev/null 2>&1 || echo "DOWN: $svc"
done
echo ""

echo "=== Security Audit ==="
pkg audit -q 2>/dev/null | head -5
echo ""
```
Make it executable and run:
```sh
chmod +x /usr/local/bin/freebsd-healthcheck.sh
/usr/local/bin/freebsd-healthcheck.sh
```
Choosing a Monitoring Platform
The monitoring landscape has several categories. Choosing the right platform depends on your infrastructure size, team expertise, and requirements.
Decision Matrix
| Factor | Prometheus + Grafana | Zabbix | Netdata |
|---|---|---|---|
| Infrastructure size | 10-10,000+ servers | 10-10,000+ servers | 1-100 servers |
| Setup complexity | Medium | High | Low |
| FreeBSD support | Good (via pkg) | Good (server + agent) | Good (via pkg) |
| Query language | PromQL (powerful) | Limited built-in | None (auto-configured) |
| Dashboard customization | Excellent (Grafana) | Good (built-in) | Good (built-in) |
| Alerting | Alertmanager (flexible) | Built-in (comprehensive) | Built-in (basic) |
| Long-term storage | TSDB (configurable) | SQL database | Limited (streaming) |
| Cloud/container native | Yes | Somewhat | Yes |
| SNMP support | Via exporters | Excellent (native) | Via plugins |
| Community & docs | Large | Large | Growing |
Recommendation by Use Case
- Small fleet (1-10 servers), minimal setup: Netdata. Install and immediately see metrics.
- Medium fleet (10-100 servers), need dashboards and alerts: Prometheus + Grafana. The most flexible option.
- Enterprise with mixed infrastructure (servers + network devices): Zabbix. Best SNMP and template support.
- Already using Prometheus/Grafana elsewhere: Extend to FreeBSD. The ecosystem is the same.
Quick-Start: Prometheus + Node Exporter
Install on Each FreeBSD Server
```sh
pkg install node_exporter
sysrc node_exporter_enable="YES"
sysrc node_exporter_args="--collector.zfs --collector.cpu --collector.filesystem --collector.loadavg --collector.meminfo --collector.netdev"
service node_exporter start
```
Install Prometheus (Monitoring Server)
```sh
pkg install prometheus
sysrc prometheus_enable="YES"
```
Edit /usr/local/etc/prometheus.yml:
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "freebsd"
    static_configs:
      - targets:
          - "server1:9100"
          - "server2:9100"
          - "server3:9100"
```
```sh
service prometheus start
```
Access the web UI at http://monitoring-server:9090.
Quick-Start: Zabbix
Install Zabbix Server
```sh
pkg install zabbix70-server zabbix70-frontend-php83 zabbix70-agent
```
Set up the database (PostgreSQL recommended):
```sh
pkg install postgresql16-server
sysrc postgresql_enable="YES"
service postgresql initdb
service postgresql start

su - postgres -c "createuser zabbix"
su - postgres -c "createdb -O zabbix zabbix"

# Import schema
su - postgres -c "psql -d zabbix -f /usr/local/share/zabbix70/server/database/postgresql/schema.sql"
su - postgres -c "psql -d zabbix -f /usr/local/share/zabbix70/server/database/postgresql/images.sql"
su - postgres -c "psql -d zabbix -f /usr/local/share/zabbix70/server/database/postgresql/data.sql"
```
Configure the Zabbix server:
```sh
# Edit /usr/local/etc/zabbix7/zabbix_server.conf
# Set DBHost, DBName, DBUser, DBPassword

sysrc zabbix_server_enable="YES"
sysrc zabbix_agentd_enable="YES"
service zabbix_server start
service zabbix_agentd start
```
Install Zabbix Agent on Each Server
```sh
pkg install zabbix70-agent
sysrc zabbix_agentd_enable="YES"
```
Edit /usr/local/etc/zabbix7/zabbix_agentd.conf:
```ini
Server=10.0.1.50
ServerActive=10.0.1.50
Hostname=server1.example.com
```
```sh
service zabbix_agentd start
```
Quick-Start: Netdata
Install on Each Server
```sh
pkg install netdata
sysrc netdata_enable="YES"
service netdata start
```
Access the local dashboard at http://server:19999. Netdata auto-detects FreeBSD system metrics, ZFS, network interfaces, and running services. No configuration needed for basic monitoring.
Optional: Netdata Cloud
For centralized monitoring across servers, connect to Netdata Cloud (free for up to 5 nodes):
```sh
netdata-claim.sh -token=YOUR_TOKEN -rooms=YOUR_ROOM -url=https://app.netdata.cloud
```
Customizing Collectors
Edit /usr/local/etc/netdata/netdata.conf:
```ini
[plugins]
    freebsd = yes
    proc = yes

[plugin:freebsd]
    zfs pools state = yes
    zfs pools usage = yes
    zfs arcstats = yes
```
Quick-Start: Grafana
Grafana is a visualization platform. It does not collect metrics itself; it queries data sources such as Prometheus, Zabbix, or InfluxDB.
Install
```sh
pkg install grafana10
sysrc grafana_enable="YES"
service grafana start
```
Access at http://server:3000 (default login: admin/admin).
Add Prometheus as Data Source
In the Grafana web UI:
- Navigate to Configuration > Data Sources
- Add Prometheus
- Set URL to http://localhost:9090 (or your Prometheus server address)
- Click Save & Test
Import FreeBSD Dashboard
Import the Node Exporter Full dashboard (ID: 1860) from grafana.com:
- Navigate to Dashboards > Import
- Enter dashboard ID: 1860
- Select your Prometheus data source
- Click Import
This provides a comprehensive FreeBSD monitoring dashboard with CPU, memory, disk, network, and system metrics.
Building Effective Dashboards
Dashboard Design Principles
One dashboard per role: Create separate dashboards for system overview, database performance, network status, and application health. Do not put everything on one screen.
Top-down layout: Start with high-level summary panels at the top (overall fleet health, total error rate), then drill down to specific metrics below.
Consistent time ranges: Use dashboard-level time range controls. Align all panels to the same time window.
Use variables: Template dashboards with variables for server name, datacenter, and role. One dashboard template serves all servers.
Recommended Panels for FreeBSD
System Overview Dashboard:
- Fleet health table (all servers, status, uptime)
- Aggregate CPU utilization heatmap
- Memory utilization gauge per server
- ZFS pool health status
- Alert firing count
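As a concrete starting point for the aggregate CPU panel, a typical Grafana query in PromQL (metric names assume node_exporter; verify against your exporter version) is:

```promql
# Per-server CPU utilization in percent: 100 minus the averaged idle share
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100
```

Averaging the per-core idle rates before subtracting gives one utilization series per server rather than one per core.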
Per-Server Dashboard:
- CPU breakdown (user, system, idle, interrupt)
- Memory breakdown (active, inactive, wired, free, ARC)
- ZFS ARC hit rate
- Disk I/O throughput and IOPS
- Network throughput per interface
- Filesystem capacity bars
- Top processes by CPU and memory
ZFS Dashboard:
- Pool health matrix
- ARC size vs target size
- ARC hit ratio over time
- L2ARC hit ratio
- Pool I/O latency histograms
- Compression ratio per dataset
- Scrub status and last scrub time
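The ARC hit ratio panel can be derived from the arcstats counters that node_exporter's zfs collector exposes on FreeBSD. The metric names `node_zfs_arc_hits` and `node_zfs_arc_misses` are assumptions here; confirm them against your exporter's /metrics output:

```promql
# Fraction of ARC lookups served from cache over the last 5 minutes
rate(node_zfs_arc_hits[5m])
  / (rate(node_zfs_arc_hits[5m]) + rate(node_zfs_arc_misses[5m]))
```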
Grafana Tips for FreeBSD
Create a variable for server selection:
```sh
# In Grafana dashboard settings > Variables
# Name: instance
# Type: Query
# Data source: Prometheus
# Query: label_values(node_uname_info{sysname="FreeBSD"}, instance)
```
This creates a dropdown that lists only FreeBSD servers, useful in mixed environments.
Alerting Best Practices
Alert on Symptoms, Not Causes
Bad alert: "CPU is above 80%." The server might be doing useful work.
Good alert: "HTTP response time p95 is above 2 seconds for 5 minutes." This indicates actual user impact.
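Expressed as a Prometheus rule, a symptom alert of this shape might look like the following sketch; the histogram metric name `http_request_duration_seconds_bucket` is a placeholder for whatever your application actually exports:

```yaml
- alert: HighResponseTime
  # p95 latency over the last 5 minutes, from a hypothetical app histogram
  expr: histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m]))) > 2
  for: 5m
  labels:
    severity: critical
```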
The exception is resource exhaustion alerts: disk space below 10%, memory above 95% for extended periods, and ZFS pool health degraded are worth alerting on directly.
Alert Thresholds for FreeBSD
```yaml
# In Prometheus alerting rules
groups:
  - name: freebsd-alerts
    rules:
      # Disk space: alert at 15% remaining (warning) and 5% (critical)
      - alert: DiskSpaceLow
        expr: node_filesystem_avail_bytes{fstype="zfs"} / node_filesystem_size_bytes < 0.15
        for: 10m
        labels:
          severity: warning
      - alert: DiskSpaceCritical
        expr: node_filesystem_avail_bytes{fstype="zfs"} / node_filesystem_size_bytes < 0.05
        for: 5m
        labels:
          severity: critical

      # ZFS pool health (assumes a zpool_health metric supplied by a custom
      # exporter or textfile script; node_exporter does not export one itself)
      - alert: ZpoolDegraded
        expr: zpool_health == 0
        for: 1m
        labels:
          severity: critical

      # Server unreachable
      - alert: ServerDown
        expr: up{job="freebsd"} == 0
        for: 3m
        labels:
          severity: critical

      # High load relative to CPU count
      - alert: HighLoadAverage
        expr: node_load15 / count by (instance) (node_cpu_seconds_total{mode="idle"}) > 2
        for: 15m
        labels:
          severity: warning

      # Swap usage (indicates memory pressure)
      - alert: SwapActive
        expr: node_memory_swap_used_bytes > 0
        for: 30m
        labels:
          severity: warning
```
Reduce Alert Fatigue
Use for durations: Never alert on instantaneous spikes. Require the condition to persist for 3-15 minutes minimum.
Severity levels: Use at least two levels (warning and critical). Only page on-call for critical. Send warnings to a channel that is reviewed during business hours.
Group related alerts: If a server goes down, you do not need separate alerts for "server down," "disk unreachable," "service unreachable," and "network timeout." Use Alertmanager inhibition rules:
```yaml
# In alertmanager.yml
inhibit_rules:
  - source_match:
      alertname: ServerDown
    target_match_re:
      alertname: '(DiskSpaceLow|HighLoadAverage|ServiceDown)'
    equal: ['instance']
```
Silence during maintenance: Use Alertmanager silences before planned maintenance:
```sh
# Create a silence via the API
amtool silence add --alertmanager.url=http://localhost:9093 \
  --comment="Scheduled maintenance" \
  --duration=2h \
  instance="server1:9100"
```
Notification Channels
For FreeBSD infrastructure teams:
- Email: For warnings and daily summaries
- Slack/Mattermost: For real-time team awareness
- PagerDuty/Opsgenie: For critical alerts requiring immediate response
- Webhook: For integration with ticketing systems
Configure multiple channels with escalation:
```yaml
# In alertmanager.yml
route:
  receiver: 'team-slack'
  group_wait: 30s
  routes:
    - match:
        severity: critical
      receiver: 'pagerduty-oncall'
      continue: true
    - match:
        severity: critical
      receiver: 'team-slack'
    - match:
        severity: warning
      receiver: 'team-slack'
```
Monitoring Checklist
For each FreeBSD server in production, verify:
- [ ] node_exporter (or Zabbix agent, or Netdata) installed and running
- [ ] System metrics collected: CPU, memory, disk I/O, network
- [ ] ZFS pool health monitored with alerting
- [ ] Disk space alerts configured (warning at 15%, critical at 5%)
- [ ] Server reachability monitored (up/down check)
- [ ] Log files rotated and monitored
- [ ] pkg audit scheduled for security vulnerability scanning
- [ ] Alerting tested (send a test alert, verify delivery)
- [ ] Dashboard accessible and showing current data
- [ ] Backup of monitoring configuration (prometheus.yml, alert rules)
FAQ
What is the lightest-weight monitoring option for FreeBSD?
Netdata is the easiest to deploy and requires no separate monitoring server for a few machines. For truly minimal monitoring, use FreeBSD's built-in tools (vmstat, iostat, top, zpool status) with a cron-based health check script that sends email alerts. The script shown above in the Built-in Tools section is a reasonable starting point.
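For that minimal setup, a single /etc/crontab entry is enough; the recipient address below is a placeholder:

```sh
# Run the health check daily at 07:00 and mail the report via base-system mail(1)
0  7  *  *  *  root  /usr/local/bin/freebsd-healthcheck.sh | mail -s "healthcheck" admin@example.com
```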
Can I monitor FreeBSD jails separately?
Yes. For shared-IP jails, run node_exporter on the host and use process-level metrics to distinguish jail workloads. For VNET jails, run a separate node_exporter instance inside each jail on a unique port. Zabbix agent can run inside jails with separate configurations. Netdata can run per-jail as well.
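For the VNET approach, each jail's rc.conf only needs the exporter enabled on a jail-unique listen port (9101 below is an arbitrary example):

```sh
# /etc/rc.conf inside the VNET jail
node_exporter_enable="YES"
node_exporter_args="--web.listen-address=:9101"
```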
How do I monitor ZFS scrub status?
ZFS scrub progress is not directly exposed by node_exporter. Use a textfile collector with a cron script:
```sh
#!/bin/sh
OUTPUT="/var/tmp/node_exporter/zfs_scrub.prom"
zpool status | awk '/scan:.*scrub/ {
    if (/in progress/) print "zfs_scrub_active 1";
    else if (/repaired/) print "zfs_scrub_active 0"
}' > "$OUTPUT"
```
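Two caveats: node_exporter only reads .prom files from the directory named by its textfile collector flag, and the awk parser can be sanity-checked without a live pool. The rc commands in the comments are FreeBSD-specific; the awk check runs anywhere:

```shell
# Point node_exporter at the textfile directory (FreeBSD, run once):
#   mkdir -p /var/tmp/node_exporter
#   sysrc node_exporter_args="--collector.textfile.directory=/var/tmp/node_exporter"
#   service node_exporter restart

# Exercise the parser with canned zpool status output:
echo '  scan: scrub repaired 0B in 01:02:03 with 0 errors' |
  awk '/scan:.*scrub/ { if (/in progress/) print "zfs_scrub_active 1"; else if (/repaired/) print "zfs_scrub_active 0" }'
# prints: zfs_scrub_active 0
```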
How much overhead does monitoring add to a FreeBSD server?
node_exporter uses approximately 10-20 MB of RAM and negligible CPU (below 0.5% on modern hardware). Zabbix agent is similarly lightweight at 5-15 MB. Netdata is heavier, using 50-150 MB of RAM for its per-second collection and built-in dashboard. The Prometheus server itself uses 1-4 GB of RAM depending on the number of time series it stores.
Should I monitor FreeBSD-specific metrics differently from Linux?
Yes, in several areas. FreeBSD exposes ZFS ARC statistics through sysctl rather than procfs. Memory management categories differ (active, inactive, wired, free vs Linux's cached, buffers, available). Network interface naming follows a different convention. Ensure your node_exporter build includes FreeBSD-specific collectors, which the FreeBSD package does by default.
How do I set up email alerting without a third-party service?
FreeBSD includes sendmail in the base system. Configure Alertmanager to deliver via localhost:
```yaml
# In alertmanager.yml
global:
  smtp_smarthost: "localhost:25"
  smtp_from: "monitoring@example.com"
  smtp_require_tls: false
```
Or install dma (DragonFly Mail Agent) for a lighter alternative:
```sh
pkg install dma
```
Configure /usr/local/etc/dma/dma.conf with your smarthost and point Alertmanager at localhost.
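A minimal dma.conf relaying through a smarthost looks like this; mail.example.com is a placeholder for your actual relay:

```
# /usr/local/etc/dma/dma.conf
SMARTHOST mail.example.com
PORT 25
```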