# How to Set Up Prometheus and Grafana on FreeBSD
Prometheus and Grafana have become the default monitoring stack for good reason. Prometheus collects metrics with a pull-based model that scales cleanly. Grafana turns those metrics into dashboards that actually help you diagnose problems. Together they replace a mess of ad hoc scripts, SNMP traps, and email alerts with a single, queryable time-series system.
FreeBSD is an excellent platform for running this stack. The base system's resource predictability, ZFS's snapshot capabilities for data protection, and the clean rc.conf service management make the entire setup more straightforward than on most Linux distributions. No systemd unit files. No snap packages. Predictable paths under /usr/local/etc/.
This guide covers the complete setup: Prometheus server, node_exporter for host metrics, Grafana for visualization, Alertmanager for notifications, PromQL queries tuned for FreeBSD, and production hardening. Every command targets FreeBSD 14.x. Every path is FreeBSD-native.
If you are evaluating monitoring tools more broadly, start with our [FreeBSD server monitoring guide](/blog/freebsd-server-monitoring-guide/) for an overview of built-in tools and third-party options.
## Table of Contents
1. [Why Prometheus and Grafana](#why-prometheus-and-grafana)
2. [Installing Prometheus](#installing-prometheus)
3. [Configuring prometheus.yml](#configuring-prometheusyml)
4. [Installing and Configuring node_exporter](#installing-and-configuring-node_exporter)
5. [Installing Grafana](#installing-grafana)
6. [Adding Prometheus as a Grafana Data Source](#adding-prometheus-as-a-grafana-data-source)
7. [Building Your First Dashboard](#building-your-first-dashboard)
8. [Useful PromQL Queries for FreeBSD](#useful-promql-queries-for-freebsd)
9. [Setting Up Alertmanager](#setting-up-alertmanager)
10. [Alert Routing: Email, Slack, PagerDuty](#alert-routing-email-slack-pagerduty)
11. [Monitoring Additional Services](#monitoring-additional-services)
12. [Production Hardening](#production-hardening)
13. [FAQ](#faq)
---
## Why Prometheus and Grafana

Traditional monitoring tools like Nagios and Zabbix typically rely on agents or server-scheduled checks that report data back to a central server. Prometheus inverts this. The Prometheus server pulls (scrapes) metrics from HTTP endpoints on each target at a configurable interval. This design has several practical advantages.
**Pull-based collection is easier to reason about.** The Prometheus server controls the scrape schedule. If a target goes down, Prometheus knows immediately because the scrape fails. You do not need to distinguish between "the agent stopped sending data" and "the network is partitioned" -- the failure mode is the same from Prometheus's perspective.
**Time-series storage is built for operational data.** Prometheus stores every scraped metric as a time series -- a sequence of timestamped values identified by a metric name and a set of key-value labels. Its custom TSDB is optimized for high-cardinality write patterns and fast range queries. Unlike relational databases repurposed for monitoring, Prometheus was designed from the start for exactly this workload.
**PromQL is powerful.** Prometheus ships with a purpose-built query language. You can compute rates, aggregate across instances, forecast resource exhaustion, and correlate metrics from different exporters in a single expression. This matters when you are troubleshooting at 3 AM and need answers, not clicks.
**Grafana completes the picture.** Grafana connects to Prometheus as a data source and provides dashboards, alerting, and exploration tools. Its panel editor lets you build visualizations using PromQL directly. It also supports dozens of other data sources if you later add Loki for logs or InfluxDB for long-term storage.
The combination of Prometheus (metrics collection, storage, querying) and Grafana (visualization, alerting, dashboards) gives you a monitoring platform that scales from a single server to thousands of nodes.
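As one illustration of the kind of expression PromQL makes possible, this query (a sketch, assuming node_exporter's standard filesystem metrics) flags any UFS or ZFS filesystem on track to fill up within four days, based on its growth over the last six hours:

```promql
predict_linear(node_filesystem_avail_bytes{fstype=~"ufs|zfs"}[6h], 4 * 86400) < 0
```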
---
## Installing Prometheus

FreeBSD provides Prometheus as a binary package. Install it with pkg:

```sh
pkg install prometheus
```

This installs the Prometheus server binary at /usr/local/bin/prometheus and the default configuration at /usr/local/etc/prometheus.yml.

Enable and start the service:

```sh
sysrc prometheus_enable="YES"
service prometheus start
```

Prometheus listens on port 9090 by default. Verify it is running:

```sh
fetch -qo - http://localhost:9090/-/healthy
```

You should see `Prometheus Server is Healthy.` in the response.

You can also pass additional flags through rc.conf. For example, to set the data retention period and storage path:

```sh
sysrc prometheus_args="--storage.tsdb.retention.time=30d --storage.tsdb.path=/var/db/prometheus"
```

Make sure the storage directory exists and is owned by the prometheus user:

```sh
mkdir -p /var/db/prometheus
chown prometheus:prometheus /var/db/prometheus
```

Restart after changing arguments:

```sh
service prometheus restart
```
---
## Configuring prometheus.yml
The configuration file at /usr/local/etc/prometheus.yml controls everything: scrape intervals, targets, alerting rules, and remote storage. Here is a complete working configuration:
```yaml
# /usr/local/etc/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  scrape_timeout: 10s

rule_files:
  - "/usr/local/etc/prometheus/rules/*.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - "localhost:9093"

scrape_configs:
  # Prometheus monitors itself
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # FreeBSD host metrics via node_exporter
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]
        labels:
          instance: "fbsd-web-01"
          environment: "production"

  # Add more targets as needed
  # - job_name: "node-remote"
  #   static_configs:
  #     - targets: ["192.168.1.10:9100", "192.168.1.11:9100"]
```
Key configuration points:
- **scrape_interval**: 15 seconds is a good default. Going below 10 seconds increases storage and CPU usage with diminishing diagnostic value.
- **evaluation_interval**: How often Prometheus evaluates alerting rules. Keep it in sync with scrape_interval.
- **rule_files**: Path to alert rule files. We will create these when setting up Alertmanager.
- **labels**: Add instance and environment labels to distinguish hosts in queries and dashboards.
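Individual jobs can also override the global defaults. An illustrative fragment (the job name, port, and values here are hypothetical, not part of the setup above) for a slow exporter that needs a longer window:

```yaml
  # Hypothetical exporter that is expensive to scrape
  - job_name: "slow-exporter"
    scrape_interval: 60s
    scrape_timeout: 30s
    static_configs:
      - targets: ["localhost:9999"]
```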
Create the rules directory:
```sh
mkdir -p /usr/local/etc/prometheus/rules
chown -R prometheus:prometheus /usr/local/etc/prometheus
```
Validate the configuration before restarting:
```sh
promtool check config /usr/local/etc/prometheus.yml
```
If the output says SUCCESS, reload Prometheus:
```sh
service prometheus reload
```
---
## Installing and Configuring node_exporter

node_exporter exposes hardware and OS metrics as a Prometheus-compatible HTTP endpoint. It is the standard way to collect CPU, memory, disk, and network metrics.

```sh
pkg install node_exporter
```

Enable and start the service:

```sh
sysrc node_exporter_enable="YES"
service node_exporter start
```

node_exporter listens on port 9100 by default. Verify it is working:

```sh
fetch -qo - http://localhost:9100/metrics | head -20
```

You should see lines like `node_cpu_seconds_total`, `node_memory_active_bytes`, and `node_filesystem_avail_bytes`.
### Enabling FreeBSD-Specific Collectors

node_exporter supports collectors that are particularly useful on FreeBSD. You can enable or disable them through rc.conf:

```sh
sysrc node_exporter_args="--collector.zfs --collector.devstat --collector.meminfo --collector.netdev --collector.cpu --collector.filesystem --collector.loadavg --collector.uname --no-collector.arp --no-collector.bonding --no-collector.ipvs"
service node_exporter restart
```
The --collector.zfs collector exposes ZFS ARC hit rates, pool usage, and dataset statistics -- metrics you will not get from a Linux-centric default configuration. The --collector.devstat collector provides per-device I/O statistics from FreeBSD's devstat subsystem.
### Binding to a Specific Address

On a multi-homed server, restrict node_exporter to the management interface:

```sh
sysrc node_exporter_args="--web.listen-address=10.0.0.5:9100 --collector.zfs --collector.devstat"
service node_exporter restart
```
This prevents metrics from being exposed on public interfaces.
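You can confirm which address the exporter is actually bound to with sockstat from the FreeBSD base system:

```sh
# List IPv4 listening sockets on the node_exporter port
sockstat -4l | grep 9100
```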
---
## Installing Grafana

Install Grafana from packages:

```sh
pkg install grafana
```

This installs Grafana 11.x (the version available in FreeBSD packages at the time of writing) with the configuration at /usr/local/etc/grafana.ini.

Enable and start the service:

```sh
sysrc grafana_enable="YES"
service grafana start
```
Grafana listens on port 3000 by default. Open http://your-server-ip:3000 in a browser. The default credentials are admin / admin. You will be prompted to change the password on first login.
### Securing the Initial Setup

Edit /usr/local/etc/grafana.ini to set your domain and disable anonymous access:

```ini
[server]
http_addr = 127.0.0.1
http_port = 3000
domain = monitoring.example.com
root_url = https://monitoring.example.com/

[security]
admin_user = admin
admin_password = your-strong-password-here
disable_gravatar = true

[auth.anonymous]
enabled = false

[analytics]
reporting_enabled = false
check_for_updates = false
```
Setting http_addr = 127.0.0.1 binds Grafana to localhost only, which is correct if you are placing it behind a reverse proxy. See the [production hardening](#production-hardening) section for the NGINX configuration.
Restart Grafana after editing the configuration:
```sh
service grafana restart
```
---
## Adding Prometheus as a Grafana Data Source
You can configure the data source through the Grafana web UI, but provisioning it through a YAML file is repeatable and version-controllable.
Create the provisioning directory and datasource file:
```sh
mkdir -p /usr/local/etc/grafana/provisioning/datasources
```
Write the datasource configuration:
```yaml
# /usr/local/etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
    editable: false
    jsonData:
      timeInterval: "15s"
      httpMethod: POST
```
Set timeInterval to match your Prometheus scrape_interval so that Grafana's $__rate_interval variable calculates correctly.
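With that in place, panel queries can use $__rate_interval instead of a hard-coded window, and Grafana will pick a range that is always at least several scrape intervals wide. For example:

```promql
rate(node_network_receive_bytes_total{device!~"lo.*"}[$__rate_interval])
```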
Restart Grafana to load the provisioned datasource:
```sh
service grafana restart
```
Verify by navigating to **Connections > Data Sources** in the Grafana UI. You should see "Prometheus" listed as the default data source.
---
## Building Your First Dashboard
A useful starter dashboard for a FreeBSD host covers four metrics: CPU usage, memory usage, disk usage, and network throughput. Here are the PromQL queries for each panel.
### CPU Usage

```promql
100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```
This computes the percentage of CPU time spent doing actual work (everything except idle) averaged across all CPU cores. Use a time-series (graph) panel. Set the Y-axis unit to percent (0-100).
### Memory Usage

```promql
(1 - (node_memory_inactive_bytes + node_memory_free_bytes) / node_memory_size_bytes) * 100
```
On FreeBSD, inactive memory is reclaimable, so it is effectively free for application purposes. This query treats both inactive and free memory as available, which gives you a realistic picture of memory pressure rather than an inflated usage number.
### Disk Usage by Filesystem

```promql
100 - (node_filesystem_avail_bytes{fstype=~"ufs|zfs"} / node_filesystem_size_bytes{fstype=~"ufs|zfs"} * 100)
```
Filter by fstype to show only real filesystems (UFS and ZFS), excluding tmpfs and devfs. Use a bar gauge or table panel with one row per mount point.
### Network Throughput

For received traffic, in bits per second:

```promql
irate(node_network_receive_bytes_total{device!~"lo.*"}[5m]) * 8
```

For transmitted traffic, in bits per second:

```promql
irate(node_network_transmit_bytes_total{device!~"lo.*"}[5m]) * 8
```

Multiplying by 8 converts bytes per second to bits per second, the standard unit for network throughput. Exclude loopback interfaces with device!~"lo.*". Use a time-series panel with two queries overlaid.
### Creating the Dashboard
1. In Grafana, click **Dashboards > New Dashboard > Add visualization**.
2. Select the Prometheus data source.
3. Paste the PromQL query into the query editor.
4. Set appropriate titles, units, and thresholds (e.g., red above 90% for CPU and disk).
5. Repeat for each panel.
6. Save the dashboard.
For a pre-built option, import the community dashboard with ID **1860** (Node Exporter Full). Go to **Dashboards > Import**, enter 1860, select your Prometheus data source, and import. This gives you dozens of panels covering every node_exporter metric. You can customize it from there.
---
## Useful PromQL Queries for FreeBSD
Beyond the basic dashboard panels, these queries are specifically tuned for FreeBSD systems.
### ZFS ARC Hit Rate

```promql
rate(node_zfs_arc_hits_total[5m]) / (rate(node_zfs_arc_hits_total[5m]) + rate(node_zfs_arc_misses_total[5m])) * 100
```
A healthy ARC hit rate is above 90%. If it drops consistently below 80%, you may need to increase the ARC size limit via the vfs.zfs.arc.max sysctl, or your working set has grown beyond what memory can cache.
### ZFS Pool Usage

```promql
node_zfs_pool_allocated_bytes / node_zfs_pool_size_bytes * 100
```
ZFS performance degrades significantly above 80% pool capacity due to how copy-on-write allocates blocks. Alert on this metric.
### System Load Relative to CPU Count

```promql
node_load5 / count without(cpu, mode) (node_cpu_seconds_total{mode="idle"})
```
A value above 1.0 means the system has more runnable processes than CPU cores. Sustained values above 2.0 indicate a bottleneck.
### Disk I/O Utilization (via devstat)

```promql
rate(node_disk_io_time_seconds_total[5m])
```
Values approaching 1.0 (100% utilization) indicate the disk is saturated. This is especially important for UFS volumes on spinning disks. ZFS on NVMe typically stays well below saturation.
### Swap Usage

```promql
(1 - node_memory_swap_free_bytes / node_memory_swap_total_bytes) * 100
```
Any swap usage on a FreeBSD server warrants investigation. FreeBSD's VM system is efficient at managing physical memory, so swap activity usually means you are genuinely out of RAM.
### Network Errors

```promql
rate(node_network_receive_errs_total[5m]) + rate(node_network_transmit_errs_total[5m])
```
Non-zero values indicate driver issues, cable problems, or interface saturation. On FreeBSD, check ifconfig -a and netstat -ib for more detail.
---
## Setting Up Alertmanager
Alertmanager handles alert deduplication, grouping, silencing, and routing. Prometheus evaluates alerting rules and sends firing alerts to Alertmanager, which then dispatches notifications.
### Installation

```sh
pkg install alertmanager
```

Enable and start the service:

```sh
sysrc alertmanager_enable="YES"
service alertmanager start
```
Alertmanager listens on port 9093 by default.
### Alertmanager Configuration

Create the configuration file:

```yaml
# /usr/local/etc/alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m
  smtp_smarthost: "smtp.example.com:587"
  smtp_from: "alertmanager@example.com"
  smtp_auth_username: "alertmanager@example.com"
  smtp_auth_password: "your-smtp-password"
  smtp_require_tls: true

route:
  receiver: "default-email"
  group_by: ["alertname", "instance"]
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - match:
        severity: critical
      receiver: "pagerduty-critical"
      repeat_interval: 1h
    - match:
        severity: warning
      receiver: "slack-warnings"
      repeat_interval: 4h

receivers:
  - name: "default-email"
    email_configs:
      - to: "ops-team@example.com"
        send_resolved: true

  - name: "slack-warnings"
    slack_configs:
      - api_url: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
        channel: "#alerts"
        title: '{{ .GroupLabels.alertname }}'
        text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
        send_resolved: true

  - name: "pagerduty-critical"
    pagerduty_configs:
      - service_key: "your-pagerduty-service-key"
        severity: '{{ if eq .Status "firing" }}critical{{ else }}info{{ end }}'

inhibit_rules:
  - source_match:
      severity: "critical"
    target_match:
      severity: "warning"
    equal: ["alertname", "instance"]
```
The inhibit rule prevents warning-level alerts from firing when a critical alert for the same problem is already active.
Validate the configuration:
```sh
amtool check-config /usr/local/etc/alertmanager/alertmanager.yml
```
Restart Alertmanager:
```sh
service alertmanager restart
```
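Alertmanager also supports temporary silences, which are handy during planned maintenance. A sketch using amtool (the alert name and instance label here are examples from the rules below, not fixed values):

```sh
# Silence one alert on one host for two hours
amtool silence add alertname=DiskSpaceLow instance=fbsd-web-01:9100 \
    --duration=2h --comment="resizing /var" \
    --alertmanager.url=http://localhost:9093
```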
### Alert Rules

Create a rules file for Prometheus to evaluate:

```yaml
# /usr/local/etc/prometheus/rules/freebsd-alerts.yml
groups:
  - name: freebsd-host-alerts
    rules:
      - alert: HighCpuUsage
        expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
          description: "CPU usage has been above 85% for 10 minutes. Current value: {{ $value | printf \"%.1f\" }}%."

      - alert: HighMemoryUsage
        expr: (1 - (node_memory_inactive_bytes + node_memory_free_bytes) / node_memory_size_bytes) * 100 > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage on {{ $labels.instance }}"
          description: "Memory usage above 90% for 5 minutes. Current value: {{ $value | printf \"%.1f\" }}%."

      - alert: DiskSpaceLow
        expr: 100 - (node_filesystem_avail_bytes{fstype=~"ufs|zfs"} / node_filesystem_size_bytes{fstype=~"ufs|zfs"} * 100) > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Disk space low on {{ $labels.instance }} ({{ $labels.mountpoint }})"
          description: "Filesystem {{ $labels.mountpoint }} is {{ $value | printf \"%.1f\" }}% full."

      - alert: DiskSpaceCritical
        expr: 100 - (node_filesystem_avail_bytes{fstype=~"ufs|zfs"} / node_filesystem_size_bytes{fstype=~"ufs|zfs"} * 100) > 95
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Disk space critical on {{ $labels.instance }} ({{ $labels.mountpoint }})"
          description: "Filesystem {{ $labels.mountpoint }} is {{ $value | printf \"%.1f\" }}% full. Immediate action required."

      - alert: ZfsPoolNearFull
        expr: node_zfs_pool_allocated_bytes / node_zfs_pool_size_bytes * 100 > 80
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "ZFS pool nearing capacity on {{ $labels.instance }}"
          description: "ZFS pool {{ $labels.pool }} is {{ $value | printf \"%.1f\" }}% allocated. Performance degrades above 80%."

      - alert: HostDown
        expr: up == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Host {{ $labels.instance }} is unreachable"
          description: "Prometheus has been unable to scrape {{ $labels.instance }} for 2 minutes."

      - alert: SwapInUse
        expr: (1 - node_memory_swap_free_bytes / node_memory_swap_total_bytes) * 100 > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Swap usage detected on {{ $labels.instance }}"
          description: "Swap is {{ $value | printf \"%.1f\" }}% used. Investigate memory pressure."

      - alert: HighNetworkErrors
        expr: rate(node_network_receive_errs_total[5m]) + rate(node_network_transmit_errs_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Network errors on {{ $labels.instance }}"
          description: "Interface {{ $labels.device }} is producing errors. Check cabling and driver status."
```
Validate the rules:
```sh
promtool check rules /usr/local/etc/prometheus/rules/freebsd-alerts.yml
```
Reload Prometheus to pick up the new rules:
```sh
service prometheus reload
```
Verify the rules are loaded by visiting http://localhost:9090/alerts in a browser or with:
```sh
fetch -qo - http://localhost:9090/api/v1/rules | head -5
```
---
## Alert Routing: Email, Slack, PagerDuty
The Alertmanager configuration above demonstrates the three most common routing targets. Here is how each one works.
### Email

The email_configs receiver uses the SMTP settings defined in the global section. The send_resolved: true option sends a follow-up notification when the alert clears. This is important -- you want to know when the problem ended, not just when it started.
For FreeBSD servers without a local mail relay, use an external SMTP service. Any provider that supports STARTTLS on port 587 will work. Set smtp_require_tls: true to enforce encrypted delivery.
### Slack
The slack_configs receiver posts to a Slack channel via an incoming webhook URL. Go to your Slack workspace settings, create an incoming webhook, and paste the URL into the configuration.
The template syntax ({{ .GroupLabels.alertname }}) uses Go's text/template engine. You can customize the message format extensively. For FreeBSD-specific alerts, including the instance and mountpoint labels in the message body helps the on-call engineer identify the affected server immediately.
### PagerDuty
The pagerduty_configs receiver integrates with PagerDuty's Events API. Create a service in PagerDuty, generate an integration key, and paste it as service_key. The severity mapping ensures that firing alerts create critical incidents and resolutions auto-resolve them.
### Routing Logic
Alertmanager processes routes top-to-bottom, using the first match. The structure in the configuration above routes critical alerts to PagerDuty (waking someone up), warnings to Slack (visible during business hours), and everything else to email. The group_by setting groups related alerts into a single notification, so a host with low disk space on three mount points sends one message, not three.
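You can check which receiver a given label set would hit without firing a real alert. A sketch using amtool's route tester against the configuration file from this guide:

```sh
# Prints the receiver(s) a critical alert would be routed to
amtool config routes test \
    --config.file=/usr/local/etc/alertmanager/alertmanager.yml \
    severity=critical
```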
---
## Monitoring Additional Services
node_exporter covers OS-level metrics. For application-level monitoring, Prometheus uses specialized exporters.
### NGINX Exporter

If you run NGINX as a reverse proxy (see our [NGINX setup guide](/blog/nginx-freebsd-production-setup/)), the NGINX exporter exposes connection and request metrics:

```sh
pkg install nginx-prometheus-exporter
sysrc nginx_exporter_enable="YES"
sysrc nginx_exporter_args="--nginx.scrape-uri=http://127.0.0.1:8080/stub_status"
service nginx_exporter start
```
Enable the stub_status module in your NGINX configuration:
```nginx
server {
    listen 127.0.0.1:8080;

    location /stub_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```
Add the scrape target to prometheus.yml:
```yaml
  - job_name: "nginx"
    static_configs:
      - targets: ["localhost:9113"]
```
### PostgreSQL Exporter

For PostgreSQL monitoring (see our [PostgreSQL on FreeBSD guide](/blog/postgresql-freebsd-setup/)), the postgres_exporter exposes query performance, connection pools, and replication lag:

```sh
pkg install postgres_exporter
sysrc postgres_exporter_enable="YES"
sysrc postgres_exporter_args="--web.listen-address=:9187"
```

Set the connection string via an environment variable in /etc/rc.conf:

```sh
sysrc postgres_exporter_env="DATA_SOURCE_NAME=postgresql://prometheus:password@localhost:5432/postgres?sslmode=disable"
```
Create a dedicated monitoring user in PostgreSQL:
```sql
CREATE USER prometheus WITH PASSWORD 'password';
GRANT pg_monitor TO prometheus;
```
Start the exporter and add the target:

```sh
service postgres_exporter start
```

```yaml
  - job_name: "postgresql"
    static_configs:
      - targets: ["localhost:9187"]
```
### Blackbox Exporter

The blackbox exporter probes endpoints over HTTP, HTTPS, DNS, TCP, and ICMP. Use it to monitor your services from the outside:

```sh
pkg install blackbox_exporter
sysrc blackbox_exporter_enable="YES"
service blackbox_exporter start
```
Add a probe target in prometheus.yml:
```yaml
  - job_name: "blackbox-http"
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - "https://example.com"
          - "https://api.example.com/health"
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9115
```
This configuration scrapes the blackbox exporter, which in turn probes each URL and reports whether it returned HTTP 200 and how long it took.
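An uptime alert on top of these probes can be as simple as the following expression, using probe_success, the standard blackbox_exporter success metric (probe_duration_seconds is the companion metric for latency):

```promql
probe_success == 0
```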
---
## Production Hardening
Running Prometheus and Grafana in production requires attention to storage, retention, security, and availability.
### Storage and Retention
Prometheus stores data on local disk in its TSDB format. Estimate your storage needs:
- Each unique time series consumes about 1-2 bytes per sample.
- At a 15-second scrape interval, one metric produces 5,760 samples per day (about 8-12 KB).
- A typical FreeBSD host with node_exporter produces 500-1,000 unique time series.
- For 10 hosts scraped at 15-second intervals, expect roughly 50-100 MB per day.
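Those rules of thumb reduce to simple arithmetic. A back-of-the-envelope sketch in plain sh (the fleet size and per-host series count are illustrative assumptions, not measured values):

```shell
hosts=10                # number of scraped hosts (assumption)
series_per_host=1000    # upper end of typical node_exporter cardinality
scrape_interval=15      # seconds between scrapes
bytes_per_sample=2      # ~1-2 bytes per sample; round up for headroom

samples_per_day=$((86400 / scrape_interval))
total_series=$((hosts * series_per_host))
bytes_per_day=$((total_series * samples_per_day * bytes_per_sample))
echo "$((bytes_per_day / 1024 / 1024)) MB/day"
```

At the rounded-up 2 bytes per sample this lands just above 100 MB/day for 10 hosts, consistent with the 50-100 MB estimate above.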
Set retention based on your needs:
```sh
sysrc prometheus_args="--storage.tsdb.retention.time=90d --storage.tsdb.retention.size=50GB --storage.tsdb.path=/var/db/prometheus"
```
The retention.size flag acts as a safety cap to prevent the disk from filling. Whichever limit is reached first triggers data deletion.
If you are running ZFS, put the Prometheus data directory on a dedicated dataset with appropriate compression:
```sh
zfs create -o compression=lz4 -o atime=off -o recordsize=128K zroot/prometheus
zfs set mountpoint=/var/db/prometheus zroot/prometheus
chown prometheus:prometheus /var/db/prometheus
```
The 128K record size aligns well with Prometheus's TSDB chunk sizes. LZ4 compression typically achieves 2-3x reduction on metrics data.
### Reverse Proxy with NGINX
Do not expose Prometheus or Grafana directly to the internet. Place them behind NGINX with TLS:
```nginx
# /usr/local/etc/nginx/conf.d/monitoring.conf
server {
    listen 443 ssl http2;
    server_name monitoring.example.com;

    ssl_certificate /usr/local/etc/letsencrypt/live/monitoring.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/monitoring.example.com/privkey.pem;

    # Grafana
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Prometheus (restrict access). No trailing slash on proxy_pass: the
    # /prometheus/ prefix is passed through, matching Prometheus's default
    # route prefix when --web.external-url contains a path.
    location /prometheus/ {
        auth_basic "Prometheus";
        auth_basic_user_file /usr/local/etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name monitoring.example.com;
    return 301 https://$host$request_uri;
}
```
If you serve Prometheus under a subpath, configure it to match:

```sh
sysrc prometheus_args="--web.external-url=https://monitoring.example.com/prometheus/ --storage.tsdb.retention.time=90d --storage.tsdb.path=/var/db/prometheus"
```
Create the htpasswd file for basic authentication on the Prometheus endpoint. The htpasswd utility ships with the apache24 package; if you would rather not install Apache just for one tool, generate the entry with openssl from the FreeBSD base system:

```sh
printf 'prometheus-admin:%s\n' "$(openssl passwd -apr1)" > /usr/local/etc/nginx/.htpasswd
```
For a complete NGINX setup guide, see our [NGINX production setup on FreeBSD](/blog/nginx-freebsd-production-setup/).
### Firewall Rules
If you use PF (and you should -- see our [PF firewall guide](/blog/pf-firewall-freebsd/)), restrict access to monitoring ports:
```
# /etc/pf.conf (excerpt)
monitoring_ports = "{ 9090, 9093, 9100, 3000 }"
management_net = "10.0.0.0/24"

# Allow monitoring access only from management network
pass in on $int_if proto tcp from $management_net to (self) port $monitoring_ports
block in on $ext_if proto tcp to (self) port $monitoring_ports
```
### Backup Strategy

Prometheus's TSDB supports snapshot-based backups. The snapshot endpoint is part of the admin API, which is disabled by default; add --web.enable-admin-api to prometheus_args first. The endpoint requires an HTTP POST, which fetch(1) cannot send, so use curl (pkg install curl):

```sh
curl -s -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot
```
This creates a snapshot under the Prometheus data directory. If you are running ZFS, you can also snapshot the dataset directly:
```sh
zfs snapshot zroot/prometheus@$(date +%Y%m%d-%H%M%S)
```
ZFS snapshots are instantaneous and consistent. Schedule them with cron for automated backups.
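A sketch of a crontab entry for hourly snapshots (note that % must be escaped in crontab entries; pruning old snapshots is left to a separate job):

```sh
# /etc/crontab entry: hourly ZFS snapshot of the Prometheus dataset
0  *  *  *  *  root  /sbin/zfs snapshot zroot/prometheus@auto-$(date +\%Y\%m\%d-\%H\%M)
```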
---
## FAQ
### What resources does Prometheus need on FreeBSD?
For a small deployment (1-20 hosts), Prometheus runs comfortably on 1 CPU core and 1-2 GB of RAM. Memory usage scales with the number of active time series rather than the total number of samples stored. A server monitoring 100 hosts with node_exporter, one or two application exporters per host, and a 90-day retention period will typically use 4-8 GB of RAM and 50-100 GB of disk over time.
### Can I monitor remote FreeBSD servers from a central Prometheus instance?
Yes. Install node_exporter on each remote server and add their IP:port as targets in your central prometheus.yml. Prometheus will scrape them over the network. For servers behind firewalls, you have two options: open port 9100 on the firewall (restrict source IP to the Prometheus server), or use Prometheus's push gateway as an intermediary, though the push gateway is intended for batch jobs, not general host monitoring. For complex network topologies, consider running a Prometheus instance in each network segment and using federation to aggregate data centrally.
### How do I upgrade Prometheus and Grafana on FreeBSD?

Standard pkg updates handle this:

```sh
pkg update
pkg upgrade prometheus grafana alertmanager node_exporter
```
Prometheus is backward-compatible with its TSDB format across minor versions. Grafana preserves dashboard configurations in its SQLite database at /var/db/grafana/grafana.db. Always take a ZFS snapshot or filesystem backup before major version upgrades. Review the upstream release notes for breaking changes, particularly around PromQL behavior or configuration syntax.
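For example, a pre-upgrade routine on a ZFS system might look like this (a sketch; the dataset name assumes the layout from the hardening section):

```sh
# Snapshot metrics data and copy the Grafana database before upgrading
zfs snapshot zroot/prometheus@pre-upgrade
cp /var/db/grafana/grafana.db /var/db/grafana/grafana.db.pre-upgrade
pkg upgrade prometheus grafana alertmanager node_exporter
```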
### How do I monitor multiple services on the same FreeBSD host?
Install the relevant exporter for each service alongside node_exporter. A typical FreeBSD web server might run node_exporter (port 9100), nginx-prometheus-exporter (port 9113), and postgres_exporter (port 9187) simultaneously. Each exporter runs as its own service and exposes its own metrics endpoint. Add each as a separate job_name in prometheus.yml. There is no conflict between exporters -- they bind to different ports and scrape independently.
### Should I use Prometheus or Zabbix on FreeBSD?
Both work well on FreeBSD, but they solve different problems. Prometheus excels at time-series metrics, dynamic cloud environments, and container-based infrastructure where targets come and go. Its PromQL query language is more powerful for ad hoc analysis. Zabbix is better suited to traditional infrastructure with stable inventories, SNMP-managed network devices, and organizations that prefer a batteries-included approach with auto-discovery and built-in agent management. For a pure FreeBSD server fleet, Prometheus plus Grafana gives you more flexibility and a more active ecosystem of exporters. For a full comparison, see our [FreeBSD server monitoring guide](/blog/freebsd-server-monitoring-guide/).
### How do I persist Grafana dashboards across reinstalls?
Use Grafana's provisioning system. Store dashboard JSON files in /usr/local/etc/grafana/provisioning/dashboards/ and create a provider configuration:
```yaml
# /usr/local/etc/grafana/provisioning/dashboards/default.yml
apiVersion: 1

providers:
  - name: "default"
    orgId: 1
    folder: ""
    type: file
    options:
      path: /usr/local/etc/grafana/provisioning/dashboards
```
Export your dashboards as JSON from the Grafana UI, save them in that directory, and they will be loaded automatically on startup. Version-control this directory alongside your Prometheus configuration for a fully reproducible monitoring setup.