Grafana on FreeBSD: Dashboard and Visualization Review
Grafana is the leading open-source platform for monitoring visualization and analytics. It connects to time-series databases, log aggregators, and other data sources, and presents the data through customizable dashboards with panels, graphs, tables, and alerts. Grafana does not collect or store metrics itself -- it is purely a visualization and alerting layer that sits on top of data sources like Prometheus, InfluxDB, Elasticsearch, PostgreSQL, and dozens more.
For FreeBSD administrators, Grafana transforms raw metrics from tools like Prometheus and node_exporter into actionable dashboards showing system health, ZFS pool status, network throughput, and service availability. This review covers Grafana editions (OSS, Enterprise, Cloud), installation on FreeBSD, data source configuration, dashboard building, FreeBSD-specific panels, alerting, and the plugin ecosystem.
Grafana OSS vs Enterprise vs Cloud
Grafana OSS (Open Source)
Grafana OSS is the free, open-source edition licensed under AGPLv3. It includes:
- All core visualization features (time series graphs, stat panels, tables, heatmaps, histograms, gauges, logs, traces)
- 100+ built-in data source plugins
- Dashboard templating with variables
- Alerting with notification channels
- Organization and team management
- Annotations and event overlays
- Dashboard sharing and embedding
For FreeBSD infrastructure monitoring, Grafana OSS provides everything you need. The vast majority of Grafana deployments use the OSS edition.
Grafana Enterprise
Grafana Enterprise adds features targeted at large organizations:
- Enterprise data source plugins (Splunk, Oracle, ServiceNow, Datadog, New Relic)
- Enhanced LDAP and SAML authentication
- Data source permissions (restrict who can query which data sources)
- Reporting (scheduled PDF/PNG reports via email)
- Audit logging
- White-labeling (custom branding)
- Enterprise support from Grafana Labs
Enterprise requires a paid license. It is the same binary as OSS with additional features unlocked.
Grafana Cloud
Grafana Cloud is the managed SaaS offering from Grafana Labs. It includes hosted Grafana, hosted Prometheus (Mimir), hosted Loki for logs, and hosted Tempo for traces. The free tier allows up to 10,000 metrics series, 50 GB of logs, and 50 GB of traces.
For FreeBSD NAS or homelab use, self-hosted Grafana OSS is the natural choice. For organizations that want to avoid managing Grafana infrastructure, Grafana Cloud is an option -- your FreeBSD servers send metrics to the cloud endpoint.
Installation on FreeBSD
Package Installation
```sh
pkg install grafana10
```
This installs Grafana 10.x, the current major release. The binary is at /usr/local/bin/grafana, and the main configuration file is at /usr/local/etc/grafana/grafana.ini.
Enable and start:
```sh
sysrc grafana_enable="YES"
service grafana start
```
Grafana listens on port 3000 by default. Access the web UI at http://server:3000. The default credentials are admin/admin -- change the password on first login.
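Before opening a browser, you can confirm the service is answering with Grafana's health endpoint -- a quick check that assumes the default port and a running instance:

```sh
# Query Grafana's built-in health endpoint; it returns a small JSON
# document reporting database status and the running version.
curl -s http://localhost:3000/api/health
```

A healthy instance reports "database": "ok"; anything else usually points at a misconfigured [database] section in grafana.ini.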
Configuration
Key settings in /usr/local/etc/grafana/grafana.ini:
```ini
[server]
http_addr = 0.0.0.0
http_port = 3000
domain = grafana.example.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/

[database]
type = sqlite3
path = /var/db/grafana/grafana.db

[security]
admin_user = admin
admin_password = change_me_immediately
secret_key = generate_a_random_32_char_string
cookie_secure = false
cookie_samesite = lax

[users]
allow_sign_up = false
auto_assign_org = true
auto_assign_org_role = Viewer

[auth.anonymous]
enabled = false

[log]
mode = file
level = info
filters =

[log.file]
log_rotate = true
daily_rotate = true
max_days = 7
```
After editing, restart:
```sh
service grafana restart
```
Reverse Proxy with nginx
For production deployments, place Grafana behind nginx with TLS:
```sh
pkg install nginx
```
Configure /usr/local/etc/nginx/nginx.conf:
```nginx
server {
    listen 443 ssl http2;
    server_name grafana.example.com;

    ssl_certificate     /usr/local/etc/ssl/grafana.crt;
    ssl_certificate_key /usr/local/etc/ssl/grafana.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket support for live dashboards
    location /api/live/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```
Update grafana.ini:
```ini
[server]
root_url = https://grafana.example.com/
```
Data Source Configuration
Prometheus (Primary)
Prometheus is the most common data source for FreeBSD monitoring. Configure it in the Grafana web UI or via provisioning.
Web UI method: Navigate to Configuration > Data Sources > Add data source > Prometheus. Set the URL to http://localhost:9090 (or your Prometheus server address). Click Save & Test.
Provisioning method (GitOps-friendly): Create /usr/local/etc/grafana/provisioning/datasources/prometheus.yml:
```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
    editable: true
    jsonData:
      timeInterval: "15s"
      httpMethod: POST
```
PostgreSQL
For database metrics or application data:
```yaml
# /usr/local/etc/grafana/provisioning/datasources/postgresql.yml
apiVersion: 1

datasources:
  - name: PostgreSQL
    type: postgres
    url: localhost:5432
    database: grafana_data
    user: grafana_reader
    secureJsonData:
      password: "secure_password"
    jsonData:
      sslmode: disable
      maxOpenConns: 10
      maxIdleConns: 5
      connMaxLifetime: 14400
      postgresVersion: 1600
```
Loki (Log Aggregation)
For viewing FreeBSD logs alongside metrics:
```sh
pkg install loki promtail
sysrc loki_enable="YES"
sysrc promtail_enable="YES"
```
Configure promtail to ship FreeBSD logs:
```yaml
# /usr/local/etc/promtail/promtail.yml
server:
  http_listen_port: 9080

positions:
  filename: /var/db/promtail/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: freebsd-syslog
    static_configs:
      - targets: [localhost]
        labels:
          job: syslog
          host: nas01
          __path__: /var/log/messages
  - job_name: freebsd-auth
    static_configs:
      - targets: [localhost]
        labels:
          job: auth
          host: nas01
          __path__: /var/log/auth.log
```
Add Loki as a data source in Grafana with URL http://localhost:3100.
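As with Prometheus, the Loki data source can be provisioned from a file instead of clicked together in the UI. A sketch, assuming the same provisioning directory layout as above (the maxLines cap is an optional tuning knob):

```yaml
# /usr/local/etc/grafana/provisioning/datasources/loki.yml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
    jsonData:
      maxLines: 1000   # cap on log lines returned per query
```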
Building Dashboards
Dashboard Structure
A well-organized Grafana dashboard follows a top-down hierarchy:
- Row 1 -- Overview: Stat panels showing total servers, alerts firing, overall uptime percentage
- Row 2 -- CPU and Memory: Time series graphs for CPU utilization and memory breakdown
- Row 3 -- Storage: ZFS pool health, disk usage gauges, I/O throughput
- Row 4 -- Network: Interface traffic, connection counts, error rates
- Row 5 -- Services: Per-service health indicators (HTTP response codes, database connections)
Dashboard Variables
Variables make dashboards reusable across servers. Configure in Dashboard Settings > Variables:
Server selector:
- Name: instance
- Type: Query
- Data source: Prometheus
- Query: label_values(node_uname_info{sysname="FreeBSD"}, instance)
Interface selector:
- Name: interface
- Type: Query
- Data source: Prometheus
- Query: label_values(node_network_receive_bytes_total{instance="$instance"}, device)
Use $instance and $interface in panel queries to filter data dynamically.
Panel Examples
CPU Utilization (Time Series):
```promql
# Panel query
100 - (avg by (instance) (rate(node_cpu_seconds_total{instance="$instance", mode="idle"}[5m])) * 100)
```
Memory Breakdown (Stacked Time Series):
```promql
# Query A: Active
node_memory_active_bytes{instance="$instance"}
# Query B: Inactive
node_memory_inactive_bytes{instance="$instance"}
# Query C: Wired
node_memory_wired_bytes{instance="$instance"}
# Query D: Free
node_memory_free_bytes{instance="$instance"}
# Query E: ARC
node_zfs_arc_size{instance="$instance"}
```
Set display to stacked area chart. This shows the complete FreeBSD memory breakdown including ZFS ARC.
ZFS ARC Hit Rate (Gauge):
```promql
rate(node_zfs_arc_hits_total{instance="$instance"}[5m])
/
(rate(node_zfs_arc_hits_total{instance="$instance"}[5m]) + rate(node_zfs_arc_misses_total{instance="$instance"}[5m]))
* 100
```
Set thresholds: green > 90%, yellow > 80%, red < 80%.
Network Throughput (Time Series with dual Y-axis):
```promql
# Query A (positive axis): Inbound
rate(node_network_receive_bytes_total{instance="$instance", device="$interface"}[5m]) * 8
# Query B (negative axis): Outbound
-rate(node_network_transmit_bytes_total{instance="$instance", device="$interface"}[5m]) * 8
```
Unit: bits/sec. This creates the classic bidirectional traffic graph.
Disk Space Usage (Bar Gauge):
```promql
1 - (node_filesystem_avail_bytes{instance="$instance", fstype="zfs"} / node_filesystem_size_bytes{instance="$instance", fstype="zfs"})
```
Unit: percentunit. Thresholds: green < 0.7, yellow < 0.85, red >= 0.85.
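The percentunit math is easy to sanity-check offline. With hypothetical byte counts, the same used-fraction formula can be reproduced in the shell:

```sh
# Reproduce the panel's used-fraction calculation: used = 1 - avail/size
avail=150000000000    # hypothetical node_filesystem_avail_bytes value
size=1000000000000    # hypothetical node_filesystem_size_bytes value
awk -v a="$avail" -v s="$size" 'BEGIN { printf "%.2f\n", 1 - a/s }'
# prints 0.85, which lands exactly on the red >= 0.85 threshold
```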
Importing Community Dashboards
The fastest way to get started is importing pre-built dashboards from grafana.com:
- Node Exporter Full (ID: 1860): Comprehensive system metrics dashboard
- Node Exporter for Prometheus (ID: 11074): Simpler overview dashboard
- ZFS (ID: 328): ZFS-specific metrics
Import via Dashboards > Import > Enter dashboard ID.
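Imported dashboards can also be version-controlled: save their JSON to disk and let a file provider load them on startup. A sketch, assuming a freebsd subdirectory holding the exported JSON files:

```yaml
# /usr/local/etc/grafana/provisioning/dashboards/providers.yml
apiVersion: 1

providers:
  - name: freebsd-dashboards
    orgId: 1
    folder: FreeBSD          # UI folder the dashboards appear under
    type: file
    disableDeletion: false
    updateIntervalSeconds: 30
    options:
      path: /usr/local/etc/grafana/provisioning/dashboards/freebsd
```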
FreeBSD-Specific Panels
ZFS Pool Health Matrix
Create a table panel showing all ZFS pools and their health status. This requires the textfile collector approach for pool-level metrics:
```promql
# PromQL query
zpool_health
```
Use value mappings: 1 = "ONLINE" (green), 0 = "DEGRADED" (red).
FreeBSD Memory Model
FreeBSD's memory model differs from Linux. Create a panel that accurately represents FreeBSD memory:
```promql
# Active memory (in use by processes)
node_memory_active_bytes{instance="$instance"}
# Inactive (recently freed, still cached)
node_memory_inactive_bytes{instance="$instance"}
# Wired (kernel, cannot be paged out)
node_memory_wired_bytes{instance="$instance"}
# ARC (ZFS read cache, reclaimable)
node_zfs_arc_size{instance="$instance"}
# Free
node_memory_free_bytes{instance="$instance"}
```
Display as a stacked area chart with distinct colors. Add an annotation: "ARC is reclaimable -- system is not low on memory if ARC is large."
Jail Resource Usage
If monitoring jails with separate node_exporter instances:
```promql
# CPU by jail
100 - (avg by (instance) (rate(node_cpu_seconds_total{instance=~".*jail.*", mode="idle"}[5m])) * 100)
```
Use the instance label to distinguish jails. Create a repeat panel that shows one row per jail.
Boot Environment Status
Use a textfile collector to expose boot environment information:
```sh
#!/bin/sh
# /usr/local/etc/prometheus/textfile/bectl.sh
OUTPUT="/var/tmp/node_exporter/bectl.prom"

active_be=$(bectl list -H | awk '$2 == "NR" {print $1}')
be_count=$(bectl list -H | wc -l | tr -d ' ')

echo "freebsd_boot_environment_count $be_count" > "$OUTPUT"
echo "freebsd_boot_environment_active{name=\"$active_be\"} 1" >> "$OUTPUT"
```
Display as a stat panel in Grafana showing the active boot environment name and total BE count.
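The script only yields fresh metrics if it runs periodically. A system crontab entry (FreeBSD's /etc/crontab format, which includes a user field) can drive it; the five-minute interval here is an assumption, not a requirement:

```
# /etc/crontab -- refresh boot environment metrics every 5 minutes
*/5  *  *  *  *  root  /usr/local/etc/prometheus/textfile/bectl.sh
```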
Alerting
Grafana 10 includes a unified alerting system that can evaluate queries and fire alerts without relying on external tools like Alertmanager (though it integrates with Alertmanager too).
Creating Alert Rules
In the Grafana UI, navigate to Alerting > Alert rules > New alert rule.
Example: High CPU Alert:
- Rule name: "High CPU - FreeBSD"
- Query: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
- Condition: IS ABOVE 90
- Evaluate every: 1m
- For: 10m
- Labels: severity=warning
Example: ZFS Pool Degraded:
- Rule name: "ZFS Pool Degraded"
- Query: zpool_health == 0
- Condition: HAS VALUE
- Evaluate every: 1m
- For: 1m
- Labels: severity=critical
Contact Points
Configure where alerts are delivered:
```yaml
# Via provisioning: /usr/local/etc/grafana/provisioning/alerting/contacts.yml
apiVersion: 1

contactPoints:
  - orgId: 1
    name: "Operations Team"
    receivers:
      - uid: email-ops
        type: email
        settings:
          addresses: "ops@example.com"
      - uid: slack-ops
        type: slack
        settings:
          url: "https://hooks.slack.com/services/T00/B00/XXXXX"
          channel: "#alerts"
```
Notification Policies
Route alerts to different contact points based on severity:
```yaml
# /usr/local/etc/grafana/provisioning/alerting/policies.yml
apiVersion: 1

policies:
  - orgId: 1
    receiver: "Operations Team"
    group_by: ["alertname", "instance"]
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 4h
    routes:
      - receiver: "On-Call PagerDuty"
        matchers:
          - severity = critical
        continue: false
```
Plugin Ecosystem
Grafana's plugin system extends its capabilities with additional panels, data sources, and apps.
Installing Plugins on FreeBSD
```sh
# Install via CLI
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install grafana-worldmap-panel
grafana-cli plugins install grafana-clock-panel

# Restart to load plugins
service grafana restart
```
Useful Plugins for FreeBSD Monitoring
Panels:
- Pie Chart: Visualize disk usage distribution across pools
- Worldmap: Geographic view if monitoring distributed FreeBSD servers
- Status Panel: Traffic-light health indicators for services
- Diagram Panel: Network topology visualization
- Clock: Display server timezone for NOC dashboards
Data Sources:
- Loki: Log aggregation and correlation with metrics
- Infinity: Query any REST API, CSV, JSON, or XML data source
- JSON API: Fetch data from custom APIs (useful for FreeBSD-specific tools that expose JSON)
Apps:
- Grafana OnCall: Incident management and on-call scheduling
- Grafana k6: Load testing visualization (useful for benchmarking FreeBSD web servers)
Plugin Directory
List installed plugins:
```sh
grafana-cli plugins ls
```
Plugins are stored in /usr/local/share/grafana/plugins/ on FreeBSD.
Performance Tuning
Dashboard Performance
Large dashboards with many panels can be slow. Optimize:
- Reduce time range: Default to 6h or 12h instead of 24h for detailed graphs
- Increase min interval: Set panel min interval to match your scrape interval (e.g., 15s)
- Use $__rate_interval: In Prometheus queries, use [$__rate_interval] instead of hardcoded intervals
- Limit series: Use topk(10, ...) in PromQL to show only the top 10 series
- Mixed resolution: Use high resolution (15s) for recent data, lower resolution (5m) for older data
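Putting two of those recommendations together, a panel query that follows both the $__rate_interval and topk advice might look like:

```promql
# Top 10 busiest receive-side interfaces, with Grafana choosing a
# rate window that matches the panel resolution and scrape interval
topk(10, rate(node_network_receive_bytes_total{instance="$instance"}[$__rate_interval]))
```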
Grafana Server Performance
For large deployments, tune the Grafana server:
```ini
[database]
# Use PostgreSQL instead of SQLite for large installations
type = postgres
host = localhost:5432
name = grafana
user = grafana
password = secure_password

[rendering]
# Enable concurrent rendering
concurrent_render_request_limit = 10

[dataproxy]
# Increase timeout for slow queries
timeout = 60
```
FAQ
Can Grafana collect metrics directly from FreeBSD?
No. Grafana is a visualization layer only. It queries data sources (Prometheus, InfluxDB, PostgreSQL, etc.) that have already collected and stored the metrics. You need a collector (like node_exporter + Prometheus) to gather FreeBSD metrics and a data source for Grafana to query.
What is the difference between Grafana alerting and Prometheus Alertmanager?
Grafana alerting evaluates queries within Grafana and sends notifications through Grafana's contact points. Prometheus Alertmanager receives alerts from Prometheus server and handles routing, grouping, and notification. Both work for FreeBSD monitoring. If you already have Prometheus with Alertmanager, use that. If you prefer managing alerts in the same UI as dashboards, use Grafana alerting. They can also work together -- Grafana can forward alerts to Alertmanager.
How do I back up Grafana dashboards on FreeBSD?
Export dashboards as JSON from the UI, or use the Grafana API:
```sh
# List all dashboards
curl -s http://admin:password@localhost:3000/api/search | jq '.[] | .uid'

# Export a dashboard
curl -s http://admin:password@localhost:3000/api/dashboards/uid/DASHBOARD_UID | jq > dashboard-backup.json
```
For automated backups, use grafana-backup or store dashboards as provisioned JSON files in /usr/local/etc/grafana/provisioning/dashboards/.
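The two API calls above combine naturally into a loop that exports every dashboard to its own file. A sketch, assuming jq is installed, the credentials shown earlier, and a running instance:

```sh
#!/bin/sh
# Export every dashboard to ./backup/<uid>.json via the Grafana HTTP API
GRAFANA="http://admin:password@localhost:3000"
mkdir -p backup
for uid in $(curl -s "$GRAFANA/api/search?type=dash-db" | jq -r '.[].uid'); do
    curl -s "$GRAFANA/api/dashboards/uid/$uid" | jq . > "backup/$uid.json"
done
```

Run it from cron and commit the backup directory to git for a simple change history.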
How many resources does Grafana use on FreeBSD?
Grafana uses approximately 50-100 MB of RAM for a small deployment (a few dashboards, handful of users). CPU usage is minimal except during dashboard rendering. SQLite is fine for up to ~20 concurrent users; switch to PostgreSQL for larger deployments. Disk usage depends on dashboard count and alert history.
Can I embed Grafana panels in other web pages?
Yes. Enable anonymous access or use Grafana's share/embed feature. Each panel can be embedded via an iframe URL. For public dashboards without authentication, configure:
```ini
[auth.anonymous]
enabled = true
org_name = Public
org_role = Viewer
```
Or use Grafana's snapshot feature to create static, shareable dashboard snapshots.
How do I monitor Grafana itself?
Grafana exposes Prometheus metrics at /metrics. Add it as a scrape target in Prometheus:
```yaml
# In prometheus.yml
scrape_configs:
  - job_name: "grafana"
    static_configs:
      - targets: ["localhost:3000"]
```
Monitor dashboard load times, API request rates, and alerting evaluation duration. Import the "Grafana Internals" dashboard (ID: 3590) for pre-built panels.
Does Grafana support dark mode for NOC displays?
Yes. Grafana supports light and dark themes at the organization level and per-user preference. For NOC (Network Operations Center) displays, use dark theme, enable kiosk mode (append ?kiosk to the dashboard URL), and set auto-refresh to 30 seconds. Create a dedicated NOC dashboard with large stat panels and high-contrast colors visible from a distance.