Netdata on FreeBSD: Real-Time Performance Monitoring Review
Netdata is the fastest way to get comprehensive real-time monitoring on a FreeBSD server. Install it, open a browser, and within seconds you have hundreds of metrics updating every second with no configuration files to write, no database to deploy, and no query language to learn. That zero-config promise is Netdata's core value proposition, and on FreeBSD it largely delivers -- with some caveats worth understanding before you commit.
This review covers Netdata's FreeBSD-specific capabilities, what it monitors out of the box, how the dashboard works, the difference between Netdata Cloud and self-hosted operation, actual resource overhead numbers, and how it compares to Prometheus-based monitoring stacks. If you are evaluating monitoring tools for FreeBSD, see the full comparison of monitoring tools for a broader perspective.
What Netdata Monitors Out of the Box
Netdata's auto-detection is its defining feature. On a fresh FreeBSD installation, Netdata discovers and monitors:
- CPU: per-core utilization, user/system/idle/interrupt breakdown, context switches, interrupts per CPU
- Memory: physical memory usage, wired/active/inactive/laundry/free breakdown, swap usage, kernel memory
- Disk I/O: per-disk reads/writes, bandwidth, operations/sec, average I/O time, utilization percentage
- Network: per-interface bandwidth, packets/sec, errors, drops, multicast traffic
- ZFS: ARC size and hit rate, L2ARC statistics, pool I/O operations, scrub progress, dataset usage
- Processes: total count, running/sleeping/zombie, forks/sec, per-process CPU and memory (with the apps.plugin)
- System load: 1/5/15 minute averages, entropy available, uptime
- TCP/UDP: connection states, segments sent/received, retransmits, UDP datagrams
- IPv4/IPv6: packets, errors, fragments, ICMP statistics
Beyond system metrics, Netdata auto-detects and monitors running services:
- NGINX/Apache: active connections, requests/sec, response codes
- PostgreSQL/MySQL: queries/sec, connections, buffer usage, replication lag
- Redis: commands/sec, memory usage, connected clients, keyspace hits/misses
- Postfix: queue sizes, delivery rates
- Unbound: query rates, cache hit ratios, DNSSEC validation stats
This auto-detection works by probing known ports and socket paths. If PostgreSQL is running on port 5432, Netdata finds it and starts collecting metrics. No configuration needed.
Installation on FreeBSD
Binary Package
```sh
pkg install netdata
```
This installs Netdata and its dependencies. The configuration lives under /usr/local/etc/netdata/ and the data directory is /var/db/netdata/.
Enable and start:
```sh
sysrc netdata_enable="YES"
service netdata start
```
The dashboard is immediately available at http://your-server:19999.
Build from Ports
If you need custom build options:
```sh
cd /usr/ports/net-mgmt/netdata
make install clean
```
Post-Installation Verification
Check that Netdata is collecting FreeBSD metrics:
```sh
curl -s http://localhost:19999/api/v1/info | grep os
```
This should return "os_name":"freebsd". If it shows a generic OS, the FreeBSD-specific collectors may not be loading correctly.
Check running collectors:
```sh
curl -s http://localhost:19999/api/v1/collectors
```
This lists every active data collector with its update frequency and the number of charts it produces.
FreeBSD-Specific Collectors
Netdata includes collectors written specifically for FreeBSD that read from sysctl, devstat, and FreeBSD-specific interfaces rather than the Linux /proc and /sys filesystems.
ZFS Monitoring
The ZFS collector is one of the most valuable on FreeBSD. It tracks:
- ARC (Adaptive Replacement Cache): size, target size, minimum size, hit rate, data/metadata/prefetch breakdown. The ARC hit rate is the single most important ZFS performance metric -- if it stays below 90%, your working set exceeds the memory available for caching.
- L2ARC: size, reads, writes, hit rate. Useful if you have an SSD configured as an L2ARC device.
- ZIL (ZFS Intent Log): commit count and commit data size. High ZIL activity indicates write-heavy workloads that benefit from a dedicated SLOG device.
- Pool I/O: read/write operations and bandwidth per pool.
- Dataset usage: space used, available, referenced, and compression ratio per dataset.
```sh
# Verify ZFS metrics are being collected
curl -s http://localhost:19999/api/v1/charts | grep -i zfs
```
Network Interface Monitoring
On FreeBSD, Netdata reads interface statistics from the kernel via sysctl net.link. It captures per-interface bytes, packets, errors, drops, and collisions. VLAN interfaces, lagg aggregation interfaces, and bridge interfaces are all detected automatically.
Device Statistics
The devstat collector reads FreeBSD's device statistics framework for disk I/O. This provides accurate per-device metrics that match what gstat reports, including I/O operations, bandwidth, busy time, and queue length.
FreeBSD Jails
Netdata does not natively monitor individual jails from the host. Each jail appears as part of the host's aggregate metrics. To monitor per-jail resource usage, you need to either:
- Install Netdata inside each jail (increases overhead)
- Use the jail command to query per-jail statistics and feed them to Netdata via a custom collector
- Use an external tool like Prometheus with jail_exporter and visualize in Grafana
This is one area where Netdata falls short compared to Prometheus-based setups on FreeBSD.
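The custom-collector route is less work than it sounds. Netdata reads external plugins over stdout using a simple line protocol (CHART, DIMENSION, BEGIN, SET, END). The sketch below is a hypothetical example, not an official plugin: the chart and dimension names are our own invention, it falls back to stub jail names when jls is unavailable, and it emits a single sample rather than looping forever as a real plugin would.

```shell
#!/bin/sh
# Sketch of a custom Netdata external plugin reporting per-jail process
# counts. The CHART/DIMENSION/BEGIN/SET/END lines are the plugin protocol
# that Netdata's plugins.d reads on stdout; everything else here is a
# stand-in. A real plugin would loop and sleep between samples.

emit_sample() {
    # Jail names from jls(8); fall back to stub names off FreeBSD
    jails=$(jls -q name 2>/dev/null)
    [ -z "$jails" ] && jails="web db"

    # CHART type.id name title units family context charttype priority update_every
    echo "CHART jail.processes '' 'Processes per jail' 'processes' jails '' line 90000 1"
    for j in $jails; do
        echo "DIMENSION $j '' absolute 1 1"
    done

    echo "BEGIN jail.processes"
    for j in $jails; do
        # Count processes assigned to this jail (0 if ps has no jail column)
        n=$(ps -ax -o jail= 2>/dev/null | grep -cx "$j")
        echo "SET $j = ${n:-0}"
    done
    echo "END"
}

emit_sample
```

Installed as an executable under Netdata's plugins.d directory, a looping version of this would make per-jail charts appear alongside the built-in ones.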
Dashboard Walkthrough
The Netdata dashboard is a single-page web application served directly by the Netdata agent on port 19999. No separate web server is required.
Navigation
The left sidebar organizes charts by category: System Overview, CPU, Memory, Disks, Networking, ZFS, Applications, and any detected services. Clicking a category scrolls to that section. The dashboard loads all charts on a single page, which means scrolling reveals hundreds of real-time charts.
Chart Interaction
Every chart supports:
- Hover: displays exact values at the cursor position with timestamps
- Click and drag: selects a time range and zooms in
- Shift + click and drag: pans the time window
- Double-click: resets to the default time range
- Resize: drag the bottom edge of any chart to make it larger
All charts are synchronized. Zooming or panning one chart adjusts all charts to the same time window. This makes correlation trivially easy -- if you see a CPU spike, zoom in and immediately see what was happening with disk I/O, network traffic, and memory at the exact same moment.
Alarms
Netdata ships with hundreds of pre-configured alarms. On FreeBSD, these include:
- Disk space utilization above 80%, 90%, and 98%
- ZFS ARC hit rate below 80%
- CPU utilization sustained above 85%
- Swap usage above 50%
- Network interface errors or drops
- 1-minute load average exceeding CPU count
View active alarms at http://your-server:19999/api/v1/alarms or through the dashboard's alarm bell icon.
Customize alarms by editing files in /usr/local/etc/netdata/health.d/. For example, to change the disk space warning threshold:
```sh
cat > /usr/local/etc/netdata/health.d/disk_custom.conf << 'EOF'
 alarm: disk_space_usage
    on: disk.space
lookup: max -1s percentage of used
 every: 10s
  warn: $this > 85
  crit: $this > 95
  info: Disk space utilization
EOF
```
Restart Netdata to load the new alarm:
```sh
service netdata restart
```
Netdata Cloud vs Self-Hosted
Netdata operates in two modes.
Self-Hosted (Standalone)
The agent runs on your FreeBSD server, stores metrics locally in a custom database engine, and serves the dashboard directly. No data leaves your network. This is the default mode after pkg install netdata.
Metrics are stored in RAM and on disk. The default retention is approximately 2-3 days at per-second resolution on a server with 1 GB of RAM allocated to Netdata. You can increase retention by allocating more disk space:
```ini
# /usr/local/etc/netdata/netdata.conf
[db]
    mode = dbengine
    storage tiers = 3
    dbengine multihost disk space MB = 2048
    dbengine tier 1 multihost disk space MB = 512
    dbengine tier 2 multihost disk space MB = 128
```
The first tier (tier 0) stores data at per-second granularity. Tiers 1 and 2 automatically downsample to per-minute and per-hour granularity, extending retention to weeks or months with minimal additional storage.
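To see why downsampling is cheap, here is rough storage math, assuming about one byte per stored sample after compression (an assumption for illustration; real compression ratios and the extra per-point aggregates in downsampled tiers will shift these numbers) and a host exposing 500 metrics:

```shell
# Back-of-the-envelope retention math. BYTES_PER_SAMPLE=1 is an
# assumed average after compression; actual ratios vary.
METRICS=500
BYTES_PER_SAMPLE=1

# Per-second tier: 86400 samples per metric per day
tier0_mb=$(( METRICS * 86400 * BYTES_PER_SAMPLE / 1024 / 1024 ))
echo "per-second tier: ~${tier0_mb} MB per day"

# Per-minute tier: 1440 samples per metric per day, 60x fewer points
tier1_kb=$(( METRICS * 1440 * BYTES_PER_SAMPLE / 1024 ))
echo "per-minute tier: ~${tier1_kb} KB per day"
```

Roughly 41 MB per day at per-second resolution versus well under 1 MB per day at per-minute resolution, which is why weeks of downsampled history cost so little.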
Netdata Cloud
Netdata Cloud is a free SaaS service that provides a centralized dashboard for multiple Netdata agents. Agents connect outbound to Netdata Cloud over HTTPS -- no inbound ports need to be opened.
What Cloud adds:
- Unified dashboard for all your servers
- Cross-server metric correlation
- Custom dashboards and chart groupings
- Role-based access control
- Mobile-friendly interface
- Anomaly detection (ML-based)
What Cloud does NOT do: it does not store your metrics. All metric data remains on the individual agents. Cloud acts as a real-time proxy, querying agents on demand when you view a dashboard. If an agent is offline, its historical data is unavailable in Cloud until it comes back.
To connect a FreeBSD agent to Cloud:
```sh
netdata-claim.sh -token=YOUR_CLAIM_TOKEN -rooms=YOUR_ROOM_ID -url=https://app.netdata.cloud
```
You get the claim token from the Netdata Cloud web interface after creating a free account.
Privacy consideration: even though metrics are not stored in Cloud, the connection metadata (which servers you have, when they are online, their hostnames) is visible to Netdata. For air-gapped or compliance-sensitive environments, standalone mode is the right choice.
Resource Overhead
Netdata's overhead on FreeBSD, measured on a 4-core server with ZFS, NGINX, and PostgreSQL running:
- CPU: 1-3% of a single core at per-second collection. Spikes to 5-8% while serving dashboard queries when someone is actively viewing it.
- Memory (RSS): 150-300 MB depending on the number of charts and retention settings. This is higher than a Prometheus node_exporter (20-40 MB) but includes the dashboard, database engine, and all collectors in a single process.
- Disk I/O: 1-5 MB/s writes for the database engine at per-second granularity with default retention settings.
- Disk space: 500 MB to 2 GB depending on configured retention and number of metrics.
- Network: negligible in standalone mode. With Cloud enabled, approximately 50-200 KB/s outbound for dashboard queries.
To reduce overhead, lower the collection frequency:
```ini
# /usr/local/etc/netdata/netdata.conf
[global]
    update every = 2
```
This changes collection from every 1 second to every 2 seconds, roughly halving CPU and disk I/O overhead.
Disable unused collectors to save resources:
```ini
# /usr/local/etc/netdata/netdata.conf
[plugins]
    apps = no
    proc = no
```
On resource-constrained systems (1 CPU, 512 MB RAM), Netdata's overhead is noticeable. For such systems, consider a lighter agent like node_exporter with a central Prometheus server.
Netdata vs Prometheus + Grafana
This is the comparison that matters most for FreeBSD sysadmins choosing a monitoring stack.
Setup time. Netdata: pkg install netdata && service netdata start. Prometheus + Grafana: install Prometheus, install node_exporter on every host, configure scrape targets, install Grafana, configure data source, import or build dashboards. Netdata wins overwhelmingly on initial setup.
Query language. Prometheus has PromQL, a powerful query language for aggregation, rates, predictions, and complex alert conditions. Netdata has no general-purpose query language. Its alarms use a simpler lookup syntax that covers common cases but cannot express arbitrary metric computations.
Multi-server monitoring. Prometheus is architecturally designed for multi-server monitoring. One Prometheus server scrapes hundreds of targets. Grafana provides unified dashboards. Netdata Cloud provides multi-server views but relies on each agent being online. Prometheus stores data centrally, so historical data is available even when monitored hosts are offline.
Retention and storage. Prometheus stores time-series data in a highly efficient custom format with configurable retention (weeks to years). Netdata's retention depends on each agent's local storage. For long-term trend analysis and capacity planning, Prometheus is superior.
Alerting. Prometheus uses Alertmanager for routing, grouping, silencing, and deduplication of alerts. It integrates with PagerDuty, Slack, email, and webhooks. Netdata has built-in alerting with email and webhook notifications, plus integrations for Slack, PagerDuty, and others. Both are capable, but Alertmanager offers more sophisticated routing.
Dashboard quality. Netdata's built-in dashboard is excellent for real-time troubleshooting. Every metric is immediately visible and interactive. Grafana dashboards are more customizable and better suited for executive reporting, capacity planning, and combining metrics from multiple data sources. Different tools for different audiences.
Resource usage. A Prometheus node_exporter uses 20-40 MB of RAM per monitored host. Netdata uses 150-300 MB per host. For a fleet of 50 servers, that difference adds up. However, Netdata's overhead includes the dashboard and database, while Prometheus requires a separate server for storage and Grafana for visualization.
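Putting numbers on "adds up," using the midpoints of the per-host ranges above (midpoints chosen here for illustration):

```shell
# Fleet-wide RAM difference between agents, using midpoints
# of the ranges quoted above (illustrative, not measured).
HOSTS=50
NETDATA_MB=225       # midpoint of 150-300 MB per host
EXPORTER_MB=30       # midpoint of 20-40 MB per host

extra_mb=$(( HOSTS * (NETDATA_MB - EXPORTER_MB) ))
echo "~${extra_mb} MB more RAM fleet-wide with Netdata agents"
```

That is nearly 10 GB of additional fleet memory, against which you must weigh the Prometheus server and Grafana instance that the lighter agents require.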
Recommendation. Use Netdata for single-server or small-fleet monitoring where speed of setup and real-time visibility matter most. Use Prometheus + Grafana for larger fleets, long-term storage, complex alerting, and environments where a central monitoring server is practical. The two are not mutually exclusive -- some teams run Netdata for real-time debugging and Prometheus for historical analysis and alerting.
For a full Prometheus + Grafana setup guide on FreeBSD, see the monitoring tools comparison.
Configuration Reference
The main configuration file is /usr/local/etc/netdata/netdata.conf. Generate a full configuration with defaults:
```sh
curl -o /usr/local/etc/netdata/netdata.conf "http://localhost:19999/netdata.conf"
```
Key settings:
```ini
[global]
    hostname = myserver.example.com
    update every = 1
    memory mode = dbengine

[web]
    bind to = 127.0.0.1
    default port = 19999
    allow connections from = localhost 10.0.0.*

[db]
    mode = dbengine
    dbengine multihost disk space MB = 1024
```
Security: by default, Netdata binds to all interfaces. In production, bind to 127.0.0.1 and access the dashboard through an SSH tunnel or a reverse proxy with authentication. Netdata has no built-in authentication in standalone mode.
```sh
# Access via SSH tunnel
ssh -L 19999:127.0.0.1:19999 user@your-server
# Then open http://localhost:19999 in your browser
```
Or put NGINX in front:
```nginx
server {
    listen 443 ssl;
    server_name monitoring.example.com;

    auth_basic "Monitoring";
    auth_basic_user_file /usr/local/etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:19999;
        proxy_set_header Host $host;
    }
}
```
FAQ
Q: Does Netdata work inside a FreeBSD jail?
A: Yes, but with limitations. Inside a jail, Netdata sees only the jail's own view of the system. ZFS pool-level metrics and host hardware metrics are not available from within a jail. For full system monitoring, install Netdata on the host.
Q: Can I export Netdata metrics to Prometheus?
A: Yes. Netdata includes a built-in Prometheus exporter. Access it at http://your-server:19999/api/v1/allmetrics?format=prometheus. Add this as a scrape target in your Prometheus configuration to get the best of both worlds.
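A minimal scrape job for this might look like the following prometheus.yml fragment (the target hostname is a placeholder):

```yaml
scrape_configs:
  - job_name: "netdata"
    metrics_path: "/api/v1/allmetrics"
    params:
      format: [prometheus]
    static_configs:
      - targets: ["your-server:19999"]
```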
Q: How much disk space does Netdata use for 30 days of retention?
A: With tiered storage (per-second for 2 days, per-minute for 14 days, per-hour for 90 days), a typical FreeBSD server with 500 metrics uses 1-2 GB of disk. Adjust dbengine multihost disk space MB to control this.
Q: Is Netdata Cloud free?
A: Yes. Netdata Cloud is free for unlimited nodes. There is no paid tier as of 2026. Revenue comes from Netdata's enterprise on-premise product. The free Cloud service may add paid features in the future.
Q: Can Netdata monitor ZFS scrub progress?
A: Yes. The ZFS collector tracks scrub status, progress percentage, and errors found. You can set up alarms to notify you when a scrub completes or finds errors.
Q: How do I update Netdata on FreeBSD?
A: If installed via pkg: pkg upgrade netdata. If installed from ports: portsnap fetch update && cd /usr/ports/net-mgmt/netdata && make deinstall reinstall clean. Then service netdata restart.
Q: Does Netdata support SNMP monitoring?
A: Yes. Netdata includes an SNMP collector that can poll network devices, printers, UPS units, and other SNMP-enabled equipment. Configure it in /usr/local/etc/netdata/go.d/snmp.conf.
Q: What is the performance impact of per-second collection?
A: On a typical FreeBSD server, per-second collection uses 1-3% of one CPU core. If this is too much, change update every to 2 or 5 seconds. The dashboard will still be responsive; you just lose granularity for short-lived spikes.
For a comprehensive monitoring strategy on FreeBSD, see the server monitoring guide.