# How to Install and Configure PostgreSQL on FreeBSD
PostgreSQL and FreeBSD share the same DNA. Both are BSD-licensed projects with decades of engineering discipline behind them. Both value correctness over hype. Both are trusted in production by organizations that cannot afford data loss -- banks, hospitals, telecom operators, government agencies. Running PostgreSQL on FreeBSD is not a novel combination. It is the original combination, and it remains one of the most stable, performant database platforms you can build.
This guide walks through a complete production setup: installation, initialization, configuration tuning, authentication, ZFS integration, backups, replication, monitoring, and security. Every command targets FreeBSD 14.x with PostgreSQL 16. Every path and service command is FreeBSD-native.
## Table of Contents
1. [Why PostgreSQL on FreeBSD](#why-postgresql-on-freebsd)
2. [Installation](#installation)
3. [Initialization](#initialization)
4. [Tuning postgresql.conf](#tuning-postgresqlconf)
5. [Configuring pg_hba.conf](#configuring-pg_hbaconf)
6. [Creating Databases and Users](#creating-databases-and-users)
7. [ZFS Dataset for PostgreSQL Data](#zfs-dataset-for-postgresql-data)
8. [Backup Strategies](#backup-strategies)
9. [Monitoring](#monitoring)
10. [Streaming Replication](#streaming-replication)
11. [Security](#security)
12. [FAQ](#faq)
---
## Why PostgreSQL on FreeBSD
PostgreSQL was born at UC Berkeley in the 1980s, the same institution that produced BSD Unix. The developers who built early PostgreSQL ran it on BSD systems. That shared heritage is not merely historical trivia -- it translates to real engineering advantages.
**Kernel-level stability.** FreeBSD's virtual memory system, its network stack, and its filesystem layer are mature and predictable under sustained load. PostgreSQL's shared memory architecture maps cleanly onto FreeBSD's POSIX-compliant shared memory and semaphore implementations. You will not fight sysctl tuning wars that plague PostgreSQL on certain Linux distributions.
**ZFS integration.** PostgreSQL's write-ahead log (WAL) and data files benefit enormously from ZFS. Atomic snapshots give you instant, consistent backups without stopping the database. ZFS compression reduces storage costs. Tuned recordsize aligns with PostgreSQL's 8K pages. This is covered in detail in our [ZFS guide](/blog/zfs-freebsd-guide/).
**Simplicity.** FreeBSD's ports and packages provide clean, well-tested PostgreSQL builds. The rc.conf service management is straightforward. Paths are predictable. There are no competing init systems, no snap packages, no distribution-specific patches that change behavior. You get PostgreSQL as upstream intends it.
**Performance.** FreeBSD's ULE scheduler, its NUMA awareness on supported hardware, and its mature AIO subsystem give PostgreSQL a solid platform for high-concurrency workloads. Combined with ZFS's ARC (Adaptive Replacement Cache), you get a sophisticated two-tier caching system -- PostgreSQL's shared_buffers plus ZFS's ARC -- that adapts to your workload automatically.
If you are evaluating database options, see our comparison of the [best databases for FreeBSD](/blog/best-database-freebsd/). PostgreSQL wins for relational workloads, and it is not close.
---
## Installation
FreeBSD provides PostgreSQL through binary packages. For most deployments, pkg is the right choice. Building from ports is only necessary if you need custom compile-time options like a specific PL language or an unusual SSL library.
### Install PostgreSQL 16

```bash
pkg install postgresql16-server postgresql16-client postgresql16-contrib
```
The contrib package includes essential extensions: `pg_stat_statements`, `pgcrypto`, `hstore`, `uuid-ossp`, and dozens more. Always install it.
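To see what the contrib package makes available, query the catalog from psql, then enable extensions per database (`pgcrypto` below is only an example):

```sql
-- Extensions available to CREATE EXTENSION in this cluster
SELECT name, default_version, comment
FROM pg_available_extensions
ORDER BY name;

-- Enable one in the current database
CREATE EXTENSION IF NOT EXISTS pgcrypto;
```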
### Multiple Versions

FreeBSD maintains packages for multiple PostgreSQL major versions simultaneously. At the time of writing, packages are available for PostgreSQL 14, 15, 16, and 17:

```bash
pkg search postgresql | grep server
postgresql14-server-14.15      PostgreSQL is the most advanced open-source database
postgresql15-server-15.11      PostgreSQL is the most advanced open-source database
postgresql16-server-16.7       PostgreSQL is the most advanced open-source database
postgresql17-server-17.3       PostgreSQL is the most advanced open-source database
```
You can run multiple major versions side by side on the same machine. Each version uses its own data directory and port. This is useful during major version upgrades using pg_upgrade.
### Verify the Installation

```bash
postgres --version
postgres (PostgreSQL) 16.7
```

The binaries land in `/usr/local/bin/`. The server runs under the `postgres` user, which the package creates automatically.
---
## Initialization
Installing the package does not create a database cluster. You must initialize one.
### Enable the Service

```bash
sysrc postgresql_enable=YES
```

This adds `postgresql_enable="YES"` to `/etc/rc.conf`.
### Initialize the Data Directory

```bash
service postgresql initdb
```

This runs `initdb` as the `postgres` user, creating the cluster in `/var/db/postgres/data16`. The default encoding is UTF-8, and the default locale matches your system.
If you need a specific locale or encoding:
```bash
su - postgres -c '/usr/local/bin/initdb -D /var/db/postgres/data16 -E UTF8 --locale=en_US.UTF-8'
```
### Start PostgreSQL

```bash
service postgresql start
```
Verify it is running:
```bash
service postgresql status
pg_ctl: server is running (PID: 1234)
```
### Data Directory Layout

```
/var/db/postgres/data16/
├── base/              # Database files
├── global/            # Cluster-wide tables
├── pg_wal/            # Write-ahead log segments
├── pg_xact/           # Transaction commit status
├── postgresql.conf    # Main configuration
├── pg_hba.conf        # Authentication configuration
├── pg_ident.conf      # Identity mapping
└── postmaster.pid     # PID file
```
The data directory belongs to `postgres:postgres` with mode `0700`. Do not change these permissions.
---
## Tuning postgresql.conf
The default postgresql.conf is designed for a developer laptop. It allocates minimal memory and uses conservative settings. For any server workload, you must tune it.
The configuration file lives at /var/db/postgres/data16/postgresql.conf.
### Connection Settings

```ini
listen_addresses = 'localhost'   # Change to '*' or specific IPs for remote access
port = 5432
max_connections = 200            # Start here, reduce if possible
```
Every connection consumes memory. If your application uses connection pooling (and it should), you can often run with max_connections = 100 or fewer and let pgBouncer handle thousands of client connections.
### Memory Settings
These are the most impactful settings. Here are recommended values for common RAM configurations:
| RAM | shared_buffers | effective_cache_size | work_mem | maintenance_work_mem | wal_buffers |
|-----|----------------|---------------------|----------|---------------------|-------------|
| 4 GB | 1 GB | 3 GB | 4 MB | 256 MB | 16 MB |
| 8 GB | 2 GB | 6 GB | 8 MB | 512 MB | 32 MB |
| 16 GB | 4 GB | 12 GB | 16 MB | 1 GB | 64 MB |
| 32 GB | 8 GB | 24 GB | 32 MB | 2 GB | 64 MB |
| 64 GB | 16 GB | 48 GB | 64 MB | 2 GB | 64 MB |
Apply them (these values match the 16 GB row):

```ini
# Memory
shared_buffers = '4GB'            # 25% of total RAM
effective_cache_size = '12GB'     # 75% of total RAM (includes OS cache + ZFS ARC)
work_mem = '16MB'                 # Per-sort/hash operation, not per-connection
maintenance_work_mem = '1GB'      # For VACUUM, CREATE INDEX, ALTER TABLE
wal_buffers = '64MB'              # WAL write buffer; -1 for auto (1/32 of shared_buffers)
```
**A critical note on work_mem**: this is allocated per-sort-operation, not per-connection. A single complex query with multiple sorts or hash joins can allocate work_mem several times. If you set it to 256 MB with 200 connections, you risk OOM. Start conservatively and increase based on EXPLAIN ANALYZE output showing disk-based sorts.
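You can see whether work_mem is sufficient directly in query plans. A sketch with a hypothetical `orders` table: look for `Sort Method: external merge  Disk: ...` in the EXPLAIN ANALYZE output, raise work_mem for the session only, and re-run until the sort fits in memory:

```sql
-- Hypothetical sort-heavy query; watch the "Sort Method" line in the output
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY created_at;

-- Raise work_mem for this session only, then re-run the EXPLAIN
SET work_mem = '64MB';
```

A session-level SET lets you find the right value experimentally before committing it to postgresql.conf.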
**On FreeBSD with ZFS**, effective_cache_size should account for the ZFS ARC. If your ARC target is 8 GB and shared_buffers is 4 GB, set effective_cache_size to 12 GB. This tells the query planner that data is likely cached, encouraging index-heavy plans.
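To put numbers on this, read the ARC statistics before setting effective_cache_size. A sketch — the sysctl names are the standard OpenZFS statistics on FreeBSD, and the 8 GB cap is an example, not a recommendation:

```bash
# Current ARC size and target size, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c

# Optionally cap the ARC so it cannot squeeze PostgreSQL's memory;
# example value, applied at next boot
echo 'vfs.zfs.arc_max="8G"' >> /boot/loader.conf
```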
### WAL and Checkpoint Settings

```ini
# WAL
wal_level = 'replica'                  # Required for replication and WAL archiving
max_wal_size = '4GB'                   # Trigger checkpoint after this much WAL
min_wal_size = '1GB'                   # Keep at least this much WAL
checkpoint_completion_target = 0.9     # Spread checkpoint writes over 90% of interval
```
### Query Planner

```ini
# Planner
random_page_cost = 1.1            # For SSD storage (default 4.0 is for spinning disks)
effective_io_concurrency = 200    # For SSD/NVMe (default 1)
```
If you are running on NVMe storage -- and you should be for any production database -- these two settings have a dramatic impact on query plan selection.
### Logging

```ini
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = '1d'
log_rotation_size = 0
log_min_duration_statement = 250    # Log queries taking longer than 250ms
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0                  # Log all temp file usage
```
### Apply Changes

Most settings require a reload. A few (like `shared_buffers`, `max_connections`, and `wal_buffers`) require a full restart.

```bash
# Reload (most settings)
service postgresql reload

# Restart (shared_buffers, max_connections, etc.)
service postgresql restart
```
Verify a setting took effect:
```bash
su - postgres -c "psql -c 'SHOW shared_buffers;'"
```
---
## Configuring pg_hba.conf
pg_hba.conf controls who can connect and how they authenticate. The file lives at /var/db/postgres/data16/pg_hba.conf.
The default allows local connections using trust (no password). This is fine for initial setup but unacceptable for production.
### Recommended Production Configuration

```
# TYPE  DATABASE     USER        ADDRESS          METHOD

# Local socket connections
local   all          postgres                     peer
local   all          all                          scram-sha-256

# IPv4 local connections
host    all          all         127.0.0.1/32     scram-sha-256

# IPv6 local connections
host    all          all         ::1/128          scram-sha-256

# Application server subnet (adjust to your network)
host    all          all         10.0.1.0/24      scram-sha-256

# Replication connections (from standby servers)
host    replication  replicator  10.0.1.0/24      scram-sha-256
```
### Authentication Methods
| Method | Use Case | Security |
|--------|----------|----------|
| peer | Local Unix socket, maps OS user to DB user | High (OS-level) |
| scram-sha-256 | Password auth for TCP connections | High (salted, iterated hash) |
| md5 | Legacy password auth | Moderate (use scram-sha-256 instead) |
| cert | SSL client certificate | Very high |
| reject | Explicitly deny connections | N/A |
| trust | No authentication | None (never in production) |
Always use `scram-sha-256` over `md5`. Set the default in postgresql.conf:

```ini
password_encryption = 'scram-sha-256'
```
Reload after editing `pg_hba.conf`:

```bash
service postgresql reload
```
---
## Creating Databases and Users
### Create a User (Role)

```bash
su - postgres -c "createuser --pwprompt myapp"
```
Or from SQL:
```sql
CREATE ROLE myapp WITH LOGIN PASSWORD 'strong_random_password_here';
```
### Create a Database

```bash
su - postgres -c "createdb -O myapp myapp_production"
```
Or from SQL:
```sql
CREATE DATABASE myapp_production OWNER myapp;
```
### Grant Privileges
For a typical application, the app user should own the database and its schemas. Avoid granting SUPERUSER.
```sql
-- Connect to the database
\c myapp_production

-- Create a schema for the application
CREATE SCHEMA app AUTHORIZATION myapp;

-- Set the default search path
ALTER ROLE myapp SET search_path TO app, public;
```
### Create a Read-Only User
For monitoring dashboards and reporting:
```sql
CREATE ROLE readonly WITH LOGIN PASSWORD 'another_strong_password';
GRANT CONNECT ON DATABASE myapp_production TO readonly;
GRANT USAGE ON SCHEMA app TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA app GRANT SELECT ON TABLES TO readonly;
```
---
## ZFS Dataset for PostgreSQL Data
If you are running FreeBSD, you are running ZFS. If you are running ZFS, you should put your PostgreSQL data on a purpose-tuned dataset. This is one of the strongest reasons to choose FreeBSD for your database server. For a full treatment, see our [ZFS guide](/blog/zfs-freebsd-guide/).
### Create the Dataset

Stop PostgreSQL first, then create a dedicated dataset with optimal properties:

```bash
service postgresql stop

# Create the dataset
zfs create -o mountpoint=/var/db/postgres zroot/postgres

# Set PostgreSQL-optimal properties
zfs set recordsize=8K zroot/postgres
zfs set primarycache=metadata zroot/postgres
zfs set atime=off zroot/postgres
zfs set logbias=throughput zroot/postgres
zfs set compression=lz4 zroot/postgres
zfs set redundant_metadata=most zroot/postgres
```
### Why These Settings
**recordsize=8K**: PostgreSQL uses 8 KB pages internally. Matching ZFS's recordsize eliminates read amplification. A default 128K recordsize means reading a single 8 KB page pulls 128 KB from disk. With recordsize=8K, ZFS reads exactly what PostgreSQL asks for.
**primarycache=metadata**: PostgreSQL manages its own data cache via shared_buffers. Letting ZFS's ARC also cache data blocks means double-caching, which can waste RAM. Setting primarycache=metadata tells ZFS to cache only metadata (directory entries, block pointers), leaving data caching to PostgreSQL. This is a common recommendation when shared_buffers is sized generously, but it is not universal -- some operators get better results from primarycache=all with a smaller shared_buffers, so benchmark both against your workload.
**atime=off**: Disables access-time updates. Every SELECT query would otherwise trigger metadata writes. There is no reason to track when a data file was last read.
**logbias=throughput**: Optimizes ZFS's intent log (ZIL) for throughput over latency. PostgreSQL already uses WAL for crash safety, so ZFS's synchronous write overhead can be reduced. If you have a dedicated SLOG device, you may want to benchmark both settings.
**compression=lz4**: LZ4 compression is fast enough that it often improves performance by reducing I/O. Table data, indexes, and WAL segments all compress well.
### Separate Dataset for WAL

For maximum performance on write-heavy workloads, put WAL on its own dataset -- ideally on a separate physical device:

```bash
zfs create -o mountpoint=/var/db/postgres/data16/pg_wal zroot/postgres/wal
zfs set recordsize=128K zroot/postgres/wal
zfs set primarycache=all zroot/postgres/wal
zfs set logbias=latency zroot/postgres/wal
zfs set compression=lz4 zroot/postgres/wal
```
WAL writes are sequential and latency-sensitive. A 128K recordsize matches WAL segment write patterns, and logbias=latency ensures writes are committed promptly.
### Restore Data and Fix Permissions

If you had an existing data directory, rename it aside (for example to data16.bak) before mounting the new dataset, then move its contents into place:

```bash
# Recreate the data directory under the new mount and move the data in
mkdir -p /var/db/postgres/data16
mv /var/db/postgres/data16.bak/* /var/db/postgres/data16/

# Fix ownership and permissions
chown -R postgres:postgres /var/db/postgres
chmod 700 /var/db/postgres/data16

service postgresql start
```
### ZFS Snapshots as a Backup Complement

```bash
zfs snapshot zroot/postgres@before-migration-20260329
```
This is instant and space-efficient. Snapshots are not a replacement for pg_basebackup (they are not portable and cannot be shipped to another machine easily), but they are invaluable for quick rollbacks during schema migrations.
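Rolling back is equally fast. A sketch using the snapshot above -- note that zfs rollback irreversibly discards everything written after the snapshot, so stop PostgreSQL first:

```bash
service postgresql stop

# Discard all changes made since the snapshot (irreversible)
zfs rollback zroot/postgres@before-migration-20260329

service postgresql start
```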
---
## Backup Strategies
A database without tested backups is a ticking liability. PostgreSQL provides three complementary backup methods.
### pg_dump -- Logical Backups

Best for: individual databases, schema migration, moving data between major versions.
```bash
# Dump a single database (custom format, compressed)
su - postgres -c "pg_dump -Fc myapp_production > /backup/pg/myapp_production.dump"

# Dump all databases
su - postgres -c "pg_dumpall > /backup/pg/all_databases.sql"

# Restore a custom-format dump
su - postgres -c "pg_restore -d myapp_production /backup/pg/myapp_production.dump"
```
Custom format (`-Fc`) is preferred over plain SQL: it supports parallel restore and selective table restore, and it is compressed by default.
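Those advantages in practice -- a sketch with an illustrative table name and job count, using standard pg_restore flags:

```bash
# Inspect the dump's contents without restoring anything
su - postgres -c "pg_restore -l /backup/pg/myapp_production.dump"

# Parallel restore across 4 jobs
su - postgres -c "pg_restore -d myapp_production -j 4 /backup/pg/myapp_production.dump"

# Restore a single table from the same dump
su - postgres -c "pg_restore -d myapp_production -t orders /backup/pg/myapp_production.dump"
```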
### pg_basebackup -- Physical Backups

Best for: full cluster backup, setting up replication, point-in-time recovery.

```bash
su - postgres -c "pg_basebackup -D /backup/pg/base -Ft -z -P -R"
```

Flags:

- `-Ft`: tar format
- `-z`: gzip compression
- `-P`: show progress
- `-R`: create standby.signal and connection info (for setting up a replica)
### WAL Archiving -- Continuous Archiving
WAL archiving captures every change as it happens, enabling point-in-time recovery (PITR) to any moment between base backups.
Configure in postgresql.conf (enabling archive_mode requires a restart):

```ini
archive_mode = on
archive_command = 'test ! -f /backup/pg/wal_archive/%f && cp %p /backup/pg/wal_archive/%f'
```

The `test ! -f` guard refuses to overwrite an already-archived segment, the pattern the PostgreSQL documentation recommends.
For production, replace the cp command with a tool that handles compression and remote shipping:

```ini
archive_command = '/usr/local/bin/wal-g wal-push %p'
```
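Recovery from the archive is the mirror image of archiving. A sketch of point-in-time recovery: restore a base backup into an empty data directory, create a recovery.signal file, and set a restore command and target (the timestamp is illustrative):

```ini
restore_command = 'cp /backup/pg/wal_archive/%f %p'
recovery_target_time = '2026-03-29 11:59:00'
recovery_target_action = 'promote'
```

On startup, PostgreSQL replays archived WAL up to the target time, then promotes itself to a normal primary.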
### Cron Schedule

Automate with cron. Edit the postgres user's crontab:

```bash
su - postgres -c "crontab -e"
```

Use full paths -- cron's default PATH on FreeBSD does not include /usr/local/bin:

```cron
# Daily logical backup at 02:00
0 2 * * * /usr/local/bin/pg_dump -Fc myapp_production > /backup/pg/myapp_production_$(date +\%Y\%m\%d).dump 2>> /var/log/pg_backup.log

# Weekly base backup on Sunday at 03:00
0 3 * * 0 /usr/local/bin/pg_basebackup -D /backup/pg/base_$(date +\%Y\%m\%d) -Ft -z 2>> /var/log/pg_backup.log

# Clean backups older than 30 days
0 4 * * * /usr/bin/find /backup/pg/ -name "*.dump" -mtime +30 -delete
```
### Test Your Restores
A backup that has never been restored is not a backup. Schedule monthly restore tests to a staging server. Verify row counts, run application health checks, and confirm WAL replay works end-to-end.
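The test itself can be scripted. A minimal sketch, assuming a staging server with the dump available and a hypothetical `orders` table to sanity-check:

```bash
# Restore the latest dump into a scratch database
su - postgres -c "createdb restore_test"
su - postgres -c "pg_restore -d restore_test /backup/pg/myapp_production.dump"

# Sanity check: does the restored data look plausible?
su - postgres -c "psql -d restore_test -tAc 'SELECT count(*) FROM orders;'"

# Clean up
su - postgres -c "dropdb restore_test"
```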
---
## Monitoring
A running database without monitoring is a failure waiting to surprise you.
### pg_stat_activity -- Active Sessions

```sql
SELECT pid, usename, state, query_start, query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;
```
This shows all active queries. Look for long-running transactions (hours-old query_start) and sessions stuck in idle in transaction -- both are common sources of bloat and lock contention.
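Two standard remedies: terminate the offending backend by pid, and set a server-wide timeout so the situation cannot recur (the five-minute value is a starting point, not a rule):

```sql
-- Terminate one misbehaving backend (pid taken from pg_stat_activity)
SELECT pg_terminate_backend(12345);

-- Kill any session left idle inside a transaction for more than 5 minutes
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```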
### pg_stat_user_tables -- Table Health

```sql
SELECT schemaname, relname,
       seq_scan, idx_scan,
       n_tup_ins, n_tup_upd, n_tup_del,
       n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```
Watch for:
- **High seq_scan with low idx_scan**: missing indexes
- **High n_dead_tup**: autovacuum is not keeping up
- **Stale last_autovacuum**: check autovacuum settings
### pg_stat_statements -- Query Performance

The extension must be preloaded at server start. Add to postgresql.conf:

```ini
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
```

Restart PostgreSQL, then create the extension in your database:

```sql
CREATE EXTENSION pg_stat_statements;
```

Then query the top resource consumers:
```sql
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```
This is the single most useful tool for identifying slow queries in production.
### pgBouncer -- Connection Pooling
PostgreSQL forks a process per connection. At 500+ connections, this model wastes memory and adds context-switch overhead. pgBouncer sits between your application and PostgreSQL, multiplexing hundreds of client connections onto a small pool of server connections.
```bash
pkg install pgbouncer
```
Minimal `/usr/local/etc/pgbouncer.ini`:

```ini
[databases]
myapp_production = host=127.0.0.1 port=5432 dbname=myapp_production

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /usr/local/etc/pgbouncer.userlist
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```
Enable and start:

```bash
sysrc pgbouncer_enable=YES
service pgbouncer start
```
Your application connects to port 6432 instead of 5432. With pool_mode = transaction, pgBouncer returns server connections to the pool after each transaction completes. This lets 1000 application connections share 20 database connections, drastically reducing PostgreSQL's resource usage.
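A quick smoke test that pooling works: connect through port 6432, then query pgBouncer's built-in admin console (SHOW POOLS requires the connecting user to be listed in pgBouncer's stats_users or admin_users):

```bash
# Application-style connection through the pooler
psql -h 127.0.0.1 -p 6432 -U myapp -d myapp_production -c 'SELECT 1;'

# pgBouncer admin console: pool state, active and waiting clients
psql -h 127.0.0.1 -p 6432 -U myapp -d pgbouncer -c 'SHOW POOLS;'
```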
---
## Streaming Replication
Streaming replication gives you a hot standby for failover and read scaling. The standby continuously applies WAL records received from the primary.
### On the Primary
Create a replication user:
```sql
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'replication_password';
```
Add to pg_hba.conf:

```
host    replication    replicator    10.0.1.20/32    scram-sha-256
```
Configure postgresql.conf:

```ini
wal_level = 'replica'      # Already set if you followed earlier tuning
max_wal_senders = 5        # Number of concurrent standby connections
wal_keep_size = '2GB'      # Retain WAL for standbys that fall behind
```
Changing max_wal_senders requires a full restart, not just a reload:

```bash
service postgresql restart
```
### On the Standby
Take a base backup from the primary:
```bash
su - postgres -c "pg_basebackup -h 10.0.1.10 -U replicator -D /var/db/postgres/data16 -Fp -Xs -P -R"
```
The `-R` flag creates standby.signal and populates postgresql.auto.conf with the primary connection info:

```ini
primary_conninfo = 'host=10.0.1.10 port=5432 user=replicator password=replication_password'
```
Start the standby:

```bash
sysrc postgresql_enable=YES
service postgresql start
```
### Verify Replication
On the primary:
```sql
SELECT client_addr, state, sent_lsn, replay_lsn,
       sent_lsn - replay_lsn AS replication_lag_bytes
FROM pg_stat_replication;
```
You should see the standby listed with state = 'streaming' and minimal lag.
On the standby:

```sql
SELECT pg_is_in_recovery();
```
Returns t (true) -- confirming it is operating as a standby.
### Promoting the Standby

If the primary fails:

```bash
su - postgres -c "pg_ctl promote -D /var/db/postgres/data16"
```
Or, on PostgreSQL 12 and later, from SQL:

```sql
SELECT pg_promote();
```
The standby becomes a writable primary. Update your application connection strings and DNS accordingly.
---
## Security
A database exposed to the network is a target. Defense in depth applies here as it does everywhere. See our [FreeBSD hardening guide](/blog/hardening-freebsd-server/) for OS-level measures.
### SSL/TLS Encryption

Generate or obtain certificates, then configure postgresql.conf:

```ini
ssl = on
ssl_cert_file = '/var/db/postgres/data16/server.crt'
ssl_key_file = '/var/db/postgres/data16/server.key'
ssl_ca_file = '/var/db/postgres/data16/root.crt'    # For client cert verification
ssl_min_protocol_version = 'TLSv1.3'
```
Set file permissions:

```bash
chown postgres:postgres /var/db/postgres/data16/server.key
chmod 600 /var/db/postgres/data16/server.key
```
Force SSL for remote connections in pg_hba.conf:

```
hostssl    all    all    0.0.0.0/0    scram-sha-256
```

Using `hostssl` instead of `host` rejects unencrypted TCP connections.
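Verify from the server side that sessions really are encrypted -- pg_stat_ssl reports per-connection TLS state and joins to pg_stat_activity by pid:

```sql
SELECT sa.usename, sa.client_addr, s.ssl, s.version, s.cipher
FROM pg_stat_ssl s
JOIN pg_stat_activity sa USING (pid);
```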
### Password Encryption

Ensure all passwords use the strongest available hash:

```ini
password_encryption = 'scram-sha-256'
```

Reset existing passwords after changing this setting -- old md5-hashed passwords are not automatically upgraded.
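You can audit which roles still carry md5 verifiers (reading pg_authid requires superuser):

```sql
SELECT rolname,
       CASE WHEN rolpassword LIKE 'SCRAM-SHA-256%' THEN 'scram-sha-256'
            WHEN rolpassword LIKE 'md5%'           THEN 'md5 (reset me)'
            ELSE 'no password' END AS hash_type
FROM pg_authid
WHERE rolcanlogin;
```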
### Network Restrictions

Bind PostgreSQL to specific interfaces:

```ini
listen_addresses = '127.0.0.1,10.0.1.10'
```
Never use '*' in production unless you have a firewall restricting access. Ideally, combine listen_addresses with pg_hba.conf restrictions and a [FreeBSD pf firewall](/blog/hardening-freebsd-server/):

```
# /etc/pf.conf excerpt -- pf uses last-matching-rule semantics, so the
# default block comes first and the narrower pass rule overrides it
block in on egress proto tcp to self port 5432
pass in on egress proto tcp from 10.0.1.0/24 to self port 5432
```
### Principle of Least Privilege
- Application users should never be SUPERUSER.
- Use GRANT to give minimum necessary permissions.
- Separate read-write and read-only roles.
- Revoke CREATE on the public schema:
```sql
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```
### Audit Logging

For compliance environments, enable detailed logging:

```ini
log_connections = on
log_disconnections = on
log_statement = 'ddl'                 # Log all DDL (CREATE, ALTER, DROP)
log_line_prefix = '%t [%p] %u@%d '
```
For full statement auditing, consider the pgAudit extension, available through the FreeBSD ports tree.
---
## FAQ
### What FreeBSD version should I use for PostgreSQL?
FreeBSD 14.2-RELEASE or later. It ships with OpenZFS 2.2.x, has the latest security patches, and provides the most current PostgreSQL packages. If you are provisioning a new server, there is no reason to run anything older. See our [FreeBSD VPS setup guide](/blog/freebsd-vps-setup/) for provisioning steps.
### How much RAM does PostgreSQL need?
For a dedicated database server: as much as you can afford. A useful minimum for production is 8 GB, where you would set shared_buffers to 2 GB and let ZFS ARC claim most of the remainder. For serious OLTP workloads with datasets larger than RAM, 32 GB to 64 GB is common. The memory tuning table in the [postgresql.conf section](#tuning-postgresqlconf) gives specific values for each tier.
### Should I use ZFS or UFS for PostgreSQL data?
ZFS, without question. Atomic snapshots for instant backups, compression for reduced I/O, checksumming for silent corruption detection, and tunable recordsize for alignment with PostgreSQL's 8K pages. UFS has none of these features. The only scenario for UFS is a severely RAM-constrained system where you cannot spare memory for the ZFS ARC, but in that case you likely do not have enough RAM for PostgreSQL either.
### How do I upgrade PostgreSQL to a new major version?

Use pg_upgrade for in-place major version upgrades. The process:

1. Install the new version: `pkg install postgresql17-server`
2. Initialize a new data directory: `initdb -D /var/db/postgres/data17`
3. Stop the old server: `service postgresql stop`
4. Run pg_upgrade, pointing `-b` and `-B` at the old and new versions' binary directories: `pg_upgrade -b <pg16-bindir> -B <pg17-bindir> -d /var/db/postgres/data16 -D /var/db/postgres/data17`
5. Update rc.conf to point to the new version
6. Start the new server: `service postgresql start`
Always take a ZFS snapshot before starting. If anything goes wrong, rollback is instant.
### How do I connect my web application to PostgreSQL?
If your web server runs on the same machine, connect via Unix socket for best performance. If it runs on a separate host (the recommended architecture), connect via TCP with SSL. In both cases, use pgBouncer as a connection pool. Point your application at pgBouncer (port 6432) rather than PostgreSQL directly (port 5432). For web server setup, see our [NGINX on FreeBSD guide](/blog/nginx-freebsd-production-setup/).
### Can I run PostgreSQL in a FreeBSD jail?
Yes, and it is an excellent practice. Jails provide OS-level isolation, limiting the blast radius if PostgreSQL is compromised. Create a jail with a dedicated ZFS dataset, install PostgreSQL inside it, and expose only port 5432 to the host network. This combines FreeBSD's jail isolation with ZFS's data management -- a setup that is difficult to replicate on any other platform.
### How do I tune autovacuum?

The defaults are conservative. For write-heavy databases, increase the number of autovacuum workers and reduce the cost delay:

```ini
autovacuum_max_workers = 4
autovacuum_naptime = '30s'
autovacuum_vacuum_cost_delay = '2ms'
autovacuum_vacuum_cost_limit = 1000
```
Monitor pg_stat_user_tables for tables with high n_dead_tup counts and stale last_autovacuum timestamps. If dead tuples keep growing, autovacuum is not keeping up.
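When one hot table dominates, override the thresholds per table instead of globally (the table name and factors are illustrative):

```sql
-- Vacuum at 2% dead rows instead of the 20% global default
ALTER TABLE app.orders SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);
```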
---
## Wrapping Up
PostgreSQL on FreeBSD is a battle-tested combination used in production by organizations that value data integrity above all else. The setup described in this guide -- ZFS-backed storage with tuned recordsize, properly configured shared_buffers and connection pooling, streaming replication for high availability, and defense-in-depth security -- gives you a database platform that is stable, fast, and maintainable.
The key principles: tune memory settings to match your hardware. Use ZFS with recordsize=8K for data and 128K for WAL. Put pgBouncer in front of PostgreSQL. Enable WAL archiving from day one. Test your backups monthly. Monitor pg_stat_statements for slow queries.
Start with the configuration in this guide, benchmark with your actual workload using pgbench, and adjust. PostgreSQL's EXPLAIN ANALYZE and the statistics views give you everything you need to make data-driven tuning decisions.