# How to Set Up Bhyve Virtual Machines on FreeBSD
FreeBSD ships with its own native hypervisor called **bhyve** (pronounced "beehive"). If you need to run Linux distributions, Windows, or other BSDs alongside your FreeBSD host without reaching for third-party software, bhyve is the tool for the job. This guide walks through every step: from loading the kernel module to running production guests with ZFS-backed storage and bridged networking.
## What Is Bhyve?
Bhyve is a Type 2 (hosted) hypervisor built into the FreeBSD base system since FreeBSD 10.0. Originally developed by Neel Natu and Peter Grehan at NetApp, it was contributed to the FreeBSD project in 2011 and has been under active development since.
Unlike VirtualBox or QEMU (which emulate hardware in userspace), bhyve leverages hardware virtualization extensions (Intel VT-x and AMD-V) to run guest operating systems at near-native speed. The kernel module `vmm.ko` provides the virtualization infrastructure, while the `bhyve` userland process manages each virtual machine.
Key characteristics:
- **Native to FreeBSD** -- no packages to install for the core hypervisor.
- **Hardware-assisted** -- requires VT-x with EPT (Intel) or AMD-V with RVI (AMD).
- **Supports UEFI and legacy boot** -- run modern and older operating systems.
- **ZFS integration** -- use zvols for VM disks with snapshots and clones.
- **PCI passthrough** -- assign physical devices directly to guests.
Bhyve supports FreeBSD, Linux, Windows, OpenBSD, and other x86-64 operating systems as guests. For containerized workloads, consider [FreeBSD jails](/blog/freebsd-jails-guide/) instead; bhyve is the right choice when you need a full operating system kernel running inside the guest.
## Prerequisites

### Hardware Requirements
Bhyve requires a CPU with hardware virtualization support:
- **Intel**: VT-x with Extended Page Tables (EPT) -- most Intel CPUs from 2008 onward.
- **AMD**: AMD-V with Rapid Virtualization Indexing (RVI) -- most AMD CPUs from 2008 onward.
Check for support:

```sh
dmesg | grep -E 'VT-x|AMD-V|EPT|RVI'
```

Or inspect the CPU features directly:

```sh
sysctl hw.model
sysctl kern.vm_guest
grep -o 'VMX\|SVM' /var/run/dmesg.boot
```
### Loading the Kernel Module

Load the `vmm` kernel module:

```sh
kldload vmm
```
Verify it loaded:

```sh
kldstat | grep vmm
```

To load `vmm` automatically at boot, add this line to `/boot/loader.conf`:

```
vmm_load="YES"
```
### Required Tools

The `bhyve`, `bhyveload`, and `bhyvectl` utilities are part of the FreeBSD base system. No packages are needed for basic operation. For UEFI boot support, you will need the firmware package:

```sh
pkg install bhyve-firmware
```

This installs the UEFI firmware file at `/usr/local/share/uefi-firmware/BHYVE_UEFI.fd`.
## Manual VM Creation Step by Step

This section covers creating a VM entirely with base system commands. We will create a FreeBSD guest first because it uses the simpler `bhyveload` path.
### Step 1: Create a Disk Image

Create a 20 GB raw disk image:

```sh
truncate -s 20G /vms/freebsd-guest.img
```

Or, if you are running [ZFS](/blog/zfs-freebsd-guide/) (recommended), create a zvol:

```sh
zfs create -V 20G zroot/vms/freebsd-guest
```

The zvol appears as `/dev/zvol/zroot/vms/freebsd-guest`.
### Step 2: Download an Installation ISO

```sh
fetch https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-disc1.iso
```
### Step 3: Boot the Installer with bhyveload

For FreeBSD guests, `bhyveload` loads the FreeBSD kernel directly from the ISO:

```sh
bhyveload -m 2G -d FreeBSD-14.2-RELEASE-amd64-disc1.iso myvm
```

This loads the guest kernel into memory for VM `myvm` with 2 GB of RAM.
### Step 4: Start the VM

```sh
bhyve -c 2 -m 2G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap0 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/freebsd-guest \
  -s 4:0,ahci-cd,FreeBSD-14.2-RELEASE-amd64-disc1.iso \
  -l com1,stdio \
  myvm
```
Flag breakdown:

| Flag | Purpose |
|------|---------|
| `-c 2` | Assign 2 virtual CPUs |
| `-m 2G` | Allocate 2 GB of RAM |
| `-H` | Yield the CPU on HLT instructions |
| `-P` | Yield the CPU on PAUSE instructions |
| `-s 0:0,hostbridge` | Emulated PCI host bridge |
| `-s 1:0,lpc` | LPC PCI-ISA bridge (required for COM ports) |
| `-s 2:0,virtio-net,tap0` | VirtIO network adapter on tap0 |
| `-s 3:0,virtio-blk,...` | VirtIO block device (the disk) |
| `-s 4:0,ahci-cd,...` | AHCI CD-ROM with the ISO |
| `-l com1,stdio` | Map COM1 to your terminal |
The installer runs in your terminal. Complete the installation as you would on physical hardware.
### Step 5: Reboot the Guest

After installation, the guest will exit. Destroy the VM instance and restart without the ISO:

```sh
bhyvectl --destroy --vm=myvm
bhyveload -m 2G -d /dev/zvol/zroot/vms/freebsd-guest myvm
bhyve -c 2 -m 2G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap0 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/freebsd-guest \
  -l com1,stdio \
  myvm
```
Always run `bhyvectl --destroy` before restarting a guest; it releases the VM state held in the kernel.
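The destroy-reload-restart cycle can be automated. bhyve exits with status 0 when the guest requests a reboot and 1 when it powers off, so a wrapper loop can keep a FreeBSD guest running across guest-initiated reboots. This is a sketch: `run_vm` is a name invented here, and the VM name and disk path match the example above.

```sh
#!/bin/sh
# Sketch: keep a bhyveload-booted guest running across guest reboots.
# bhyve exits 0 when the guest reboots and 1 on a clean poweroff.
VM="myvm"
DISK="/dev/zvol/zroot/vms/freebsd-guest"

run_vm() {
    while true; do
        bhyveload -m 2G -d "$DISK" "$VM" || return 1
        bhyve -c 2 -m 2G -H -P \
            -s 0:0,hostbridge -s 1:0,lpc \
            -s 2:0,virtio-net,tap0 \
            -s 3:0,virtio-blk,"$DISK" \
            -l com1,stdio "$VM"
        rc=$?
        bhyvectl --destroy --vm="$VM"   # always clear kernel state
        [ "$rc" -eq 0 ] || break        # 0 = reboot requested: boot again
    done
    return 0
}

# run_vm   # uncomment to start the loop
```

Any non-reboot exit ends the loop, so a powered-off or crashed guest does not spin forever.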
## UEFI Boot Setup

Non-FreeBSD guests (Linux, Windows) cannot use `bhyveload`; they require UEFI firmware. After installing the `bhyve-firmware` package, replace the `bhyveload` step with the `-l bootrom` flag:
```sh
bhyve -c 2 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap0 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/linux-guest \
  -s 4:0,ahci-cd,ubuntu-24.04-live-server-amd64.iso \
  -s 29,fbuf,tcp=0.0.0.0:5900,w=1024,h=768 \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  linuxvm
```
New flags explained:

- `-l bootrom,...` -- use UEFI firmware instead of `bhyveload`.
- `-s 29,fbuf,...` -- framebuffer device accessible via VNC on port 5900.
- `-s 30,xhci,tablet` -- USB tablet device for proper mouse tracking in VNC.
Connect to the VM's graphical console with any VNC client:

```sh
vncviewer localhost:5900
```
For UEFI boot without a graphical console (headless Linux), many distributions support a serial console; append the serial console parameters to the guest's boot configuration after installation.
## Storage Backends

Bhyve supports several storage backends. Choose based on your use case.

### ZFS Zvols (Recommended)
Zvols provide the best integration with FreeBSD. You get instant snapshots, clones, compression, and replication:
```sh
zfs create -V 50G -o volblocksize=4K zroot/vms/production-db
```

Use with bhyve as:

```sh
-s 3:0,virtio-blk,/dev/zvol/zroot/vms/production-db
```
See the [ZFS guide](/blog/zfs-freebsd-guide/) for details on zvol tuning and snapshot workflows.
### Raw Disk Images

Simple flat files, portable across machines:

```sh
truncate -s 50G /vms/guest-disk.img
```

Use with:

```sh
-s 3:0,virtio-blk,/vms/guest-disk.img
```
### NVMe Emulation

For guests that need NVMe storage (some operating systems perform better with it):

```sh
-s 3:0,nvme,/dev/zvol/zroot/vms/guest-nvme
```
### AHCI (SATA Emulation)

For maximum guest compatibility, especially older operating systems:

```sh
-s 3:0,ahci-hd,/dev/zvol/zroot/vms/guest-sata
```
## Networking

### Creating Tap Interfaces and a Bridge
Each VM needs a tap interface connected to a bridge. The bridge connects to your physical network.
Create the networking infrastructure:

```sh
# Create a bridge
ifconfig bridge0 create

# Add your physical interface to the bridge
ifconfig bridge0 addm em0

# Create a tap interface for the VM
ifconfig tap0 create

# Add the tap to the bridge
ifconfig bridge0 addm tap0

# Bring everything up
ifconfig bridge0 up
ifconfig tap0 up
```
To persist this across reboots, add to `/etc/rc.conf`:

```
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm em0 addm tap0 up"
ifconfig_tap0="up"
```
For multiple VMs, create additional tap interfaces (tap1, tap2, etc.) and add each to the bridge.
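Creating taps for several VMs is repetitive, so a small loop helps. This is a sketch: `make_taps` is a name invented for this example, and it assumes `bridge0` already exists.

```sh
#!/bin/sh
# Sketch: create tap1..tapN and attach each to bridge0.
make_taps() {
    n="$1"
    i=1
    while [ "$i" -le "$n" ]; do
        ifconfig "tap${i}" create
        ifconfig bridge0 addm "tap${i}"
        ifconfig "tap${i}" up
        i=$((i + 1))
    done
}

# make_taps 3   # creates tap1, tap2, tap3 on bridge0
```

Remember to mirror the result in `/etc/rc.conf` so the interfaces survive a reboot.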
### Allow Tap Interface Opening

Ensure bhyve can open tap devices. Add to `/etc/sysctl.conf`:

```
net.link.tap.up_on_open=1
```
Apply immediately:

```sh
sysctl net.link.tap.up_on_open=1
```
### NAT with PF for VM Internet Access
If your VMs live on an internal network and need internet access through the host, set up NAT with [PF](/blog/pf-firewall-freebsd/):
```
# /etc/pf.conf
ext_if="em0"
vm_net="10.0.0.0/24"

nat on $ext_if from $vm_net to any -> ($ext_if)
pass from $vm_net to any
```
Enable PF and IP forwarding:

```sh
sysrc pf_enable="YES"
sysrc gateway_enable="YES"
service pf start
sysctl net.inet.ip.forwarding=1
```
Assign a static IP to the bridge for the VM subnet:

```sh
ifconfig bridge0 inet 10.0.0.1/24
```
Inside the guest, configure a static IP in the `10.0.0.0/24` range with `10.0.0.1` as the gateway.
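For a FreeBSD guest, that static configuration looks like this in the guest's `/etc/rc.conf` (the `.10` address is illustrative; `vtnet0` is the name FreeBSD gives the VirtIO NIC):

```
ifconfig_vtnet0="inet 10.0.0.10/24"
defaultrouter="10.0.0.1"
```

Linux guests configure the equivalent through their own network tooling (netplan, NetworkManager, or `/etc/network/interfaces`).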
## Installing a Linux Guest (Ubuntu Server)
This section walks through installing Ubuntu Server 24.04 as a bhyve guest.
### Step 1: Prepare Storage and ISO

```sh
zfs create -V 30G zroot/vms/ubuntu
fetch https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso -o /vms/ubuntu-24.04.iso
```
### Step 2: Create Networking

```sh
ifconfig tap1 create
ifconfig bridge0 addm tap1
ifconfig tap1 up
```
### Step 3: Boot the Installer

```sh
bhyvectl --destroy --vm=ubuntu 2>/dev/null
bhyve -c 4 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap1 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/ubuntu \
  -s 4:0,ahci-cd,/vms/ubuntu-24.04.iso \
  -s 29,fbuf,tcp=0.0.0.0:5901,w=1024,h=768 \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  ubuntu
```
### Step 4: Connect via VNC and Install

```sh
vncviewer localhost:5901
```
Walk through the Ubuntu installer. Select the VirtIO disk as the installation target. Configure networking inside the guest (DHCP if your bridge provides it, or static).
### Step 5: Post-Install Boot

After installation completes and the guest shuts down:

```sh
bhyvectl --destroy --vm=ubuntu
bhyve -c 4 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap1 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/ubuntu \
  -s 29,fbuf,tcp=0.0.0.0:5901,w=1024,h=768 \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  ubuntu
```
Note that the `-s 4:0,ahci-cd,...` line is gone, so the VM boots from disk instead of the ISO.
### Running Linux Guests Headless

After installation, many Linux distributions support a serial console. Inside the Ubuntu guest, enable it:

```sh
sudo systemctl enable serial-getty@ttyS0.service
sudo systemctl start serial-getty@ttyS0.service
```
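The getty gives you a login prompt; to also route kernel and boot messages over the serial port, the guest's GRUB configuration needs console parameters. A sketch for Ubuntu (edit `/etc/default/grub` inside the guest, then run `update-grub`; the exact values are illustrative):

```
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```

Other distributions have equivalent knobs in their bootloader configuration.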
Then on the host, replace the framebuffer with a serial console:

```sh
bhyve -c 4 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap1 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/ubuntu \
  -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  ubuntu
```
## Installing a Windows Guest
Windows guests require UEFI boot and VirtIO drivers for best performance.
### Step 1: Download VirtIO Drivers

```sh
pkg install bhyve-firmware
fetch https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso -o /vms/virtio-win.iso
```
### Step 2: Prepare Storage

```sh
zfs create -V 60G zroot/vms/windows
```
### Step 3: Boot the Installer

```sh
bhyve -c 4 -m 8G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap2 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/windows \
  -s 4:0,ahci-cd,/vms/Win11_English_x64.iso \
  -s 5:0,ahci-cd,/vms/virtio-win.iso \
  -s 29,fbuf,tcp=0.0.0.0:5902,w=1280,h=1024,wait \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  windows
```
Notes:

- Two CD-ROM devices: one for the Windows ISO, one for the VirtIO drivers.
- `wait` in the framebuffer config pauses the VM until a VNC client connects.
- 8 GB of RAM is the recommended minimum for Windows 10/11.
### Step 4: Load VirtIO Drivers During Install

When the Windows installer asks "Where do you want to install Windows?" and shows no drives:

1. Click "Load driver".
2. Browse to the VirtIO CD.
3. Navigate to `viostor\w11\amd64` (adjust for your Windows version).
4. Select the VirtIO SCSI driver and click Next.
5. The VirtIO disk will now appear. Proceed with installation.
After installation, install the remaining VirtIO drivers (network, balloon, serial) from the VirtIO ISO inside the running Windows guest.
### Step 5: Post-Install Boot

```sh
bhyvectl --destroy --vm=windows
bhyve -c 4 -m 8G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap2 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/windows \
  -s 29,fbuf,tcp=0.0.0.0:5902,w=1280,h=1024 \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  windows
```
## vm-bhyve: A Friendlier Management Layer
Typing long bhyve command lines gets tedious fast. **vm-bhyve** is a management framework that handles VM lifecycle, networking, and storage with simple commands.
### Install and Initialize

```sh
pkg install vm-bhyve
```
Configure a ZFS dataset for VM storage:

```sh
zfs create zroot/vms
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vms"
vm init
```
### Set Up Networking

Create a virtual switch:

```sh
vm switch create public
vm switch add public em0
```
This creates a bridge and attaches your physical interface.
### Create a VM

Copy a template and create a VM:

```sh
cp /usr/local/share/examples/vm-bhyve/* /zroot/vms/.templates/
vm create -t ubuntu -s 30G myubuntu
```
### Fetch an ISO and Install

```sh
vm iso https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso
vm install myubuntu ubuntu-24.04-live-server-amd64.iso
```
### Manage VMs

```sh
# List all VMs
vm list

# Start a VM
vm start myubuntu

# Stop a VM gracefully
vm stop myubuntu

# Force stop
vm poweroff myubuntu

# Access the console
vm console myubuntu

# Destroy (delete) a VM
vm destroy myubuntu
```
### Edit VM Configuration

Each VM has a configuration file:

```sh
vm configure myubuntu
```
A typical configuration looks like:

```
loader="uefi"
cpu=4
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
```
vm-bhyve simplifies nearly every operation covered in the manual sections above while still allowing full customization through its config files.
## Resource Management

### CPU Pinning
Pin virtual CPUs to specific physical cores to reduce context-switch overhead and improve cache locality:
```sh
cpuset -l 2,3 bhyve -c 2 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/guest \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  myvm
```
This pins the `bhyve` process (and its two vCPUs) to physical cores 2 and 3.
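`cpuset` pins the whole process; bhyve also has its own `-p vcpu:hostcpu` flag for pinning individual vCPU threads. A sketch pinning vCPU 0 to core 2 and vCPU 1 to core 3 (device layout mirrors the example above):

```sh
bhyve -c 2 -p 0:2 -p 1:3 -m 4G -H -P \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/guest \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  myvm
```

Per-vCPU pinning gives finer control when a guest's vCPUs should not migrate between cores.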
### Memory Limits

Bhyve backs each guest's RAM with host memory (wired up front if you pass the `-S` flag, which PCI passthrough requires). Monitor host memory to avoid overcommitting:

```sh
# Check total and available memory
sysctl hw.physmem
sysctl vm.stats.vm.v_free_count
vmstat -s | grep "pages free"
```
A safe rule: keep at least 2 GB of host RAM free beyond the sum of all guest allocations.
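That rule of thumb can be scripted. A sketch (`check_headroom` is a name invented here) that compares free host memory against planned guest RAM, all in bytes:

```sh
#!/bin/sh
# Sketch: fail if starting more guests would leave < 2 GB of host RAM free.
check_headroom() {
    free_bytes="$1"     # host free memory in bytes
    guest_bytes="$2"    # total RAM of the guests you plan to start
    cushion=$((2 * 1024 * 1024 * 1024))
    if [ $((free_bytes - guest_bytes)) -lt "$cushion" ]; then
        echo "WARNING: less than 2 GB of headroom"
        return 1
    fi
    echo "OK"
}

# On the host, free memory is page size times the free page count:
# free=$(( $(sysctl -n hw.pagesize) * $(sysctl -n vm.stats.vm.v_free_count) ))
# check_headroom "$free" $((4 * 1024 * 1024 * 1024))
```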
### Limiting with rctl

Use FreeBSD's resource control framework for additional limits:

```sh
# Limit a VM's memory usage (including overhead)
rctl -a process:$(pgrep -f "bhyve.*myvm"):memoryuse:deny=5G
```

Enable rctl by adding this tunable to `/boot/loader.conf` and rebooting:

```
kern.racct.enable=1
```
## Snapshots and Cloning with ZFS

This is where bhyve on FreeBSD truly outshines other hypervisors. When your VM disks live on [ZFS zvols](/blog/zfs-freebsd-guide/), you get instant snapshots and zero-copy clones.

### Take a Snapshot
Stop the VM (or ensure the guest filesystem is consistent), then:
```sh
zfs snapshot zroot/vms/ubuntu@before-upgrade
```
### Roll Back

If something goes wrong:

```sh
zfs rollback zroot/vms/ubuntu@before-upgrade
```
The VM disk reverts to the exact state at snapshot time.
### Clone a VM

Create an instant copy of a VM for testing:

```sh
zfs clone zroot/vms/ubuntu@base zroot/vms/ubuntu-test
```
The clone starts with zero additional disk usage and only grows as blocks diverge. This is perfect for spinning up disposable test environments.
### Automated Snapshot Script

Here is a script to snapshot all VMs before maintenance:

```sh
#!/bin/sh
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

for zvol in $(zfs list -H -o name -r zroot/vms -t volume); do
    zfs snapshot "${zvol}@auto-${TIMESTAMP}"
    echo "Snapshot: ${zvol}@auto-${TIMESTAMP}"
done
```
Combine this with cron for scheduled backups. For off-site replication, use `zfs send` and `zfs receive` to transfer snapshots to a remote machine.
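For the replication leg, something like this sketch works: it finds the two newest snapshots of a zvol and sends the increment between them over SSH. The function name, remote host, and target dataset are all assumptions for illustration.

```sh
#!/bin/sh
# Sketch: incremental off-site replication of one VM zvol.
replicate_latest() {
    zvol="$1"; remote="$2"; target="$3"
    # two newest snapshots, oldest of the pair first
    snaps=$(zfs list -H -d 1 -t snapshot -o name -s creation "$zvol" | tail -n 2)
    prev=$(echo "$snaps" | head -n 1)
    last=$(echo "$snaps" | tail -n 1)
    zfs send -i "$prev" "$last" | ssh "$remote" zfs receive -F "$target"
}

# replicate_latest zroot/vms/ubuntu backuphost tank/vm-backups/ubuntu
```

The first transfer must be a full (non-incremental) `zfs send`; after that, incrementals keep the remote copy in sync cheaply.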
## GPU Passthrough Basics
Bhyve supports PCI passthrough, which lets you assign a physical GPU (or any PCI device) directly to a guest. This is useful for running GPU-accelerated workloads inside VMs.
### Prerequisites
1. **CPU must support IOMMU**: Intel VT-d or AMD-Vi.
2. **BIOS/UEFI must have IOMMU enabled**.
### Step 1: Identify the PCI Device

```sh
pciconf -lv | grep -B3 -A3 "VGA"
```
Example output:

```
vgapci0@pci0:1:0:0: class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1b81
    vendor = 'NVIDIA Corporation'
    device = 'GP104 [GeForce GTX 1070]'
```

Note the PCI address: `1/0/0`.
### Step 2: Reserve the Device for Passthrough

Add to `/boot/loader.conf`:

```
pptdevs="1/0/0"
```

Reboot. The device should now appear as `ppt0` instead of `vgapci0`:

```sh
pciconf -lv | grep ppt
```
### Step 3: Pass the Device to the Guest

Passthrough requires wired guest memory, enabled with the `-S` flag:

```sh
bhyve -c 8 -m 16G -H -P -S \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap0 \
  -s 3:0,virtio-blk,/dev/zvol/zroot/vms/gpu-guest \
  -s 6:0,passthru,1/0/0 \
  -s 29,fbuf,tcp=0.0.0.0:5903,w=1920,h=1080 \
  -s 30,xhci,tablet \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  gpu-guest
```
The `-s 6:0,passthru,1/0/0` line assigns the PCI device directly to the guest.
### Limitations

- The host cannot use the device while it is assigned to a guest.
- Multi-function devices (GPU + HDMI audio) may require passing through both functions.
- Not all GPUs work reliably with passthrough; NVIDIA consumer cards in particular may need extra workarounds to initialize inside a VM.
- GPU passthrough with bhyve is still evolving. Test thoroughly before relying on it in production.
## Automating VM Startup

To start VMs automatically at system boot, create an rc.d script or use vm-bhyve.
### With vm-bhyve

vm-bhyve starts any guests listed in `vm_list` at boot, waiting `vm_delay` seconds between each. Add to `/etc/rc.conf` alongside `vm_enable="YES"`:

```
vm_list="myubuntu windows"
vm_delay="10"
```

vm-bhyve handles the rest when its service is enabled.
### Manual Approach

Create `/usr/local/etc/rc.d/bhyve_vms` and make it executable:

```sh
#!/bin/sh

# PROVIDE: bhyve_vms
# REQUIRE: NETWORKING vmm
# KEYWORD: shutdown

. /etc/rc.subr

name="bhyve_vms"
start_cmd="bhyve_vms_start"
stop_cmd="bhyve_vms_stop"

bhyve_vms_start() {
    /usr/local/bin/vm-bhyve-start-all.sh &
}

bhyve_vms_stop() {
    /usr/local/bin/vm-bhyve-stop-all.sh
}

load_rc_config $name
run_rc_command "$1"
```
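The two helper scripts the rc.d script calls are not part of the base system; you write them yourself. A minimal sketch of the start-all helper, booting each FreeBSD guest on its own null-modem console so you can attach later with cu(1) (VM names, sizes, paths, and the nmdm naming are all illustrative):

```sh
#!/bin/sh
# Sketch of /usr/local/bin/vm-bhyve-start-all.sh.
start_vm() {
    vm="$1"; mem="$2"; disk="$3"; tap="$4"
    bhyvectl --destroy --vm="$vm" 2>/dev/null   # clear stale state
    bhyveload -m "$mem" -d "$disk" "$vm" && \
    bhyve -c 2 -m "$mem" -H -P \
        -s 0:0,hostbridge -s 1:0,lpc \
        -s 2:0,virtio-net,"$tap" \
        -s 3:0,virtio-blk,"$disk" \
        -l com1,/dev/nmdm-"$vm"A \
        "$vm" &
}

# start_vm webserver 2G /dev/zvol/zroot/vms/webserver tap0
# start_vm database 4G /dev/zvol/zroot/vms/database tap1
```

A matching stop-all helper would send each guest an ACPI shutdown (for example with `bhyvectl --force-poweroff` as a last resort) and wait for the processes to exit.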
## Bhyve vs Other Virtualization Options on FreeBSD
| Feature | bhyve | VirtualBox | QEMU |
|---------|-------|------------|------|
| Native to FreeBSD | Yes | No (package) | No (package) |
| Hardware acceleration | VT-x/AMD-V | VT-x/AMD-V | VT-x/AMD-V via KVM (Linux only) |
| PCI passthrough | Yes | Limited | Yes (Linux host) |
| ZFS integration | Native | Manual | Manual |
| Windows guests | Yes (UEFI) | Yes | Yes |
| Management tools | vm-bhyve, CBSD | GUI | libvirt |
| Overhead | Minimal | Moderate | Moderate |
For a broader comparison of FreeBSD and Linux approaches to virtualization and other system tasks, see [FreeBSD vs Linux](/blog/freebsd-vs-linux/).
## Troubleshooting Common Issues

**"vmm.ko failed to load"** -- Your CPU does not support hardware virtualization, or it is disabled in firmware. Enter the BIOS/UEFI setup and enable VT-x/AMD-V.

**"vm already exists"** -- Run `bhyvectl --destroy --vm=vmname` before starting the VM again. This cleans up stale kernel state.

**Guest has no network** -- Verify the tap interface is added to the bridge and that `net.link.tap.up_on_open=1` is set. Check with `ifconfig bridge0` and `ifconfig tap0`.

**VNC connection refused** -- Make sure the fbuf device is configured and bhyve is running. Check that no firewall rule on the host blocks the VNC port.

**Windows installer does not see the disk** -- Load the VirtIO drivers during installation. Attach the virtio-win ISO as a second CD-ROM.
## Frequently Asked Questions

### Can bhyve run macOS guests?
No. bhyve does not support macOS guests. Apple's EULA restricts macOS virtualization to Apple hardware, and bhyve does not implement the required Apple SMC emulation.
### How many VMs can I run simultaneously?

There is no hard limit in bhyve itself. The practical limit depends on your hardware resources -- primarily RAM and CPU cores. Each VM can consume host memory up to its configured RAM allocation. Monitor host resources with `top -aS` and `vmstat`.
### Can I live-migrate bhyve VMs between hosts?

Not natively. Bhyve does not support live migration at this time. For planned maintenance, shut down the guest, `zfs send` the zvol to the target host, and start the guest there. Some third-party tools like CBSD provide migration helpers, but they involve guest downtime.
### Is bhyve production-ready?
Yes. Bhyve has been used in production environments since FreeBSD 10. Cloud providers and hosting companies run bhyve at scale. It is the hypervisor behind several FreeBSD-based hosting platforms.
### How do I access a VM's console after detaching?

If using vm-bhyve, run `vm console vmname`. For manual setups with nmdm (null modem) devices, configure the VM to use `-l com1,/dev/nmdm0A` and connect from the host with `cu -l /dev/nmdm0B -s 115200`. This lets you attach and detach from the serial console freely.
### Can I run bhyve inside a jail?
No. Bhyve requires direct access to the vmm kernel module and hardware virtualization extensions. These are not available inside [jails](/blog/freebsd-jails-guide/). Run bhyve on the host system.
### What is the difference between bhyve and jails?
Jails are OS-level virtualization (containers) that share the host kernel. They are lightweight and ideal for running multiple FreeBSD environments. Bhyve is full hardware virtualization -- each guest runs its own kernel. Use jails for FreeBSD workloads where isolation is the goal; use bhyve when you need a different operating system or full kernel independence. Read more in the [FreeBSD jails guide](/blog/freebsd-jails-guide/).
## Conclusion
Bhyve gives FreeBSD administrators a native, performant hypervisor without external dependencies. Combined with ZFS for storage and PF for networking, it forms a complete virtualization stack built entirely on FreeBSD technologies. Start with vm-bhyve for easier management, use ZFS zvols for snapshots and clones, and reserve manual bhyve commands for cases where you need full control over every device slot.
For related topics, explore the [ZFS guide](/blog/zfs-freebsd-guide/) for storage best practices, the [PF firewall guide](/blog/pf-firewall-freebsd/) for advanced networking rules, and the [jails guide](/blog/freebsd-jails-guide/) for lightweight FreeBSD-native containerization.