After looking at VirtualBox to set up virtual servers, I decided to have a look at bhyve as well.
A good thing about it is that it is native to the *BSD family. I will go through two installations of Ubuntu Linux (in my case 20.04 LTS). The first is straight bhyve; the second uses ‘vm-bhyve’, a utility that simplifies the management of bhyve virtual machines.
As reference I mostly used the FreeBSD Handbook page, but I adjusted a few things: first because I needed to install Linux, and second because I wanted to use a ZFS volume.
Setup/Installation
I was surprised that there was nothing to install; we only need to load two kernel modules:
# kldload vmm
# kldload nmdm
Then we add these four lines to /boot/loader.conf; this way the two kernel modules will be loaded at startup and two additional network-related capabilities (bridge and tap) will be available.
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
We enable tap:
# sysctl net.link.tap.up_on_open=1
And add this line to /etc/sysctl.conf so it persists across reboots:
net.link.tap.up_on_open=1
So basically now we have the proper kernel modules, as well as the proper networking capabilities, enabled.
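If you want to double-check, kldstat lists the loaded modules and sysctl confirms the tap setting (a quick sanity check, not a required step):
# kldstat | grep -E 'vmm|nmdm'
# sysctl net.link.tap.up_on_open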
The only actual installation is the following package, grub2-bhyve, which provides a GRUB loader used to boot operating systems other than FreeBSD in the virtual machines. (The UEFI boot ROM we will use later comes from a separate package, bhyve-firmware, so install that one too if it is not already present.)
pkg install grub2-bhyve
That’s it, for now, we’ll use it later …
Network Configurations
Next we will use ifconfig to create our network (we are basically creating a virtual switch – this definition might not be technically exact, but it works well in my mind – that connects the main Ethernet port of my server (igb0) to virtual Ethernet ports, one for each virtual machine).
# ifconfig bridge create
# ifconfig bridge0 addm igb0
# ifconfig bridge0 name igb0bridge
# ifconfig tap0 create
# ifconfig igb0bridge addm tap0
Then we add a few lines to the /etc/rc.conf file to recreate these network elements at startup:
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0_name="igb0bridge"
ifconfig_igb0bridge="addm igb0 addm tap0 up"
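As an alternative to editing /etc/rc.conf by hand, the same entries can be appended with sysrc (each call writes one line into /etc/rc.conf):
# sysrc cloned_interfaces="bridge0 tap0"
# sysrc ifconfig_bridge0_name="igb0bridge"
# sysrc ifconfig_igb0bridge="addm igb0 addm tap0 up"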
After all this configuration, here is the output I get from the ifconfig command (with a few edits):
igb0: flags=8943 metric 0 mtu 1500
options=a520b9
ether aa:aa:aa:aa:aa:aa
inet xxx.xxx.xxx.142 netmask 0xffffff00 broadcast xxx.xxx.xxx.255
inet xxx.xxx.xxx.143 netmask 0xffffffff broadcast xxx.xxx.xxx.143
inet xxx.xxx.xxx.144 netmask 0xffffffff broadcast xxx.xxx.xxx.144
media: Ethernet autoselect (1000baseT )
status: active
nd6 options=29
lo0: flags=8049 metric 0 mtu 16384
options=680003
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21
igb0bridge: flags=8843 metric 0 mtu 1500
ether aa:aa:aa:aa:aa:ac
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto stp-rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: tap0 flags=143
ifmaxaddr 0 port 5 priority 128 path cost 2000000
member: igb0 flags=143
ifmaxaddr 0 port 1 priority 128 path cost 2000000
groups: bridge
nd6 options=9
tap0: flags=8943 metric 0 mtu 1500
options=80000
ether aa:aa:aa:aa:aa:ad
groups: tap
media: Ethernet autoselect
status: active
nd6 options=29
Opened by PID 12345
ZFS Volume
We then need to create the volume that will be used by the virtual machine. We can’t simply use a regular ZFS volume: it needs to be created with volmode=dev (I did not research what the differences are). Anyway, here is how to do it:
# zfs create -V240G -o volmode=dev storage/vm/ubuntu
On my server I created the main zpool as “storage” and a regular ZFS dataset (storage/vm) to hold the virtual machines. So in this case I created ubuntu to install Ubuntu, but it could have been called anything I liked 🙂
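To verify the volume was created as intended, zfs can show its volmode, and the character device that bhyve will use should exist under /dev/zvol:
# zfs get volmode storage/vm/ubuntu
# ls -l /dev/zvol/storage/vm/ubuntu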
Create the VM
One last thing I need is the ISO file of the Ubuntu installer; I downloaded it and saved it in my home directory.
One very interesting thing is that I can connect the virtual monitor of the bhyve virtual machine to a VNC server, so effectively I will be able to manage the installation via VNC. I will use VNC Viewer, a free product from RealVNC, but any VNC software should do.
My server has a firewall allowing mostly anything out but only specific ports in. I could open port 5900 to connect directly via VNC, but it is much better to use an SSH tunnel so that the connection is encrypted. I create the tunnel like this (from the terminal on my own machine, not the server):
# ssh -p2200 -L 5900:localhost:5900 server_ip
Note that the -p2200 is needed because my server uses port 2200 for ssh connections.
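If you open this tunnel often, an equivalent entry in ~/.ssh/config on your own machine saves retyping it (“bhyve-vnc” is just a hypothetical alias; adjust HostName to your server’s address):
Host bhyve-vnc
    HostName server_ip
    Port 2200
    LocalForward 5900 localhost:5900
After that, ‘ssh bhyve-vnc’ opens the same tunnel.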
Once this tunnel is active I can start the new VM. This is the command (which is pretty complicated, hence the need for the ‘vm-bhyve’ utility if I plan to use bhyve on a regular basis):
# bhyve -c 1 -m 16G -w -H \
-s 0,hostbridge \
-s 3,ahci-cd,/path/to/ubuntu-20.04.3-live-server-amd64.iso \
-s 4,virtio-blk,/dev/zvol/storage/vm/ubuntu \
-s 5,virtio-net,tap0 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc -l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
ubuntu
Now, to tell you the truth, I don’t fully understand everything in this command, but here are the basics:
-c 1 # the VM will have one virtual processor (CPU)
-m 16G # the VM will have 16 GB of RAM
-w means “ignore accesses to unimplemented MSRs” and -H means “yield the virtual CPU thread when the guest executes a HLT instruction”; don’t ask me more about this, at least not yet.
Each -s defines a separate virtual PCI slot, so we have one for the “virtual CD”, one for the hard drive, one for tap0 (our virtual network interface), one for an 800×600 screen streamed over VNC…
-l bootrom specifies to use BHYVE_UEFI.fd as the boot ROM (note that this firmware actually comes from the bhyve-firmware package, not from the ‘grub2-bhyve’ we installed earlier; grub2-bhyve is only needed for guests booted without UEFI).
The last parameter, ubuntu, gives the VM a name.
Once this is launched I am able to connect my VNC client to “localhost:5900” and, thanks to the SSH tunnel I created earlier, I am actually viewing the VNC of my server’s localhost…
From here I am able to follow and control the installation options.
After I’m done, the installer will typically ask to restart the computer; I will do that, but then I will need to actually restart the VM.
First I shut it down:
# bhyvectl --destroy --vm=ubuntu
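As a side note, every running bhyve instance shows up as an entry under /dev/vmm, so you can check what exists before destroying anything:
# ls /dev/vmm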
Then I start it again with a similar (but not identical) command:
# bhyve -c 1 -m 16G -w -H \
-s 0,hostbridge \
-s 4,virtio-blk,/dev/zvol/storage/vm/ubuntu \
-s 5,virtio-net,tap0 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600 \
-s 30,xhci,tablet \
-s 31,lpc -l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
ubuntu
At this point I had some trouble with my VNC: it turned out that the installed VM required high video quality turned on in my VNC preferences for that connection, otherwise it displayed garbage, like a monitor out of sync.
Technically, if everything went well, there might be no reason to connect to VNC at all: the new installation would boot and I would be able to connect via SSH (assuming I enabled it) directly to the newly installed VM.
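One more thing worth knowing: bhyve exits when the guest reboots (with exit code 0, per bhyve(8)) instead of restarting it, so a common pattern is to wrap the start command in a small loop that keeps recreating the VM until the guest actually powers off or errors out. A minimal sketch, assuming the same names and paths used above:
#!/bin/sh
# keep restarting the vm as long as the guest merely rebooted (exit code 0)
while true; do
  bhyve -c 1 -m 16G -w -H \
    -s 0,hostbridge \
    -s 4,virtio-blk,/dev/zvol/storage/vm/ubuntu \
    -s 5,virtio-net,tap0 \
    -s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600 \
    -s 30,xhci,tablet \
    -s 31,lpc -l com1,stdio \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    ubuntu
  [ $? -ne 0 ] && break
  bhyvectl --destroy --vm=ubuntu
done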