With storage (and backups) configured, I am moving on to compute resources. Currently, I have Docker containers managed in Unraid, and my goal is to add some redundancy, which I assume will involve migrating to Kubernetes nodes running on multiple physical servers. In preparation for this, I have moved my DDNS and WireGuard services from Unraid to my pfSense router. This means I can maintain remote access to my network as long as the router is on, even if nothing else on the network is running. With that taken care of, I now need somewhere for all of the other containers.
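
For reference, the client side of this setup is just a standard WireGuard config pointed at the DDNS hostname. A minimal sketch, where every key, address, and hostname is a placeholder (on pfSense itself, the server side is configured through the VPN > WireGuard pages rather than a file like this):

    # Example road-warrior client config (wg0.conf); all values are placeholders
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.6.0.2/32       # tunnel address assigned to this client
    DNS = 10.0.0.1              # optional: resolve LAN names through the router

    [Peer]
    PublicKey = <router-public-key>
    Endpoint = home.example.net:51820   # DDNS hostname the router keeps updated
    AllowedIPs = 10.0.0.0/16            # route only home-network traffic
    PersistentKeepalive = 25            # keeps the tunnel alive behind NAT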

System Requirements

I have a few requirements for the hypervisor I use for virtualized compute:

  • Ability to run Windows, Linux, and BSD VMs
  • Support for hardware pass-through
  • Support for ZVol disks (see the sketch after this list)
  • Web UI for management
  • FOSS with reasonable troubleshooting resources/support options
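
One note on the ZVol requirement: a zvol is a ZFS dataset exposed as a block device, which makes it a natural backing store for VM disks. A minimal sketch of what I mean, with placeholder pool/dataset names:

    # Create a sparse 32 GiB zvol to back a VM disk
    zfs create -s -V 32G -o volblocksize=16K tank/vms/disk0

    # The hypervisor can attach the resulting block device directly
    ls -l /dev/zvol/tank/vms/disk0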

With those requirements in mind, I found Proxmox and XCP-ng to be the popular options for what I’m looking to do, and I have started researching and testing both.

First Impressions

To start, I will set up each option to get an idea of what the UI looks like, how administration will work, and form some basic opinions. I expect that either option will do what I need, so it’s more a question of how easy it is to get things running and administer the server(s).

Proxmox

This seems to be the more popular option at the moment, with an active community and development team. It seems to check all my boxes, including support for clustering and more functionality than I will likely use. These are only first impressions after setup and poking around the Web UI:

Pros

  • Web UI with 2FA support and user management
  • Support for LXC and VMs
  • System monitoring tools built-in
  • KVM virtualization on Debian is a familiar configuration

Cons

  • The clock is incorrect, with no obvious place in the UI to configure NTP (a command-line fix is sketched after this list)
  • Web UI doesn’t appear customizable and is bloated with Ceph, Firewall, and Disk management tools I won’t use
  • There’s a “No Subscription” modal that appears EVERY TIME you log in unless you purchase an annual subscription
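
Since Proxmox is Debian underneath, the clock issue is at least fixable from a shell even without a spot in the UI. A sketch, assuming a recent release that ships chrony (older releases used systemd-timesyncd, so it’s worth checking which is actually running):

    # Set the timezone (placeholder value) and check sync status
    timedatectl set-timezone America/Chicago
    timedatectl status

    # NTP servers are listed in /etc/chrony/chrony.conf; restart after editing
    systemctl restart chrony
    chronyc tracking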

The subscription warning is annoying to me as it feels needlessly intrusive. A banner/watermark, an option to dismiss it for a month at a time, or even a one-time payment just to remove the warning would be nice. There are threads on the Proxmox forums discussing this; it is a conscious design decision that seems to be here to stay.

XCP-ng

The older and more established option of the two. The management interface here (Xen Orchestra) is distinct from the core OS and runs in a container, but for the purposes of this comparison I will include Xen Orchestra functionality:

Pros

  • Maintained by the Linux Foundation
  • Clean and easily navigable management UI
  • Monitoring tools built-in

Cons

  • Xen hypervisor is less common than KVM

Overall, I find this UI much simpler and easier to navigate than Proxmox’s. I also really like that Xen Orchestra is independent of the actual virtualization system. I am less certain of how Xen will compare to KVM, but based on my initial impressions I will start setting things up with XCP-ng and see how it goes.
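
Because Xen Orchestra is decoupled like that, it can run anywhere a container can. As an example, there is a community-built Docker image (ronivay/xen-orchestra; note this is a third-party build rather than the official XOA appliance, and the ports and volume paths below are assumptions worth verifying against that image’s documentation):

    # Run a community Xen Orchestra build in Docker (unofficial image).
    # Host port 8080 serves the web UI; the volumes persist XO and Redis state.
    docker run -d --name xen-orchestra \
      -p 8080:80 \
      -v xo-data:/var/lib/xo-server \
      -v xo-redis:/var/lib/redis \
      --restart unless-stopped \
      ronivay/xen-orchestra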

Some other notes

Prior to installing anything, I did some research into Proxmox and XCP-ng. I saw that both Linode and AWS have adopted KVM in recent years, citing better performance (though Amazon uses its own hypervisor built on top of KVM). However, I am comparing Proxmox and XCP-ng, not KVM and Xen, so I don’t believe these comparisons are directly applicable. I also came across a forum post on lawrencesystems.com from the CEO and co-founder of the company responsible for Xen Orchestra explaining some of the details of Xen, how it compares to KVM, and what the goals are for XO.

Where to go from here

I will continue setting up XCP-ng and get a better idea of whether it is the solution I am looking for. I may do some more with Proxmox, but based on my initial impressions and its overall scope, I don’t think it fits as well into the setup I have in mind. For the moment, I have everything installed on a spare computer (a loose motherboard and PSU on a shelf in the spare room), so I have somewhere to break things before committing to any deployment.