Xen userland consists of a toolstack that lets users create, destroy, configure, and otherwise manage the guest lifecycle. Some toolstacks, such as xend and xm, are obsolete/deprecated; xl is the current default toolstack and ships with Xen.
Linux distro users can install the xl toolstack from pre-built packages, e.g. the xen-utils package installs a particular version of xl.
The xl toolstack must match the version of the Xen hypervisor. This leaves a user with two options:
Ubuntu 16.04 LTS users can install xen-utils v4.8, so they would need to build and use the same hypervisor version (mainline is 4.10-unstable).
On Ubuntu 17.04, xen-utils v4.8 is available, so you can go up to hypervisor v4.8.
To downgrade to a specific version of the Xen hypervisor, go to your Xen repo and check out the corresponding tag:
git checkout RELEASE-4.8.0
Rebuild the xen hypervisor (refer to our previous post) and update your SD card boot directory.
Cross-compiling the user-land is a bit trickier due to package dependencies.
There are a few options:
https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/CrossCompiling
We actually found the native build from the Orange Pi terminal to be cleaner. There can be some problems with startup configuration, so we suggest first installing the stock xen-utils package for your distribution and then executing the build and install steps described below.
Once you have the Xen code, development tools, and all dependency packages installed on the SD card, and Xen and dom0 are working, enter the following command on the dom0 console:
make dist-tools XEN_TARGET_ARCH=arm64
To install the toolstack, enter the following command:
make install-tools XEN_TARGET_ARCH=arm64
To test the installed toolstack you can enter the following command on dom0 console:
xl list
This should display output similar to the following:
Name                 ID   Mem VCPUs      State   Time(s)
Domain-0              0   256     4     r-----    3038.6
If you can see Domain-0 listed, your toolstack is up and running and you are ready to create your first VM.
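If you later want to use the xl list output in scripts, the column layout shown above can be parsed with standard tools. A minimal sketch, assuming the default xl list format (the awk column numbers are based on that layout):

```shell
# Print each domain's name and memory allocation (MB) from `xl list`,
# skipping the header row. Columns 1 and 3 are Name and Mem in the
# default output format.
xl list | awk 'NR>1 {print $1, $3}'
```

The same pattern works for any of the other columns, e.g. $5 for the domain state.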
Before you can create a guest/VM, you need some sort of rootfs for the guest. There are multiple options.
You can partition your SD card so that dom0 and guests have their separate partitions.
The other option is to create a disk image and store it in the dom0 rootfs.
Once a partition or disk image is created, you will need to generate a rootfs within it as well. The simplest approach is to reuse the same contents as the dom0 rootfs.
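For the disk-image route, the steps can be sketched as follows. The image path, size, and choice of loop device are examples, and these commands must be run as root on dom0:

```shell
# Create an 8 GiB sparse disk image for the guest (size is an example)
dd if=/dev/zero of=/root/guest1.img bs=1M count=0 seek=8192

# Put an ext4 filesystem on it (-F because the target is a regular file)
mkfs.ext4 -F /root/guest1.img

# Attach the image as a loopback device and mount it
losetup /dev/loop0 /root/guest1.img
mount /dev/loop0 /mnt

# Populate the guest rootfs, e.g. by reusing the dom0 contents
cp -ax /bin /etc /lib /sbin /usr /var /mnt/
mkdir -p /mnt/dev /mnt/proc /mnt/sys /mnt/tmp /mnt/root

# Unmount, but leave /dev/loop0 attached: the guest configuration
# refers to it as the guest's disk
umount /mnt
```

Note that loopback attachments do not survive a reboot, so you will need to re-run the losetup step before launching the guest again.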
There are ample tutorials online on how to accomplish this. We leave it to the user to select the option that works best for them.
You will need to create a guest configuration file. Create a file named guest1.conf with the following contents:
kernel = "/boot/Image"
memory = 128
name = "guest1"
vcpus = 2
serial = "pty"
disk = ['phy:/dev/loop0,xvda,w']
extra = 'console=hvc0 root=/dev/xvda rw clk_ignore_unused'
Note that we are using a disk image which has been set as loopback device loop0.
You can now launch your guest using the following command:
xl create -c guest1.conf
The -c option attaches you to the guest console. You should see the guest Linux kernel booting on the console, and you should be able to log in and execute commands on the guest kernel.
To return to the dom0 console, press:
Ctrl + ]
You can switch back to the guest1 console with:
xl console guest1
You have successfully booted a guest domU kernel. You can create another disk image and guest configuration and launch a second guest. You can also execute different xl commands to tinker with your domU guests.
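For example, a second guest could reuse the same kernel with its own disk image, here assumed to be attached as /dev/loop1 in the same way loop0 was set up for the first guest (guest2.conf, all values illustrative):

```
kernel = "/boot/Image"
memory = 128
name = "guest2"
vcpus = 2
serial = "pty"
disk = ['phy:/dev/loop1,xvda,w']
extra = 'console=hvc0 root=/dev/xvda rw clk_ignore_unused'
```

With both guests running, commands such as xl list, xl pause, xl unpause, xl shutdown, and xl destroy let you inspect and control them from dom0.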
In the next blog post, we’ll discuss how to enable display subsystem.
Awais Masood is an Associate Software Architect at Vadion. He has been working on embedded systems for more than 10 years and has extensive experience porting device drivers and OS kernels to ARM-based hardware platforms.