The Xen hypervisor

Xen is a hypervisor: a virtual machine management system that allows several operating systems to run simultaneously on a single physical server.

Xen can run virtual machines both in full virtualization mode and in "paravirtualization" mode.

Virtual machines run by Xen are called "domains". A distinctive feature of Xen is the dom0 domain, which actually manages the physical server. Guest virtual machines run in unprivileged domains: dom1, dom2 and so on.

This article describes running Xen on the Debian distribution, as well as creating and running virtual machines in both paravirtualized and fully virtualized modes.

An overview of Xen

Types of virtual machines

Xen supports two primary types of virtualization: paravirtualization, and hardware virtual machine (HVM), also known as "full virtualization". Paravirtualization uses modified guest operating systems that we refer to as enlightened guests. These operating systems are aware that they are being virtualized and as such don't require virtual "hardware" devices; instead, they make special calls to Xen that allow them to access CPUs, storage and network resources.

In contrast, HVM guests need not be modified, as Xen creates a fully virtual set of hardware devices for the machine that resembles a physical x86 computer. This emulation incurs much more overhead than the paravirtualization approach, but allows unmodified guest operating systems like Microsoft Windows to run on top of Xen. HVM support requires special CPU extensions: VT-x for Intel processors and AMD-V for AMD-based machines. This technology is now prevalent, and all recent servers and desktops should be equipped with it.

A third type of virtualization, though not discussed in this guide, is PVHVM ("paravirtualization on HVM"): an HVM domain with paravirtualized storage, network and other devices. This provides the best of both worlds, reducing expensive emulation while keeping hardware-accelerated CPU and memory access.

A brief look at Xen architecture

To understand how storage, networking and other resources are delivered to guest systems, we need to quickly look at how the different parts of Xen interact.

Dom0: drivers and hypervisor management

Dom0 forms the interface to the hypervisor: through special instructions, dom0 communicates with Xen and changes the hypervisor's configuration, including instantiating new domains and related tasks.

Another crucial part of dom0's role is that it is the primary interface to the hardware. Xen doesn't contain device drivers; instead, devices are attached to dom0 and standard Linux drivers are used. Dom0 then shares these resources with guest operating systems through a number of "backend" daemons.

Paravirtualization

Each paravirtualized subsystem in Xen consists of two parts: 1) the aforementioned "backend", which lives in dom0, and 2) the "frontend" driver within the guest domain. The backend is effectively a daemon that uses special ring-buffer-based interfaces to transfer data to guests, be it to provide a virtual hard disk, an Ethernet adapter or even a generic SCSI device. The frontend driver then takes this stream of data and converts it back into a device within the guest operating system.

The two important paravirtualized subsystems are netback/netfront and blkback/blkfront: the paravirtualized networking and storage systems, respectively.

Installing Xen

Any modern computer with Intel VT-x or AMD-V support is suitable for running the Xen hypervisor.

You will also need a Windows installation disk image and a VNC client.

Enable virtualization in the BIOS

Xen needs hardware virtualization support to run machines in full virtualization (HVM) mode. The relevant BIOS option may be called "Enable Virtualization Technology" or "Enable Intel VT" on Intel chipsets, or "Vanderpool Technology".
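If a Linux system is already running on the machine, you can also check for hardware virtualization support from the command line before digging through BIOS menus. A quick sketch (the vmx flag indicates Intel VT-x, svm indicates AMD-V):

```shell
# Look for hardware virtualization CPU flags in /proc/cpuinfo:
# vmx = Intel VT-x, svm = AMD-V. No output means the CPU lacks support.
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u
```

Note that on some boards the flag may still be reported even when the feature is disabled in the BIOS, so treat this as a capability check rather than proof that it is enabled.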

Installing Debian Squeeze

Downloading the Debian disk image

Download the ISO image from this URL:

   http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/

The netinst image is sufficient for our purposes.

Burn the ISO to disc using your computer's standard utilities. I recommend wodim on Linux or the built-in ISO burning feature in Windows.
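Before burning, it is worth verifying the download against the checksum file Debian publishes alongside the images (current images ship a SHA512SUMS file; older releases used MD5SUMS or SHA256SUMS). A sketch, simulated here with a stand-in file so the commands are self-contained:

```shell
# Sketch: verify an ISO against a checksum list with sha512sum -c.
# The ISO and SHA512SUMS below are stand-ins for illustration; with a real
# download, use the ISO and the SHA512SUMS file from the same mirror.
cd "$(mktemp -d)"
echo "stand-in iso contents" > debian-netinst.iso
sha512sum debian-netinst.iso > SHA512SUMS
sha512sum -c SHA512SUMS   # prints "debian-netinst.iso: OK"
```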

A brief overview of Debian

Debian is a simple, stable and well-supported Linux distribution. It has included Xen support since Debian 3.1 "Sarge", released in 2005. The current stable release, Debian 6.0 "Squeeze", ships with Xen 4.0.1 and a Xen-enabled Linux 2.6.32 kernel.

Debian uses the Apt package management system, which is both powerful and simple to use. Installing a package is as easy as the following example:

   aptitude install htop

where htop is the package we want to install.

Simple tasks such as configuring startup scripts and setting up the network are covered by this tutorial, so don't worry if you haven't used Debian before!

Many popular distributions are based on Debian and also use the Apt package manager; if you have used Ubuntu, Linux Mint or Damn Small Linux you will feel right at home.

Installing the system

Boot the Debian Squeeze installer CD

Insert the Debian CD and set the CD-ROM drive as the default boot device in the BIOS, or use the system boot menu if your BIOS supports it (usually F12).

You should see a menu; choose the default "Install" option to begin the installation process. The Debian installer is very straightforward. Follow the prompts until you reach the disk partitioning section.

Choose advanced/custom; we are going to configure a few partitions here: one for /boot, another for /, one more for swap, and a final partition to set up as an LVM volume group for our guest machines.

First create the /boot partition by choosing the disk and hitting Enter; make the partition 300MB, format it as ext2, and choose /boot as the mount point.

Repeat the process for /, this time changing the mount point to / and making it around 15GB. Format it as ext3.

Create another partition approximately 1.5x the size of your RAM and select it to be used as a swap volume.
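The 1.5x figure is quick to compute; for example, reading the installed RAM from /proc/meminfo on a running Linux system:

```shell
# Sketch: suggest a swap size of roughly 1.5x installed RAM, in megabytes.
ram_mb=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_mb=$((ram_mb * 3 / 2))
echo "suggested swap partition: ${swap_mb}MB"
```

So a machine with 2048MB of RAM would get a swap partition of about 3072MB.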

Finally, create a partition that consumes the rest of the disk space, but don't format it or assign a mount point.

We should now have a layout that looks like this, assuming your disk device is /dev/sda:

   sda1 - /boot 300MB
   sda2 - / 15GB
   sda3 - swap
   sda4 - reserved for LVM

When you reach the package selection stage, install only the base system; we won't need a GUI or any other packages for this guide.

Continue through the installer, then reboot and log in at the prompt as root.

Настройка LVM для дисков виртуальных машин

LVM is the Linux Logical Volume Manager, a technology that allows Linux to manage block devices in a more abstract manner.

LVM introduces the concept of a "logical volume": effectively a virtualized block device composed of blocks written to one or more physical devices. Unlike regular disk partitions, these blocks don't need to be contiguous.

Because of this abstraction, logical volumes can be created, deleted, resized and even snapshotted without affecting other logical volumes.

LVM creates logical volumes within what is called a volume group: a pool of underlying storage made up of one or more physical volumes.

The process of setting up LVM can be summarized as: allocate a physical volume, create a volume group on top of it, then create logical volumes to store data.

Because of these features, and its superior performance over file-backed virtual machines, I recommend using LVM if you are going to store VM data locally.

Now let's install LVM and get started!

Install LVM with aptitude by running this command:

   aptitude install lvm2

Now that we have LVM installed, let's configure it to use /dev/sda4 as its physical volume:

   pvcreate /dev/sda4

OK, now LVM has somewhere to store its blocks (known as extents, for future reference). Let's create a volume group called "vg0" using this physical volume:

   vgcreate vg0 /dev/sda4

Now LVM is set up and initialized, so that we can later create logical volumes for our virtual machines.

For the interested, below are a number of useful commands and tricks for using LVM.

Create a new logical volume:

   lvcreate -n<name of the volume> -L<size, you can use G and M here> <volume group>

For example, to create a 100 gigabyte volume called database-data in the volume group vg0:

   lvcreate -ndatabase-data -L100G vg0

You can then remove this volume with the following:

   lvremove /dev/vg0/database-data

Note that you have to provide the path to the volume here.

If you already have a volume set up that you would like to copy, LVM has a handy feature that allows you to create a CoW (copy-on-write) clone called a snapshot. This means you can make an "instant" copy that only stores the changes made relative to the original. There are a number of caveats to this, which will be discussed in a future article. The most important thing to note is that the "size" of the snapshot is only the amount of space allocated to store changes, so you can make the snapshot "size" much smaller than the source volume.

To create a snapshot use the following command:

   lvcreate -s /dev/vg0/database-data -ndatabase-backup -L5G

Once again note the use of the full path.

Setting up a Linux bridge for networking

Next we need to set up our system so that we can attach virtual machines to the external network. This is done by creating a virtual switch within dom0 that takes packets from the virtual machines and forwards them onto the physical network, so they can reach the internet and other machines on your network.

The piece of software we use to do this is called the Linux bridge, and its core components reside inside the Linux kernel. In this case the "bridge" is effectively our virtual switch. The Debian kernels are compiled with the Linux bridging module, so all we need to do is install the control utilities:

   aptitude install bridge-utils

With bridge-utils installed we now have a utility called "brctl". It talks to the Linux bridging module to set up new bridges and attach physical or virtual interfaces to them.

brctl can be used to create a bridge as follows, where <bridgename> is the name of the bridge:

   brctl addbr <bridgename>

And interfaces can be added to that bridge by running the following:

   brctl addif <bridgename> <interface>

Instead of calling brctl directly, we are going to configure our bridge through Debian's networking infrastructure, which is configured through

   /etc/network/interfaces

Open this file with the editor of your choice (additional editors can be installed with aptitude). Nano is installed by default if you selected the minimal install:

   nano /etc/network/interfaces

Depending on your hardware, you will probably see a file similar to this:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp

This file is very simple: each stanza describes a single interface. Breaking it down, "auto eth0" means that eth0 will be configured when ifup -a is run (which happens at boot time), so the interface is started and stopped automatically for you. "iface eth0" then describes the interface itself; in this case it merely specifies that the interface should be configured by DHCP. For this guide we assume you have DHCP running on your network; if you are using static addressing you probably know how to set that up. We are going to edit this file so that it looks like this:

    auto lo
    iface lo inet loopback

    iface eth0 inet manual
    auto xenbr0

    iface xenbr0 inet dhcp
        bridge_ports eth0

This will set up the bridge xenbr0 and add eth0 to it for us. The equivalent commands would be:

   brctl addbr xenbr0
   brctl addif xenbr0 eth0
   dhclient xenbr0

Except that this will now be done automatically at boot and be completely managed by Debian.
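If you use static addressing instead of DHCP, the bridge stanza takes the usual static options; a sketch, where the addresses are examples you would replace with your own:

```
    iface eth0 inet manual
    auto xenbr0

    iface xenbr0 inet static
        bridge_ports eth0
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
```

The bridge, not eth0, carries the IP configuration, just as in the DHCP case above.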

This method of network configuration is best practice: the automated network scripts called by xend are now deprecated. We will discuss this later, once we have Xen installed.

Installing the Xen packages

The Debian Xen packages consist primarily of a Xen-enabled Linux kernel, the Xen hypervisor itself, a modified version of QEMU that supports Xen's HVM mode, and a set of userland tools.

All of this except QEMU can be installed via an Apt meta-package called xen-linux-system. A meta-package is basically a way of installing a group of packages automatically; Apt will of course resolve all dependencies and bring in all the extra libraries we need.

Let's install the xen-linux-system meta-package:

   aptitude -P install xen-linux-system

Next we will install the Xen QEMU package so that we can boot HVM guests later (this is optional, but highly recommended):

   aptitude install xen-qemu-dm

We now have Xen, a Xen-enabled kernel and the userland tools installed, and are almost ready to go.

Configuring GRUB to boot Xen

Because Xen starts before your operating system, we need to change how your system's boot process is set up. The bootloader installed during installation, GRUB, is what tells your computer which operating system to start and how.

GRUB 2's configuration is stored in the file /boot/grub/grub.cfg. However, we aren't going to edit this file directly, as it is regenerated every time we update our kernel. Debian configures GRUB for us using a number of automated scripts that handle upgrades and so on; these scripts are stored in /etc/grub.d/* and are configured via

   /etc/default/grub

We are going to change the order of the boot entries so that our Xen system is the default option. The command below gives Xen a higher priority than the default Linux entry, so it takes the first position in the boot menu:

   mv -i /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen

We then regenerate the /boot/grub/grub.cfg file by running the command below:

   update-grub

We can now reboot, and the default boot option will start our dom0 on top of Xen!


Basic Xen commands

Before we dive into creating guest domains, we will quickly cover some basic Xen commands using the "xl" utility. xl is the replacement for the older "xm" tool, which, like the xend daemon mentioned earlier, is deprecated and will eventually be removed.

So let's start with the simple stuff!

   xl info

This returns information about the Xen hypervisor and dom0, including the version, free memory, etc.

   xl list

Lists running domains along with their IDs, memory, state and CPU time consumed.
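The output of xl list is plain whitespace-separated columns, so it is easy to script against. A sketch that pulls out just the guest domain names; the sample output below is illustrative, and on a real dom0 you would pipe xl list straight into awk:

```shell
# Sketch: extract guest domain names from `xl list` output.
# The here-string stands in for real output; on a dom0 you would run:
#   xl list | awk 'NR>1 && $1!="Domain-0" {print $1}'
xl_output='Name                              ID   Mem VCPUs      State   Time(s)
Domain-0                           0  1024     2     r-----     100.0
tutorial-pv-guest                  1   512     2     -b----      10.0'
printf '%s\n' "$xl_output" | awk 'NR>1 && $1!="Domain-0" {print $1}'
# prints: tutorial-pv-guest
```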

   xl top

Shows running domains in real time, similar to the "top" command under Linux. This can be used to visualize CPU usage, memory usage and block device access.

We will cover some more commands during the creation of our guest domains.

Creating a paravirtualized Debian virtual machine

PV guests are notoriously "different" to install. Due to the nature of enlightened systems, they have no concept of a CD-ROM installer analogous to their physical counterparts. Luckily, however, there are tools that help us prepare "images", effectively snapshots of operating systems, that can run inside Xen domains.

Debian contains a number of tools for creating Xen guests, the easiest of which is known as xen-tools. This software suite manages downloading and installing guest operating systems, including both Debian- and RHEL-based domUs. In this guide we are going to use xen-tools to prepare a paravirtualized Debian Squeeze domU.

Xen-tools can use LVM storage for guest operating systems; in this guide we created the volume group "vg0" in the section on setting up LVM.

When guests are paravirtualized there is no "BIOS" or bootloader resident within the guest filesystem, and for a long time guests were given kernels stored outside the guest image. This is bad for maintainability (guests cannot upgrade their kernels without access to dom0) and less flexible in terms of boot options, since these must be passed via the config file.

The Xen community wrote a utility known as pygrub, a Python application that enables dom0 to parse the GRUB configuration of the domU and extract its kernel, initrd and boot parameters. This allows kernel upgrades inside guest machines, along with a GRUB menu. Using pygrub, or the stub-domain implementation known as pv-grub, is best practice for starting PV guests. In some cases pv-grub is arguably more secure, but as it is not included with Debian we won't use it here, though it is recommended in production environments where guests cannot be trusted.

Apart from this, PV guests are very similar to their HVM and physical OS counterparts.

Configuring xen-tools and building our guest

First let's install the xen-tools package:

   aptitude install xen-tools

We can now create a guest operating system with this tool. It effectively automates the whole process of setting up a PV guest from scratch, right up to creating the config files and starting the guest. The process can be summarized as follows:

  • Create logical volume for rootfs
  • Create logical volume for swap
  • Create filesystem for rootfs
  • Mount rootfs
  • Install operating system using debootstrap (or rinse etc, only debootstrap covered here)
  • Run a series of scripts to generate guest config files like fstab/inittab/menu.lst
  • Create a Xen config file for the guest
  • Generate a root password for the guest system
  • Unmount the guest filesystem

These 9 steps can be carried out manually, but the manual process is outside the scope of this guide. Instead we will execute the command below:

  xen-create-image --hostname=tutorial-pv-guest \
  --memory=512mb \
  --vcpus=2 \
  --lvm=vg0 \
  --dhcp \
  --pygrub \
  --dist=squeeze

This command instructs xen-create-image (the primary binary of the xen-tools toolkit) to create a guest domain with 512MB of memory and 2 vcpus, using storage from the vg0 volume group we created, with DHCP for networking and pygrub to extract the kernel from the image at boot; lastly, we specify that we want to deploy a Debian Squeeze operating system.
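For reference, the guest config that xen-create-image writes to /etc/xen/tutorial-pv-guest.cfg looks roughly like the sketch below. The exact fields, device names and paths vary with the xen-tools and Xen versions, so treat this as illustrative rather than exact:

```
    bootloader = '/usr/lib/xen-4.0/bin/pygrub'
    vcpus      = '2'
    memory     = '512'
    root       = '/dev/xvda2 ro'
    disk       = [
                     'phy:/dev/vg0/tutorial-pv-guest-swap,xvda1,w',
                     'phy:/dev/vg0/tutorial-pv-guest-disk,xvda2,w',
                 ]
    name       = 'tutorial-pv-guest'
    dhcp       = 'dhcp'
    vif        = [ 'mac=00:16:3E:xx:xx:xx' ]
```

Note the bootloader line: this is the pygrub setup requested by the --pygrub flag, so no kernel path needs to live in the config.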

This process will take a few minutes, after which you can start the guest with:

  xl create -c /etc/xen/tutorial-pv-guest.cfg

The -c in this command tells xl that we wish to connect to the guest's virtual console, a paravirtualized serial port within the domain that xen-create-image configured with a getty listening on it. This is analogous to running:

  xl create /etc/xen/tutorial-pv-guest.cfg && xl console tutorial-pv-guest

You can leave the guest's virtual console by pressing Ctrl+] and re-enter it by running the "xl console <domain>" command.

You can later shut down this guest, either from within the domain or from dom0, with the following:

  xl shutdown tutorial-pv-guest

That completes our section on setting up your first paravirtualized domain! If you have no interest in setting up an HVM domain there is no need to read any further, but it is highly recommended!

Creating a Windows HVM virtual machine

HVM guests are quite different from their PV counterparts. Because they require hardware emulation, there are more moving pieces that need to be configured.

The main point worth mentioning here is that HVM requires the emulation of ATA, Ethernet and other devices, while virtualized CPU and memory access is performed in hardware to achieve good performance. Because of this, the default emulated devices are very slow, and we generally try to use PV drivers within HVM domains. Once our Windows guest is running, we will install a set of Windows PV drivers that greatly increase performance.

This extra emulation is provided by a Xen-modified version of QEMU. We should have installed this earlier, but in case you skipped that step, install the Xen QEMU package now:

   aptitude install xen-qemu-dm

Once the necessary packages are installed, we need to create a logical volume to store our Windows VM's hard disk, then create a config file that tells Xen to start the domain in HVM mode and boot from the DVD in order to install Windows.

First create a new logical volume: name it "windows", make it 20GB in size, and use the volume group vg0 we created earlier.

   lvcreate -nwindows -L20G vg0

Next, open a new file with your text editor of choice:

   nano windows.cfg

Paste the config below into the file and save it. NOTE: this assumes your Windows ISO is located in /root/ with the filename windows.iso.

   kernel = "/usr/lib/xen-4.0/boot/hvmloader"
   builder='hvm'
   memory = 4096
   vcpus=4
   name = "windows"
   vif = ['bridge=xenbr0']
   disk = ['phy:/dev/vg0/windows,hda,w','file:/root/windows.iso,hdc:cdrom,r']
   acpi = 1
   device_model = 'qemu-dm'
   boot="d"
   sdl=0
   serial='pty'
   vnc=1
   vnclisten=""
   vncpasswd=""

You can then start the domain and connect to it via VNC from your graphical machine:

   xl create windows.cfg

The VNC display should be available on port 5900 of your dom0's IP; for instance, using gvncviewer:

   gvncviewer <dom0 ip>:5900

Once you have installed Windows, formatting the disk and following the prompts, the domain will restart. This time we want to prevent it from booting from the DVD, so destroy the domain with:

   xl destroy windows

Then change the boot line in the config file to read boot="c" and restart the domain with:

   xl create windows.cfg
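The boot-line change in the previous step can also be scripted with sed instead of edited by hand; a sketch, using a minimal stand-in config so the example is self-contained:

```shell
# Sketch: flip the HVM guest from DVD boot to disk boot in windows.cfg.
# The one-line file created here stands in for the real config.
cd "$(mktemp -d)"
printf 'boot="d"\n' > windows.cfg
sed -i 's/^boot="d"/boot="c"/' windows.cfg
grep '^boot=' windows.cfg   # prints: boot="c"
```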

Reconnect with VNC and finish the installation. When this process is complete, you should proceed to download the GPLPV drivers for Windows by James Harper.

Signed drivers can be obtained from here:

http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers

Many thanks to Univention for making signed drivers available to the Xen community, and of course a massive thanks to James for all his work on making Windows on Xen such a smooth experience. After finalizing the installation and rebooting, you should notice much improved disk and network performance, and Xen will now be able to gracefully shut down your Windows domains.
