Xen
We currently run Xen on Pumpkin. Xen is a system that allows multiple operating systems to run on the same hardware. The correct terminology is a "domain0" (dom0) host system, which has direct control of the hardware via the "hypervisor", and "domU" guest systems, which can be either para-virtualized (faster, but requires a special Xen-aware kernel) or fully-virtualized (slower, because all the hardware has to be emulated).
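To see what is running on the dom0, use "xm list". The output below is only illustrative (the guest name is taken from the examples on this page; memory sizes and times are made up):

xm list
Name                ID Mem(MiB) VCPUs State   Time(s)
Domain-0             0     2048     4 r-----   5123.4
corn                 3     2048     2 -b----     87.2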
Xen Documentation sources
- Xen project documentation
- RHEL virtualization front page
- These may also come in handy:
Creating a new Virtual Host from a previously installed disk
For Fully Virtualized Systems:
It seems one can do the following to create a clone of a physical system as a virtual host. This still needs more thorough testing!
- Stick the disk with the operating system on it in slot 23 or 24.
- Create a new fully virtualized host (a virt-install command-line equivalent is sketched after this list):
- In virtual machine manager, click create new.
- Choose a name, as in: VHost23_sde
- Choose a fully virtualized host; this allows more flexibility for the kernels, etc.
- Choose to install from the image at /data1/. This is a RHEL5 image. Choose the operating system version.
- Choose the disk: /dev/sde or /dev/sdf
- Set ethernet to xenbr1
- Set memory (probably 1024) and cpus (probably 2)
- Save config (click finish)
- You can now boot the virtual system. (It will do this automatically.) When prompted, do NOT install a new system (idiot!); instead type "linux rescue".
- When the rescue boots, it will look for installed operating systems.
- From the rescue console you can figure out what the hardware signature is. Now some files on the operating system need to be adapted to the new (temporary) hardware signature. You could also do this ahead of time by mounting the disks on pumpkin and modifying the files there.
Backup all files you modify to a *_Physical version, so you can undo this before sticking it back into the physical system. Keep track of your changes on the wiki!
Hard disk will be /dev/hda*:
- Modify grub.conf (or use LABEL=ROOT and label your partition); with LVM systems you should be OK.
- A problem with labels is that they don't "stick": it seems that if you made the label while the disk was mounted on the host, it is not seen while mounted on the guest. You need to do these things from the guest operating system. This is also true for installing (re-installing) grub.
- Modify /etc/fstab (probably change /dev/sda* to /dev/hda*).
Ethernet:
- Modify /etc/modules.conf or /etc/modprobe.conf and alias eth0 8139cp (Realtek 8139cp driver). Needs test.
- Same for eth1.
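As an example (hypothetical partition layout and kernel version; match these to what the original system actually used), the modified files might end up looking like this:

/etc/fstab (sd* devices become hd*, or use labels):
  /dev/hda1   /boot   ext3   defaults   1 2
  /dev/hda3   /       ext3   defaults   1 1
  /dev/hda2   swap    swap   defaults   0 0

/etc/modprobe.conf (point the interfaces at the emulated Realtek NIC):
  alias eth0 8139cp
  alias eth1 8139cp

/boot/grub/grub.conf kernel line (root= must match your new device or label):
  kernel /vmlinuz-<version> ro root=/dev/hda3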
- Exit the console, or shut down the machine. Now add another Ethernet card to the config, hooking it up to xenbr0.
- Restart your virtual system. Make sure all VM disks are unmounted from pumpkin
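The GUI steps above can also be approximated from the command line with virt-install (from the python-virtinst package). This is a sketch, not a tested recipe; the name, memory, disk, and ISO path are examples from this page, and the exact option syntax may differ between versions:

virt-install --hvm --vnc \
  --name=VHost23_sde \
  --ram=1024 --vcpus=2 \
  --file=/dev/sde \
  --cdrom=/data1/rhel5_i386.iso \
  --network bridge:xenbr1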
For the "FullVirt24_sdf" system I ran into an difficult problem: the initrd was no good. There was no way to "repair" this, since it needs a booted system to create a new initrd. I guess I could get an initrd from elsewhere and put that on the /boot with matching kernel. Instead I decided to reinstall RHEL4, calling this sytstem "Landau".
Creating a new VM from a tar file
In this case, we have a system backed up to a tar file (or several tar files). How do we create a domU out of this?
Step one, prepare a new disk.
Either use one of the "System?" disks, which run from the RAID array, or use a disk in slot 23 or 24. We need to partition the disk in the same way as the original; this is easier if you didn't have LVM. Use "fdisk /dev/sdx" to create the partitions (including a swap partition) and mark the first partition as bootable. Next, format the partitions (mke2fs -j /dev/sdx1; mke2fs -j /dev/sdx3) and make the swap space (mkswap /dev/sdx2). Then label the partitions as they were on the original system (probably "e2label /dev/sdx3 /" and "e2label /dev/sdx1 /boot").
Next, mount the drives on the host and untar the saved operating system, then unmount the drives!
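Put together, the disk preparation and restore might look something like this (a sketch; /dev/sdx, the partition layout, the mount point, and the tar file name are placeholders):

fdisk /dev/sdx                    # create sdx1 (/boot), sdx2 (swap), sdx3 (/); mark sdx1 bootable
mke2fs -j /dev/sdx1               # ext3 on the /boot partition
mke2fs -j /dev/sdx3               # ext3 on the / partition
mkswap /dev/sdx2                  # initialize swap
e2label /dev/sdx3 /               # restore the labels the original system used
e2label /dev/sdx1 /boot
mount /dev/sdx3 /mnt/tmp          # mount the new root on the host
mount /dev/sdx1 /mnt/tmp/boot
cd /mnt/tmp && tar xpf /path/to/system_backup.tar   # placeholder tar file name
umount /mnt/tmp/boot
umount /mnt/tmp                   # unmount before booting the domU!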
Now make the installation available for NFS export. Here is how:
mount -t iso9660 -o ro,loop=/dev/loop0 /data1/rhel5_i386.iso /mnt/iso
exportfs -o no_root_squash,sync *:/mnt/iso
We now need to create a Xen config file that will boot from the CD in rescue mode. This can be done with the following example file. Note that you will need to change the disk line according to your device.
name = "corn" uuid = "30ae2c1d07d9ea094c7e0d7d8ea63b72" maxmem = 4096 memory = 2048 vcpus = 2 # bootloader = "/usr/bin/pygrub" kernel="/var/lib/xen/images/64/vmlinuz" ramdisk="/var/lib/xen/images/64/initrd.img" extra=" rescue" on_poweroff = "destroy" on_reboot = "restart" on_crash = "restart" vfb = [ "type=vnc,vncunused=1,keymap=en-us" ] disk = [ "phy:/dev/sdb,xvda,w" ] vif = ["mac=00:16:3e:0e:b4:0d,bridge=xenbr1" ]
Start your domU with: "xm create corn"
Pop up the console (the easiest way is from virt-manager). OK the first questions, then choose NFS image for the installation source. Configure the "xenbr1" Ethernet interface (usually eth1, or eth0; check your config) with an IP address (10.0.0.26/255.255.255.0) and use 10.0.0.248 as the DNS server.
On the next screen, use "pumpkin" for the server, and "/mnt/iso" for the distribution location, click continue.
If you are lucky, the system will find your system drive and mount it. If not, do "fdisk -l" to see what disks you have, and mount them by hand. Check to make sure the mounted system has a proper /dev! If not, you can do: cd /; tar cf - /dev | (cd /mnt/sysimage; tar xvf -).
Here I ran into problems: I could not get "grub" installed correctly on the new disk. (What are the grub errors, if it is loading grub at all?)
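One thing worth trying from the rescue environment (untested here; the partition numbers and device names are only examples): mount the installed system by hand if rescue did not find it, chroot into it, and re-run grub-install against the disk as the guest sees it.

fdisk -l                              # see what disks/partitions the domU was given
mkdir -p /mnt/sysimage
mount /dev/xvda3 /mnt/sysimage        # example: root on the third partition
mount /dev/xvda1 /mnt/sysimage/boot   # example: /boot on the first partition
chroot /mnt/sysimage
grub-install /dev/xvda                # or /dev/hda for a fully virtualized guest
# If grub-install complains, the grub shell can do it manually:
#   grub
#   root (hd0,0)
#   setup (hd0)
#   quit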
Doing a new install for a para-virtualized system: you can use virt-manager and click on "New". It creates the following config file:
name = "corn" uuid = "5c0b168b78b987e2a19bf0883744225a" maxmem = 2048 memory = 2048 vcpus = 2 kernel = "/var/lib/xen/virtinst-vmlinuz.UcImzS" ramdisk = "/var/lib/xen/virtinst-initrd.img.kkWMzV" extra = " method=/data1/rhel-5.1-server-i386-dvd.iso" on_poweroff = "destroy" on_reboot = "destroy" on_crash = "destroy" vfb = [ "type=vnc,vncdisplay=1,keymap=en-us" ] disk = [ "phy:/dev/sdb,xvda,w", "file:/data1/rhel-5.1-server-i386-dvd.iso,xvdb,r" ] vif = [ "mac=00:16:3e:6c:e9:7d,bridge=xenbr1", "mac=00:16:3e:6c:e9:7d,bridge=xenbr1,script=vif-bridge", "mac=00:16:3e:70:71:6a,bridge=xenbr0,script=vif-bridge" ]
Xen Tricks and Debugging
See also the Troubleshooting section in the RH Virtualization Guide.
- Logs
- Xen sends logs to /var/log/xen/... These are not always very useful, but can give a hint about what happened.
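For example, to watch the main xend log while reproducing a problem (assuming the standard RHEL5 location):

tail -f /var/log/xen/xend.log     # xend-debug.log in the same directory sometimes has more detail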
Rescue Methods
There are three ways I have found so far in which a Xen system can boot. Each method has its own rescue procedure.
If you have a fully virtualized machine (hint: you have a kernel = "/usr/lib/xen/boot/hvmloader" line in the config file), you can boot off the ISO image directly. To do this, change the following in /etc/xen/<config file>:
# boot="c" boot="d" # disk = [ 'phy:/dev/sdf,hda,w'] disk = [ 'phy:/dev/sdf,hda,w','file:/data1/rhel-5.1.-server-i386-dvd.iso,hdc:cdrom,r']
Then run "xm create hvmconfig_file" to load your changes and boot the new config. It will boot from the cdrom image.
This seems to work! Remember to change boot="d" back to boot="c" to boot from your disk. Also note: comments are not preserved when you use GUIs to change configuration lines, so back up the original config file.
If you are using a para-virtualized system, this trick does not work. There is another trick, which is similar to what you do when you need to install a system on a para-virtualized domU. First, you need to make the ISO available as an expanded directory tree, exported over NFS. There are two ways to do this; in either case, mount the iso first (hint: mount -t iso9660 -o ro,loop=/dev/loop6 <path and iso-file-name> /mnt/iso). You can then either copy the contents of the iso to a directory and export that directory (note: it is perfectly OK to export a directory that is itself on a disk that is already exported as something else), OR export /mnt/iso itself (yes, that is supposed to work). Now copy vmlinuz and initrd.img from the images/xen directory to an accessible place (or leave them where they are), and then modify the /etc/xen/<config file> to use that kernel (these preparation steps are sketched as commands further below):
# bootloader = "/usr/bin/pygrub"
kernel = "/var/lib/xen/images/vmlinuz"
ramdisk = "/data1/xen_store/boot/initrd.img"
extra = " rescue"
Start the domU (xm create <config file>) and pop up the console. You now need to tell it where the NFS mountable install stuff is.
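For reference, the preparation described above might look like this (a sketch; the ISO name, loop device, and destination paths are taken from the examples on this page, and the exportfs client wildcard may need to be narrowed for your network):

mount -t iso9660 -o ro,loop=/dev/loop6 /data1/rhel-5.1-server-i386-dvd.iso /mnt/iso
exportfs -o ro,no_root_squash,sync *:/mnt/iso       # or copy the tree elsewhere and export that
cp /mnt/iso/images/xen/vmlinuz /var/lib/xen/images/vmlinuz
cp /mnt/iso/images/xen/initrd.img /data1/xen_store/boot/initrd.img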
Xen Tools
I found that a "hard drive on a file" for a system is incredibly slow. Only use it in an emergency!
To open and manipulate such a file, use the kpartx tool. Only do this when the domU is shut down!
>kpartx -l <filename>                 # Show the partitions on the file.
>kpartx -av <filename>                # Create devices using loops corresponding to the partitions.
add map loop2p1 : 0 208782 linear /dev/loop2 63
add map loop2p2 : 0 20257965 linear /dev/loop2 208845
# These loop2p1 etc. can be found in /dev/mapper/loop2p1 etc.
>mount /dev/mapper/loop2p2 /mnt/tmp   # Mount the second partition on /mnt/tmp
..... do your work ....
>umount /mnt/tmp
>kpartx -d <filename>                 # Remove the loop devices again.