Where should the domU kernel be located: on dom0, or inside the domU directly?
Two places are common:
1) domU kernel placed directly on the dom0 filesystem; the 'kernel' option in the xen config
   for the domU is used then. Only possible for paravirtualized domUs.
     pros: - kernel is directly reachable from dom0
     cons: - the domU depends on files outside of its disk image, so you have to
             keep track of which domU uses which kernel file
           - upgrading the domU kernel is a bit more complicated: kernel, a possibly
             existing initrd and the modules directory have to be kept in sync

2) domU kernel placed inside the domU disk image. Works for both HVM and paravirtualized
   domUs, and is what one mostly sees nowadays. The kernel is located and booted by pygrub
   (or by a script that mounts the partition, copies the kernel over to dom0 and starts it then).
     pros: - easy updating, i.e. just a 'yum update' inside the domU updates
             kernel, initrd and modules, and the new kernel is booted on the next domU boot
     cons: - potential security weaknesses: dom0 blindly parses data that can be changed
             from inside the domU. This has already happened, see CVE-2007-4993.
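A minimal sketch of how the two variants look in a domU config file (kernel version, paths and volume names are just examples):

# variant 1: kernel lives on the dom0 filesystem (paravirt only)
kernel  = "/boot/vmlinuz-2.6.18-xenU"
ramdisk = "/boot/initrd-2.6.18-xenU.img"
root    = "/dev/xvda1 ro"

# variant 2: kernel lives inside the domU disk image, located by pygrub
bootloader = "/usr/bin/pygrub"
disk = [ 'phy:/dev/vg0/domu1-disk,xvda,w' ]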
Does live migration require a cluster-aware filesystem?

Not necessarily, there are other ways as well. Basically all that is needed is that the dom0s taking part in the migration have read/write access to the domU disks. This is mostly done in one of these ways:

  • Partition: Simplest variant. Here the dom0s have access to devices like e.g. 7 GB LUNs exported from a SAN storage, or 5 GB LUNs exported via iSCSI from an iSCSI target. Good thing: it is easy. Bad thing: it is not very flexible when it comes to enlarging the disks.
  • LVM: Here the dom0s have access to at least one block device, i.e. a SAN LUN or iSCSI LUN. That storage is made usable with the usual LVM2 layers PhysicalVolumes, VolumeGroups and LogicalVolumes (or the appropriate layers used by EVMS-lvm); the domU disks live in LogicalVolumes here (see the sketch after this list). Important: a vanilla lvm2 won't do here, the LVM has to be cluster-aware. CLVM (ClusterLVM) based on Red Hat's cluster manager, or EVMS based on heartbeat, will do. Good thing: it is flexible, you can enlarge disks on the LogicalVolume layer. Bad thing: it is more complex, the nodes have to become cluster nodes running a cluster manager or heartbeat.
  • Clusterfs on commonly shared storage: Here you have one big filesystem shared among the dom0s, and the domU disks live as files in there. GFS from Red Hat, OCFS2 from Oracle and GPFS from IBM may be options. The shared storage accessed by all dom0s can be something like SAN or iSCSI LUNs, or an additional layer like DRBD that lives on local disks of the dom0s.
  • Clusterfs on local storage: Ceph ( https://ceph.io/ ) and AFS could be interesting for this, but I haven't heard of anyone using those as storage for domU disks yet.
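A minimal sketch of the LVM variant, assuming a shared SAN/iSCSI LUN that shows up as /dev/sdb on all dom0s and a clvmd running on every node (device, volume group and volume names are made up):

pvcreate /dev/sdb                     # the shared LUN becomes an LVM physical volume
vgcreate -c y vg_xen /dev/sdb         # -c y marks the volume group as clustered
lvcreate -L 10G -n domu1-disk vg_xen  # one logical volume per domU disk

# in the domU config on every dom0 that may host this guest:
disk = [ 'phy:/dev/vg_xen/domu1-disk,xvda,w' ]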
Doing live migration: but the ext3 or NTFS filesystems in the domUs are not cluster-aware?

They don't need to be. They live in a file or in a block device like a LogicalVolume. On live migration they are carefully handed over from one dom0 to the other; while handing over, neither of the two dom0s is writing. After the handover the domU is unfrozen on the destination dom0, it runs again, and i/o to the disk block device is done by the destination dom0.
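Triggering such a handover with the xm toolstack looks roughly like this (hostname and domU name are made up; xend on both dom0s has to be configured to allow relocation):

xm migrate --live domu1 host2   # push the running guest 'domu1' from this dom0 to dom0 'host2'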

My domU is running, but on which VNC port is it listening?
xm list                          # get the id your domU is running under
xm list -l <id> | grep location  # long listing, grep out something that looks like the vnc port
  # if '(location 127.0.0.1:5903)' shows up here, the vnc display is 5903 - 5900 = 3,
  # so you can e.g. start 'vncviewer :3' on dom0 to connect to the vnc screen
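If the port should be predictable instead of auto-assigned, the display number can be pinned in the domU config; a sketch (display number 3, i.e. port 5903, is just an example):

# HVM domU: emulated graphics card, vnc server on display 3
vnc        = 1
vnclisten  = "127.0.0.1"
vncdisplay = 3

# paravirt domU: virtual framebuffer instead
vfb = [ 'type=vnc,vncdisplay=3' ]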
What about 3D under xen?

To present the 3D on dom0: currently patches are needed to install the nvidia or ati 3D drivers. A patch for an older version of the nvidia driver is here. Intel drivers should need no patching (only DRI in the kernel and proper xorg support are needed), but their performance is worse than nvidia/ati.

For the client side one can run OpenGL applications in virtual machines, be they para- or HVM Xen domUs, qemu, kvm or vmware. The software for this is vmgl. Modification of both the host (i.e. dom0) and the VM (i.e. domU) is needed. For DirectX one could theoretically use wine in a virtual machine that uses vmgl; currently nothing is known for a 3D-accelerated windows VM.
Also dedicating a PCI graphics card to a para-domU could work: there have been reports on @xen-users where dedicating the card worked and, after configuring Xorg (-sharevts), X starts up, but the cpu is at 100% usage and the result is thus unusable.

What devices can I use from the domU?

On a paravirtualized domU (which can basically run on any computer from i686 upwards) you can access

  • network-devices, those specified in the domU config file. performance: unmatched
  • block-devices, dom0 accesses these as SAN LUNs, disk partitions, LVM volumes, iSCSI volumes or the like. The domU just uses the para block driver and has no clue whether the blocks are SAN, iSCSI etc. performance: unmatched
  • pci-devices, you can mask PCI devices from being used in dom0 and hand them over to a domU, which then needs the appropriate device driver to use the device. Used for network cards, DVB-T cards, sound cards etc. (see the sketch after this list). performance: unmatched
  • scsi-devices, once the pvSCSI driver is completed, SCSI devices can be handed over as well
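A minimal sketch of handing a PCI device over to a paravirt domU, assuming pciback is available in the dom0 kernel (the PCI address 0000:03:00.0 is just an example, and the device must no longer be bound to a dom0 driver when pciback grabs it):

# dom0: hide the device from dom0 and hand it to pciback
# (alternatively on the dom0 kernel command line: pciback.hide=(0000:03:00.0))
modprobe pciback
echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

# domU config: pass the device through
pci = [ '0000:03:00.0' ]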

HVM domUs require appropriate hardware (SVM from AMD or VT from Intel) and allow accessing

  • network-devices emulated: these are completely emulated by a layer of qemu code and presented as tap devices in dom0. The domU OS just has to support the emulated network device (see the config sketch after this list). performance: bad
  • network-devices paradriver: here the domU OS has a para-driver to hand data over to dom0. performance: medium
  • block-devices emulated: everything dom0 can access can be handed over to the domU, but an emulation layer presenting emulated hardware to the domU is used. performance: bad
  • block-devices paradriver: here the domU OS has a para-driver to hand data over to dom0. performance: medium
  • usb-devices: these can be handed over to domUs
  • pci-devices: xen-unstable can hand these over to HVM domUs if the system runs on VT-d capable cpu(s), an I/O-related addition to VT found in the newest cpus
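A sketch of the device-related pieces of an HVM domU config with emulated devices (paths, image name, NIC model and bridge name are examples and distribution-dependent); with paradrivers installed inside the guest, disk and network i/o bypass this emulation and reach the 'medium' performance class mentioned above:

builder      = 'hvm'
kernel       = '/usr/lib/xen/boot/hvmloader'
device_model = '/usr/lib/xen/bin/qemu-dm'
disk = [ 'file:/var/lib/xen/images/winxp.img,hda,w' ]   # emulated IDE disk
vif  = [ 'type=ioemu, model=rtl8139, bridge=xenbr0' ]   # emulated realtek NIC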