===== some benchmarking =====

== Test tools / What resources to benchmark and how? ==

  * network i/o: netcat -l -p 2222 >/dev/null on the receiving side, dd if=/dev/zero bs=512M count=10 | netcat 10.0.0.1 2222 on the sending side
  * disk i/o, random: iozone, bonnie++
  * disk i/o, sequential/block: time bash -c 'dd if=/dev/zero of=testfile bs=512M count=4; sync;'
  * ram i/o, pure userspace/cpu power: unpacking a kernel tarball to ram, compiling a kernel in ram, ubench

== Testcases ==

  * hvm/nas/net: domU is in a file on attached NFS, HVM mode, no PV drivers
  * hvmp/nas/net: domU is in a file on attached NFS, HVM mode, with PV drivers
  * para/nas/net: domU is in a file on attached NFS
  * para/lvm/san: domU is in a logical volume on SAN
  * para/file/ram: domU is in a file held in dom0 RAM
  * para/lvm/iscsi: domU is in a logical volume on iSCSI
  * para/xfs/iscsi: domU is in a file on an XFS filesystem on iSCSI
  * para/lvm/disc: domU is in a logical volume on the local disk of dom0
  * para/xfs/disc: domU is in a file on an XFS filesystem on the local disk of dom0
  * dom0/xfs/disc: tests run on the dom0 itself (using internal SAS discs)
  * bare/xfs/disc: bare-metal Linux without Xen kernel

== results ==

^ ^ hvm ^ hvmp ^ paravirtualized domU ^^^^^^^ dom0 ^ bare ^
^ ^ nas ^ nas ^ nas ^ lvm ^ file ^ lvm ^ xfs ^ lvm ^ xfs ^ xfs ^ xfs ^
^ ^ net ^ net ^ net ^ san ^ ram ^ iscsi ^ iscsi ^ disc ^ disc ^ disc ^ disc ^
| network throughput domU->dom0 in Mb/s     |    7 |   84 | 230 | 229 | 231 | 221 | 226 | 230 | 231 | 246 | 344 |
| network throughput dom0->domU in Mb/s     |   21 |  122 |  97 | 110 |  96 |  93 |  93 |  98 |  97 | 246 | 344 |
| latency for icmp domU<->dom0 in ms        |   .2 |   .1 |  .1 |  .1 |  .1 |  .1 |  .1 |  .1 |  .1 | .03 | .01 |
| time tar xjf linux-2.6.19.tar.bz2 in s    |   17 |   14 |  25 |  18 |  18 |  17 |  15 |  13 |  18 |  16 |  13 |
| time make -j8 bzImage modules (real in s) | 1241 | 1257 | 561 | 547 | 560 | 555 | 556 | 559 | 559 | 593 | 465 |
| time to write a 2GB blockfile in s        |   17 |   72 |  40 |  16 |   8 |  42 |  48 | 139 |  65 |  36 |  32 |

== notice ==

  * OS was SuSE SLES10 SP1 amd64 with current patches as of September 2007
  * kernel extraction and compilation were done in the RAM of the domU
  * kernel compilation was done with 4 CPUs pinned to the domU/dom0, warm cache and 8GB RAM
  * tap:aio instead of file could not be tested, since the Xen shipped with SLES10 SP1 apparently does not support it
  * here I used the disc the domU was running on for testing; below is a test against a disc without partition table or filesystem, just the block device handed through
  * conclusion: many of the values don't make much sense on their own, and random i/o has not been tested yet. One should benchmark with a specific target environment in mind.

===== some more benchmarking =====

What was benchmarked here: raw i/o throughput. I just wanted to see how much performance overhead a Xen domU has compared to the dom0 under it.

Before testing I filled up a partition on the local hard discs of a host with data and compared this to filling up an LVM volume: both values were the same, as expected.

Commands used for the test:

  # in dom0:
  time bash -c 'dd if=/dev/zero of=/dev/system/lv_ch_perftest bs=512M count=16; sync;'
  # this writes 8GB of data to the logical volume and syncs it

  # in domU, after handing lv_ch_perftest through as xvdb:
  time bash -c 'dd if=/dev/zero of=/dev/xvdb bs=512M count=16; sync;'
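
To make such runs easier to repeat, the dd-plus-sync pair can be wrapped in a small script that also computes the throughput from the elapsed time. This is only a sketch, not part of the original test setup: the device path and write size below are placeholders that mirror the 8GB write to /dev/xvdb above and must be adapted to the environment. Note that the write destroys whatever is on the target device, so only point it at a scratch volume such as the lv_ch_perftest used above.

  #!/bin/bash
  # sketch only: time a sequential write to a block device and report MB/s
  # DEV and SIZE_MB are placeholders -- they mirror the 8GB /dev/xvdb test above
  # WARNING: this overwrites the target device, use a scratch volume only
  DEV=/dev/xvdb
  SIZE_MB=8192
  BS_MB=512
  COUNT=$((SIZE_MB / BS_MB))

  START=$(date +%s)
  dd if=/dev/zero of="$DEV" bs=${BS_MB}M count=$COUNT
  sync
  END=$(date +%s)

  ELAPSED=$((END - START))
  [ "$ELAPSED" -eq 0 ] && ELAPSED=1   # guard against division by zero on very fast runs
  echo "wrote ${SIZE_MB}MB to $DEV in ${ELAPSED}s (~$((SIZE_MB / ELAPSED)) MB/s)"

Timing around dd plus an explicit sync, instead of reading the rate dd itself prints, keeps data still sitting in the page cache from inflating the result; that is the same reason the commands above wrap dd and sync together in one timed bash -c call.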