| benchmark | hvm (nas/net) | hvmp (nas/net) | PV domU (nas/net) | PV domU (lvm/san) | PV domU (file/ram) | PV domU (lvm/iscsi) | PV domU (xfs/iscsi) | PV domU (lvm/disc) | PV domU (xfs/disc) | dom0 (xfs/disc) | bare (xfs/disc) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| network throughput domU→dom0 in Mb/s | 7 | 84 | 230 | 229 | 231 | 221 | 226 | 230 | 231 | 246 | 344 |
| network throughput dom0→domU in Mb/s | 21 | 122 | 97 | 110 | 96 | 93 | 93 | 98 | 97 | 246 | 344 |
| latency for ICMP domU↔dom0 in ms | 0.2 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.03 | 0.01 |
| time for tar xjf linux-2.6.19.tar.bz2 in s | 17 | 14 | 25 | 18 | 18 | 17 | 15 | 13 | 18 | 16 | 13 |
| time for make -j8 bzImage modules (real) in s | 1241 | 1257 | 561 | 547 | 560 | 555 | 556 | 559 | 559 | 593 | 465 |
| time to write a 2GB blockfile in s | 17 | 72 | 40 | 16 | 8 | 42 | 48 | 139 | 65 | 36 | 32 |
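The post doesn't record which tools produced the network and latency rows; a minimal sketch for reproducing them, assuming iperf for the throughput numbers and plain ping for the ICMP latency (the hostname `dom0` is a placeholder):

```
# reproduce the network rows -- iperf/ping are assumptions, the original
# tools are not named here; "dom0" stands for the dom0's hostname
# on dom0: start an iperf server
iperf -s
# on the domU: measure domU->dom0 throughput for 10 seconds
iperf -c dom0 -t 10
# latency row: average round-trip time over 10 ICMP echo requests
ping -c 10 dom0
```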
What was benched here: raw I/O throughput. I just wanted to see how much performance overhead a Xen domU has compared to the dom0 underneath it. Before testing, I filled a partition on local hard discs on one host with data and compared this to filling an LVM volume: both values were the same, as expected. The commands used for testing:
```
# in dom0:
time bash -c 'dd if=/dev/zero of=/dev/system/lv_ch_perftest bs=512M count=16; sync;'
# this writes 8GB of data to the logical volume and syncs it

# in domU, after handing lv_ch_perftest over as xvdb:
time bash -c 'dd if=/dev/zero of=/dev/xvdb bs=512M count=16; sync;'
```
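To turn those timings into a throughput figure: the run writes 16 × 512MB = 8192MB, so dividing by the `real` time from `time` gives MB/s. Timing the whole `bash -c '…; sync;'` deliberately includes the final sync, which dd's own throughput summary would miss. A quick sketch with a made-up elapsed time:

```
# hypothetical: suppose `time` reported real 2m0s (120 s) for the 8GB write
echo "scale=1; 16 * 512 / 120" | bc   # -> 68.2 MB/s sustained, sync included
```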