Where should the domU kernel be located: on dom0 or inside the domU directly?
Two places are common:
1) domU kernel placed directly on the dom0 filesystem; the 'kernel' option in the xen
   config for the domU is used then. Only possible for a paravirt domU.
     pros: - kernel is directly reachable from dom0
     cons: - domU depends on files outside of its disk image, so you have to
             keep track of which domU uses which kernel file
           - upgrading the domU kernel is a bit more complicated: kernel, any
             existing initrd and the modules directory have to be kept in sync

2) domU kernel placed inside the domU disk image. Works for both HVM and paravirt domUs
   and is what one mostly sees nowadays. The kernel is located and booted by pygrub (or by
   a script that mounts the partition, copies the kernel out to dom0 and starts it from
   there). Config sketches for both placements follow after this list.
     pros: - easy updating, i.e. just a 'yum update' inside the domU updates the
             kernel, initrd and modules, and the new kernel is booted on the next domU boot
     cons: - potential security weaknesses: dom0 blindly parses and executes data that can
             be changed from inside the domU. This has already happened, see CVE-2007-4993
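A minimal sketch of both placements as xm config snippets; the kernel version, volume group and device names are assumptions, adjust them to your installation:

  # 1) kernel on the dom0 filesystem (paravirt only)
  kernel  = "/boot/vmlinuz-2.6.18-xenU"       # example path on dom0
  ramdisk = "/boot/initrd-2.6.18-xenU.img"    # optional initrd, also on dom0
  disk    = [ 'phy:/dev/vg0/domu1,xvda,w' ]
  root    = "/dev/xvda1 ro"

  # 2) kernel inside the domU disk image, found and booted by pygrub
  bootloader = "/usr/bin/pygrub"
  disk       = [ 'phy:/dev/vg0/domu1,xvda,w' ]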
Does live migration require a cluster-aware filesystem?

Not necessarily, there are also other ways. Basically all that's needed is that the dom0s taking part in the migration have read/write access to the domU disks; this is usually done with some form of shared storage that both dom0s can see. A sketch of such a disk line follows below.
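A minimal sketch of a shared disk line in the domU config, assuming '/dev/mapper/shared-lun' is a placeholder for a block device (e.g. a SAN or iSCSI LUN) that is visible on every participating dom0:

  # the very same block device is accessible from all participating dom0s,
  # so the domU can be handed over without copying its disk
  disk = [ 'phy:/dev/mapper/shared-lun,xvda,w' ]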

Doing live migration: but the ext3 or ntfs filesystems in the domUs are not cluster-aware?

They don't need to be. They live in a file or a block device such as a logical volume. On live migration they are carefully handed over from one dom0 to the other; during the handover neither of the two dom0s is writing. After the handover the domU is unfrozen on the destination dom0, it runs again, and I/O to the disk block device is done by the destination dom0.
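A minimal sketch of such a migration, assuming 'mydomu' and 'dom0-b' are placeholder names, that both dom0s see the same domU disks, and that the xend relocation server is enabled on the destination:

  # on the source dom0: freeze, transfer and unfreeze the domU on dom0-b
  xm migrate --live mydomu dom0-b
  # afterwards 'xm list' on dom0-b shows the domU, the source dom0 no longer does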

My domU is running, but on which vnc port is it listening?
xm list # get the id your domU is running under
xm list -l <id> | grep location # get the long listing and grep out the line with the vnc port
  # if e.g. 'location 127.0.0.1:5903' shows up here, the display number is port - 5900,
  # so you can start 'vncviewer :3' on dom0 to connect to the vnc screen.
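A small sketch that pulls the port out automatically; it assumes the vnc server listens on 127.0.0.1 and that 'mydomu' is an example name:

  # extract the vnc port from the long listing and start the matching viewer on dom0
  port=$(xm list -l mydomu | sed -n 's/.*(location 127\.0\.0\.1:\([0-9]*\)).*/\1/p' | head -n 1)
  vncviewer :$(( port - 5900 ))    # vnc display number = port - 5900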
What about 3D under xen?

To present the 3D on dom0: currently patches are needed to install the nvidia or ati 3D drivers. A patch for an older version of the nvidia driver is here. Intel drivers should need no patching (only DRI in the kernel and proper xorg support), but their performance is worse than nvidia/ati.

On the client side one can run OpenGL applications in virtual machines, be they para- or HVM xen domUs, qemu, kvm or vmware. The software for this is vmgl. Modification of both the host (i.e. dom0) and the vm (i.e. domU) is needed. For directx one could theoretically use wine in a virtual machine that uses vmgl. Currently nothing is known about a 3D-accelerated windows vm.
Dedicating a pci graphics card to a para-domU could also work (a config sketch follows below); there have been reports on @xen-users where dedicating the card worked and, after configuring Xorg (-sharevts), X starts up, but the cpu sits at 100% usage and the setup is thus unusable.
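A minimal sketch of such a pci dedication, assuming pciback is built into the dom0 kernel and '01:00.0' is a placeholder for the card's pci id:

  # dom0 kernel command line: hide the card from dom0 so it can be handed to a domU
  pciback.hide=(01:00.0)

  # domU config: pass the hidden device through to the guest
  pci = [ '01:00.0' ]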

What devices can I use from the domU?

On a paravirtualized domU (which can run on basically any computer from i686 up, no virtualization extensions needed) you can access

HVM domUs require appropriate hardware support (SVM from AMD or VT from Intel) and allow accessing