LVM thin provisioning has been in the Linux kernel for a few releases now; Fedora 17 and RHEL 6.3 support it. Why is this useful? Thin volumes only take up space in their pool as data is actually written to them, so storage can be overprovisioned, as we will see below.
First, let's install the utilities required to manage thin volumes.
# yum install device-mapper-persistent-data
Now we create a sparse file and use it as the backing device for testing thin provisioning, then create a PV and a volume group on top of it.
# dd if=/dev/zero of=sparsefile bs=1 count=1 seek=5G
1+0 records in
1+0 records out
1 byte (1 B) copied, 4.3124e-05 s, 23.2 kB/s
# losetup -f sparsefile
# pvcreate /dev/loop0
  Writing physical volume data to disk "/dev/loop0"
  Physical volume "/dev/loop0" successfully created
# vgcreate test /dev/loop0
  Volume group "test" successfully created
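Note that losetup -f simply attaches the file to the first free loop device; if other loop devices might already be in use on your system, it is worth checking what got attached where before creating the PV:

# losetup -a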
Now we create the pool. It is a special kind of volume: the thin provisioned volumes will live inside the space allocated to the pool. As a parameter we need to give the size to allocate for the pool, here 1200 physical extents.
# lvcreate --thinpool pool -l 1200 test
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "pool" created
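Incidentally, the pool itself is backed by two hidden volumes, one for data and one for metadata; they show up in brackets as [pool_tdata] and [pool_tmeta] in the listings below. To see them explicitly, ask lvs for all volumes:

# lvs -a test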
We now create a thin volume with a virtual size of 512M. The virtual size may even exceed the size of the pool volume; that is the overprovisioning case.
# lvcreate -T test/pool -V 512M
  Logical volume "lvol0" created
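To actually overprovision, just ask for more virtual space than the 4.69g pool holds; a minimal sketch (the volume name 'big' is only for illustration):

# lvcreate -T test/pool -V 20G -n big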
Now let's create a file on the volume and watch the space getting occupied in the pool volume.
# mkfs.ext4 /dev/test/lvol0
[...]
# mkdir -p /mnt/tmp
# mount -o discard /dev/test/lvol0 /mnt/tmp
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize Meta%  Data          Meta          Pool
  lvol0 Vwi-aotz 512.00m   4.79                                          pool
  pool  twi-a-tz   4.69g   0.51 8.00m   0.83 [pool_tdata]  [pool_tmeta]
# dd if=/dev/urandom of=/mnt/tmp/file bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 14.7862 s, 9.1 MB/s
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize Meta%  Data          Meta          Pool
  lvol0 Vwi-aotz 512.00m  29.79                                          pool
  pool  twi-a-tz   4.69g   3.18 8.00m   1.61 [pool_tdata]  [pool_tmeta]
So the thin volume has filled up a bit, and so has the pool volume. Let's remove the file; since we mounted with the discard option, the filesystem informs the lower layers that the blocks were freed, and after a moment the space shows up as available again in the pool.
# rm -f /mnt/tmp/file
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize Meta%  Data          Meta          Pool
  lvol0 Vwi-aotz 512.00m  29.79                                          pool
  pool  twi-a-tz   4.69g   3.18 8.00m   1.61 [pool_tdata]  [pool_tmeta]
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize Meta%  Data          Meta          Pool
  lvol0 Vwi-aotz 512.00m   4.79                                          pool
  pool  twi-a-tz   4.69g   0.51 8.00m   0.83 [pool_tdata]  [pool_tmeta]
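The discard mount option used above is what makes this work: ext4 passes freed blocks down to the thin pool as files are deleted. On a filesystem mounted without it, the space can still be reclaimed manually with fstrim from newer util-linux releases, roughly like this:

# fstrim -v /mnt/tmp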
When volumes are overprovisioned, you will probably want to monitor how much space is really free in the pool volume. A setup like this can happen:
The thin volume 'storage' carries an ext4 filesystem, and 400GB appear usable there. Files are stored on it, and when they are removed, their space immediately becomes available again in the pool volume, where it can be used for KVM machines. The KVM machines each get 8GB of virtual size, yet they too only use up the space in the pool that they have actually written to.
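A minimal monitoring sketch along these lines, assuming a cron-driven shell script and an 80% warning threshold (both just illustrative choices):

#!/bin/sh
# report usage of the thin pool's data and metadata space
USED=$(lvs --noheadings -o data_percent test/pool | tr -d ' ')
META=$(lvs --noheadings -o metadata_percent test/pool | tr -d ' ')
# compare the integer part of the percentages against the threshold
if [ "${USED%%.*}" -ge 80 ] || [ "${META%%.*}" -ge 80 ]; then
    echo "thin pool test/pool: data ${USED}% full, metadata ${META}% full"
fi

Run from cron, the echo output would end up in root's mail.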