===== What? =====

LVM thin provisioning has been available in the Linux kernel for a few releases now; Fedora 17 and RHEL 6.3 support it. Why this is useful:
  * When creating usual LVM volumes, the space is taken from the volume group and is no longer available to other volumes.
  * For thin LVM volumes, the space is only allocated when data is actually stored.
  * It is also possible to free up space in a thin LVM volume, which then becomes available to other volumes again.

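Before going through the LVM commands, the allocate-on-write behaviour can be illustrated with plain sparse files (a rough file-level analogy, not LVM itself): the apparent size is fixed up front, but real block usage only grows when data is written.

```shell
# Sparse-file analogy for thin provisioning (not LVM itself):
# apparent size is reserved "virtually", blocks are allocated on write.
f=$(mktemp)
truncate -s 100M "$f"                 # 100 MiB apparent size, ~0 blocks used
before=$(du -k "$f" | cut -f1)        # real usage in KiB before writing
dd if=/dev/urandom of="$f" bs=1M count=10 conv=notrunc status=none
after=$(du -k "$f" | cut -f1)         # real usage after writing 10 MiB
apparent=$(stat -c %s "$f")           # apparent size is unchanged
echo "apparent=${apparent} before=${before}K after=${after}K"
rm -f "$f"
```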
===== demonstrating usage =====
First let's install the utilities required to manage thin volumes.
<code>
# yum install device-mapper-persistent-data
</code>
Now we create a sparse file and use it as the backend for testing thin provisioning, then create a PV and a volume group on it.
<code>
# dd if=/dev/zero of=sparsefile bs=1 count=1 seek=5G
1+0 records in
1+0 records out
1 byte (1 B) copied, 4.3124e-05 s, 23.2 kB/s

# losetup -f sparsefile

# pvcreate /dev/loop0
  Writing physical volume data to disk "/dev/loop0"
  Physical volume "/dev/loop0" successfully created

# vgcreate test /dev/loop0
  Volume group "test" successfully created
</code>
Now we create the pool. It is a special kind of volume; the thin provisioned volumes will be stored inside the space allocated for the pool. As a parameter we need the size to allocate for the pool, here given in physical extents.
<code>
# lvcreate --thinpool pool -l 1200 test
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "pool" created
</code>
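The 1200 extents work out to the pool size that lvs reports further down, assuming the vgcreate default physical extent size of 4 MiB:

```shell
# Pool size arithmetic (assuming the default 4 MiB physical extent size)
extents=1200
extent_mib=4
pool_mib=$((extents * extent_mib))                       # 4800 MiB
pool_gib=$(awk -v m="$pool_mib" 'BEGIN { printf "%.2f", m / 1024 }')
echo "pool: ${pool_mib} MiB = ${pool_gib} GiB"
```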
We now create a thin volume with a virtual size of 512M. The virtual size can even be larger than the size of the pool volume.
<code>
# lvcreate -T test/pool -V 512M
  Logical volume "lvol0" created
</code>
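Since the virtual size is independent of the pool size, the pool can be overcommitted. A quick sanity check with the numbers from this example (the volume count is hypothetical):

```shell
pool_mib=4800        # pool size from the lvcreate above
vol_mib=512          # virtual size of one thin volume
count=10             # hypothetical number of such thin volumes
virtual_mib=$((count * vol_mib))
ratio=$(awk -v v="$virtual_mib" -v p="$pool_mib" 'BEGIN { printf "%.2f", v / p }')
echo "virtual=${virtual_mib} MiB  pool=${pool_mib} MiB  overcommit=${ratio}"
# ten such volumes already promise 5120 MiB against a 4800 MiB pool
```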
Now let's create a file on the volume and watch the space getting occupied in the pool volume.
<code>
# mkfs.ext4 /dev/test/lvol0
[...]
# mkdir -p /mnt/tmp
# mount -o discard /dev/test/lvol0 /mnt/tmp
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize  Meta%  Data         Meta         Pool
  lvol0 Vwi-aotz 512.00m   4.79                                         pool
  pool  twi-a-tz   4.69g   0.51  8.00m   0.83 [pool_tdata] [pool_tmeta]
# dd if=/dev/urandom of=/mnt/tmp/file bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 14.7862 s, 9.1 MB/s
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize  Meta%  Data         Meta         Pool
  lvol0 Vwi-aotz 512.00m  29.79                                         pool
  pool  twi-a-tz   4.69g   3.18  8.00m   1.61 [pool_tdata] [pool_tmeta]
</code>
So the thin volume has filled up a bit, and so has the pool volume. Let's remove the file; since the filesystem was mounted with the discard option, this informs the lower layers that the blocks have been freed, and the space becomes available in the pool again. Note that an lvs run right after the rm may still show the old usage; shortly afterwards the space has been returned to the pool.
<code>
# rm -f /mnt/tmp/file
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize  Meta%  Data         Meta         Pool
  lvol0 Vwi-aotz 512.00m  29.79                                         pool
  pool  twi-a-tz   4.69g   3.18  8.00m   1.61 [pool_tdata] [pool_tmeta]
# lvs -olv_name,lv_attr,lv_size,data_percent,lv_metadata_size,metadata_percent,data_lv,metadata_lv,pool_lv test
  LV    Attr     LSize   Data%  MSize  Meta%  Data         Meta         Pool
  lvol0 Vwi-aotz 512.00m   4.79                                         pool
  pool  twi-a-tz   4.69g   0.51  8.00m   0.83 [pool_tdata] [pool_tmeta]
</code>

===== monitoring thin volumes =====
When volumes are overprovisioned you probably want to monitor how much space is really free in the pool volume. This can be done:
  * via dmeventd, which can send a notification when e.g. 80% of the pool is used
  * via wrappers that evaluate the fields in the output of 'lvs'
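A minimal sketch of the wrapper approach; the threshold and pool name are examples, and the sample value stands in for the real lvs call (which needs root) shown in the comment:

```shell
threshold=80
# In real use: used=$(lvs --noheadings -o data_percent test/pool)
used="  83.50"                                            # sample lvs value
pct=$(printf '%s\n' "$used" | awk '{ printf "%d", $1 }')  # integer percent
if [ "$pct" -ge "$threshold" ]; then
    msg="WARNING: thin pool ${pct}% full"
else
    msg="OK: thin pool ${pct}% full"
fi
echo "$msg"
```

From here, hook the script into cron or a monitoring system of your choice.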
===== new laptop setup =====
  * At the start of the disk, 3 partitions of 12GB each, for having multiple distros installed in parallel
  * extended partition containing:
    * swap, encrypted home
    * ...and a big partition used as an LVM PV
  * volume group, inside it a big thin pool volume, inside that:
    * thin volume 'storage': thin provisioned, 400GB size
    * several thin volumes for KVM machines

The thin volume 'storage' carries an ext4 filesystem, so 400GB are seen as usable. Files are stored here; when they are removed, their space immediately becomes available in the pool volume again - to be used for the KVM machines. The KVM machines get 8GB virtual size each, yet they too only use up pool space where they have actually written.
software/lvm_thin_provisioning.txt ยท Last modified: 2015/08/11 20:32 (external edit)