Use External Disk for Caching

An external disk, such as an SSD or NVMe drive, can be used to cache an existing LVM logical volume. In combination with qemu/kvm virtualization, this can speed up virtual machines considerably as well as reduce the pressure on mechanical storage.

Assuming that a physical volume, a volume group and a few logical volumes that need to be cached already exist, the following commands can be used to attach an SSD or NVMe drive and enable caching for the individual logical volumes.
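
Before creating anything on the new drive, it is worth double-checking that it really is the intended, unused device; the device name /dev/nvme0n1 used throughout this article has to be adjusted to match the actual machine. A minimal sanity check could be:

# list block devices and any filesystem signatures on the new drive
lsblk -f /dev/nvme0n1
# print existing signatures (if any) without erasing anything
wipefs /dev/nvme0n1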

The physical disk layout is the following (obtained by issuing the pvs command):

  PV           VG  Fmt  Attr PSize    PFree
  /dev/sda2    vms lvm2 a--   800.00g <410.20g

revealing the slow drive (the origin drive), /dev/sda2.

Now, the NVMe drive is used to create a new LVM physical volume:

pvcreate /dev/nvme0n1

resulting in the following configuration:

  PV           VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1 vms lvm2 a--  <476.94g <476.94g
  /dev/sda2    vms lvm2 a--   800.00g <410.20g

Note that the entire NVMe drive has been used instead of partitioning the NVMe drive (/dev/nvme0n1 instead of /dev/nvme0n1p1), but that is a matter of choice and a partition can be created instead and used for caching, as sketched below.
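
As an illustration of the partitioned variant, a single partition spanning the whole drive could be created and used as the physical volume instead; this is only a sketch, assuming the same device name and a GPT label:

# create a GPT label and one partition covering the entire drive
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart lvmcache 1MiB 100%
parted -s /dev/nvme0n1 set 1 lvm on
# turn the partition, rather than the whole drive, into a physical volume
pvcreate /dev/nvme0n1p1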

Next, it is assumed that a volume group (vms) already exists (command issued: vgs):

  VG  #PV #LV #SN Attr   VSize VFree
  vms   2  11   0 wz--n- 1.15t <648.46g

and the volume group must be extended over the new NVMe physical volume:

vgextend vms /dev/nvme0n1
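
A quick sanity check, assuming the volume group is still named vms, is to list it again and confirm that it now spans two physical volumes:

# pv_count should now report 2 and vg_free should have grown accordingly
vgs -o vg_name,pv_count,vg_size,vg_free vms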

Now the setup is ready for cache pools to be created for the existing logical volumes. Assume the following LVM layout for logical volumes obtained via the lvs command:

  LV                         VG  Attr       LSize   Pool                     Origin                   Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm              vms -wi-a-----  20.00g
  shinobi.vm           vms -wi-a-----  30.00g

where two logical volumes exist: find.vm with a size of 20G and shinobi.vm with a size of 30G.

To create a cache for find.vm, first two new logical volumes are created, a cache data volume and a cache metadata volume, using the following commands, in order:

lvcreate -L 20G -n find.vm-cache vms /dev/nvme0n1

where -L 20G sets the size of the cache data volume, -n find.vm-cache sets its name, vms is the volume group and /dev/nvme0n1 restricts the allocation to the NVMe drive, followed by:

lvcreate -L 20M -n find.vm-cache-meta vms /dev/nvme0n1

where the same options create the much smaller cache metadata volume find.vm-cache-meta (the metadata volume only needs to be roughly a thousandth of the size of the cache data volume, with a minimum of 8MiB).

Issuing lvs at this point should reveal the cache volume and cache metadata logical volume along with the rest of the logical volumes:

  LV                         VG  Attr       LSize   Pool                     Origin                   Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm              vms -wi-a-----  20.00g
  find.vm-cache        vms -wi-a-----  20.00g                              
  find.vm-cache-meta   vms -wi-a-----   8.00m   
  shinobi.vm           vms -wi-a-----  30.00g

With the cache data and cache metadata logical volumes created, both are then combined into a cache pool:

lvconvert --type cache-pool --cachemode writethrough --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache
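
As a side note, recent LVM versions can create the cache pool in a single step, replacing the two lvcreate commands and the conversion above; the following is only a sketch of that alternative, reusing the same names and sizes:

# create a 20G cache pool (data plus hidden metadata) directly on the NVMe drive
lvcreate --type cache-pool -L 20G -n find.vm-cache vms /dev/nvme0n1

The cache mode can then be selected with --cachemode when the pool is attached in the next step.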

Now, the logical volume find.vm is converted to attach the cache pool find.vm-cache:

lvconvert --type cache --cachepool vms/find.vm-cache vms/find.vm

Issuing the lvs command should now reveal the cached logical volume:

  find.vm   vms Cwi-aoC---  20.00g [find.vm-cache]   [find.vm_corig]   0.42   13.11           0.00

Now the find.vm logical volume should be sped up by the cache.

The result can be seen by issuing the command:

lvs -a -o +devices
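
Depending on the LVM version, cache usage and hit statistics can also be reported directly by lvs; the field names below should be checked against lvs -o help on the system in question:

# report cache usage and read hit/miss counters for the cached volume
lvs -o +cache_total_blocks,cache_used_blocks,cache_read_hits,cache_read_misses vms/find.vm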

Removing the Cache

To remove the caching, simply delete the cache pool:

lvremove vms/find.vm-cache

and LVM will take care of flushing the cache to the origin logical volume before removing it.
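
Alternatively, newer LVM releases provide an explicit uncache operation that flushes and detaches the cache pool while keeping the origin volume intact; a short sketch, assuming the same volume names:

# flush and remove the cache pool, leaving the origin logical volume in place
lvconvert --uncache vms/find.vm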

Relocating Extents Before Resizing

When attempting to resize a physical volume containing a logical volume, an error may pop up about extents or blocks that are allocated further along, such as cannot resize to 383999 extents as later ones are allocated. Disappointingly for the current year, this happens with graphical interfaces such as gparted as well, which have no excuse for not handling this situation automatically, because it is not difficult, just tedious and requiring manual commands.

The reason is that the actual data on the drive is not written exactly at the start of the partition, or there are holes within the allocated space, whereas shrinking the PV requires that no data is located beyond the new, smaller size, which in practice means the data has to be packed contiguously at the start of the drive.

This can be solved rather easily using pvmove, by moving the data over so that there are no more holes in front of it. First, the pvs command is used to show the contiguous areas of data on the drive (the segments), so that the allocated and free segments can be identified:

pvs -v --segments /dev/sda4

Just as an example, the following data was printed in this case:

  PV         VG  Fmt  Attr PSize    PFree    Start SSize LV    Start Type   PE Ranges            
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g     0 40960           0 free                        
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g 40960 12800 seven     0 linear /dev/sda4:40960-53759
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g 53760  4356           0 free                        

and the interpretation is that the drive, in this case the partition /dev/sda4, contains free space starting at extent 0 with a size of 40960 extents, followed by actual data starting at 40960 with a size of 12800, and finally another free slice starting at 53760 with a size of 4356. What needs to be done in this case is to move the slice of data starting at 40960 with a size of 12800 (namely, the logical volume seven) right to the start of the free region at 0. Since the data slice is only 12800 extents long, it fits comfortably within the 40960 free extents at the start.
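
As a quick illustrative check (the numbers are simply taken from the pvs output above), the move fits because the length of the data segment does not exceed the length of the free region:

# physical extent numbers taken from the pvs --segments output above
SRC_START=40960 SRC_LEN=12800 FREE_START=0 FREE_LEN=40960
# the 12800-extent segment will occupy PEs 0..12799 of the 0..40959 free region
[ "$SRC_LEN" -le "$FREE_LEN" ] && echo "segment fits at PE $FREE_START..$((FREE_START + SRC_LEN - 1))"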

Having understood the above, the command for this particular case will be:

pvmove --alloc anywhere /dev/sda4:40960+12800 /dev/sda4:0+40960

where --alloc anywhere allows the extents to be relocated onto the same physical volume, /dev/sda4:40960+12800 is the source range (12800 physical extents starting at extent 40960, that is, the seven logical volume) and /dev/sda4:0+40960 is the destination range at the start of the drive.

This might be a lengthy process, but after it has completed, the size of the physical volume can be changed with a partition manager.
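
After the move, it is worth confirming that the data now starts at physical extent 0 before shrinking anything, and, if the shrink is performed by hand rather than through gparted, the LVM metadata can be resized first with pvresize; the 50G target below is purely a hypothetical example and has to match the size the partition will be shrunk to:

# the seven segment should now start at physical extent 0
pvs -v --segments /dev/sda4
# optionally shrink the PV itself before shrinking the partition (manual workflow only)
pvresize --setphysicalvolumesize 50G /dev/sda4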