Use External Disk for Caching

An external disk, such as an SSD or NVME drive, can be used to cache an existing LVM logical volume. In combination with qemu/kvm virtualization, this can speed up virtual machines considerably as well as reduce the pressure on mechanical storage.

Assuming that a physical volume, a volume group and a few logical volumes that should be cached already exist, the following commands can be used to attach an SSD or NVME drive and enable caching for the various logical volumes.

The physical disk layout is the following (obtained by issuing pvs):

  PV           VG  Fmt  Attr PSize    PFree
  /dev/sda2    vms lvm2 a--   800.00g <410.20g

revealing the slow (origin) drive /dev/sda2.

Now, an NVME drive is used to create a new LVM physical volume:

pvcreate /dev/nvme0n1

resulting in the following configuration:

  PV           VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1 vms lvm2 a--  <476.94g <476.94g
  /dev/sda2    vms lvm2 a--   800.00g <410.20g

Note that the entire NVME drive has been used instead of partitioning the NVME drive (/dev/nvme0n1 instead of /dev/nvme0n1p1) but that is a matter of choice and a partition can be created instead and used for caching, as sketched below.
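
A minimal sketch of the partition-based variant, assuming the drive is empty and parted is available (the GPT label and the single full-size partition are just one possible layout):

# create a GPT label and one partition spanning the whole drive (assumed layout)
parted --script /dev/nvme0n1 mklabel gpt mkpart primary 0% 100%
# create the LVM physical volume on the partition instead of on the whole drive
pvcreate /dev/nvme0n1p1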

Next, it is assumed that a volume group (vms) already exists (command issued: vgs):

  VG  #PV #LV #SN Attr   VSize VFree
  vms   2  11   0 wz--n- 1.15t <648.46g

and the volume group must be extended over the new NVME physical disk:

vgextend vms /dev/nvme0n1

Now the setup is ready for a few cache pools to be created for existing logical volumes. Assume the following layout of logical volumes obtained via the lvs command:

  LV                         VG  Attr       LSize   Pool                     Origin                   Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm              vms -wi-a-----  20.00g
  shinobi.vm           vms -wi-a-----  30.00g

where two logical volumes exist: find.vm with a size of 20G and shinobi.vm with a size of 30G.

To create a cache for find.vm, first two new logical volumes are created:

  • one logical volume acting as the actual cache,
  • one logical volume acting as the cache metadata

using the following commands, in order:

lvcreate -L 20G -n find.vm-cache vms /dev/nvme0n1

where:

  • 20G is the cache size - in this case, the cache is chosen to be the total size of the find.vm logical volume,
  • find.vm-cache is a name for the cache volume - this can be any user-chosen name,
  • vms is the name of the volume group containing both the slow physical disk (origin) /dev/sda2 and the fast NVME physical disk /dev/nvme0n1,
  • /dev/nvme0n1 is the path to the NVME physical disk

lvcreate -L 20M -n find.vm-cache-meta vms /dev/nvme0n1

where:

  • 20M is the size of the cache metadata logical volume - per the lvmcache documentation, the metadata volume should be roughly 1/1000 of the size of the cache volume created with the previous command, with a minimum of 8M (see the sketch after this list),
  • find.vm-cache-meta is the name of the cache metadata logical volume - similarly to the previous command creating the cache, it can be any user chosen name,
  • vms is the name of the volume group containing both the slow physical disk (origin) /dev/sda2 and the fast NVME physical disk /dev/nvme0n1,
  • /dev/nvme0n1 is the path to the NVME physical disk
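
As an illustration of that sizing rule (a sketch only, based on the 1/1000 recommendation and the 8M minimum mentioned above), the metadata size can be derived from the cache size in the shell:

# derive the metadata size as roughly 1/1000 of the cache size (values in megabytes)
CACHE_MB=20480                     # 20G cache data volume
META_MB=$(( CACHE_MB / 1000 ))     # -> 20M
if [ "$META_MB" -lt 8 ]; then META_MB=8; fi   # never go below the 8M minimum
lvcreate -L "${META_MB}M" -n find.vm-cache-meta vms /dev/nvme0n1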

Issuing lvs at this point should reveal the cache volume and cache metadata logical volume along with the rest of the logical volumes:

  LV                         VG  Attr       LSize   Pool                     Origin                   Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm              vms -wi-a-----  20.00g
  find.vm-cache        vms -wi-a-----  20.00g                              
  find.vm-cache-meta   vms -wi-a-----   8.00m   
  shinobi.vm           vms -wi-a-----  30.00g

With the cache and cache metadata logical volumes created, both are then combined into a cache pool:

lvconvert --type cache-pool --cachemode writethrough --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache
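
The writethrough cache mode commits writes to both the cache and the origin, trading some write speed for safety; as an alternative sketch (an assumption, not part of the original setup), the pool could be created with writeback caching instead, which is faster for writes but risks data loss if the cache device fails:

# alternative sketch: writeback caching instead of writethrough
lvconvert --type cache-pool --cachemode writeback --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache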

Now, the logical volume find.vm is converted to attach the cache pool find.vm-cache:

lvconvert --type cache --cachepool vms/find.vm-cache vms/find.vm

Issuing the lvs command should now reveal the cached logical volume:

  find.vm   vms Cwi-aoC---  20.00g [find.vm-cache]   [find.vm_corig]   0.42   13.11           0.00

Now the find.vm logical volume should be sped up by the cache.

The result can be seen by issuing the command:

lvs -a -o +devices
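
Assuming a reasonably recent lvm2 that exposes the cache reporting fields (the field names below come from lvs -o help and may vary between versions), the cache hit and miss counters can also be inspected:

# sketch: show cache hit/miss counters for the cached logical volume
lvs -o +cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vms/find.vm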

Removing the Cache

To remove the caching, simply delete the cache volume:

lvremove vms/find.vm-cache

and LVM will take care of flushing the cache to the origin logical volume before removing it.
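
Alternatively, assuming a reasonably recent lvm2, the cached logical volume can be uncached directly, which also flushes the cache before the cache pool is deleted:

# alternative sketch: flush, detach and delete the cache pool in one step
lvconvert --uncache vms/find.vm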

Relocating Extents Before Resizing

When attempting to resize a physical volume containing a logical volume, an error may pop up along the lines of cannot resize to 383999 extents as later ones are allocated. Disappointingly, this also happens with graphical interfaces such as gparted, which have no excuse for not handling the situation automatically given that the fix is not difficult, just tedious and requires a few manual commands.

The reason is that the actual data on the drive is not written exactly at the start of the partition, or there are holes within the allocated space, whereas shrinking the PV requires that no extents past the new size are allocated, which in practice means laying the data out contiguously toward the start of the drive.

This can be solved rather easily using pvmove to move the data over such that there are no more holes between the written data. First, the pvs command is used to show the segments, that is, the contiguous areas of allocated and free extents on the drive:

pvs -v --segments /dev/sda4

Just as an example, the following data was printed in this case:

  PV         VG  Fmt  Attr PSize    PFree    Start SSize LV    Start Type   PE Ranges            
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g     0 40960           0 free                        
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g 40960 12800 seven     0 linear /dev/sda4:40960-53759
  /dev/sda4  vms lvm2 a--  <227.02g <177.02g 53760  4356           0 free                        

and the interpretation is that the drive, in this case the partition /dev/sda4, contains free space starting at 0 with a size of 40960 extents, followed by actual data starting at 40960 with a size of 12800 extents and finally ending in another free slice starting at 53760 with a size of 4356 extents. What needs to be done in this case is to move the slice of data starting at 40960 with a size of 12800 (namely, the logical volume seven) into the free space at the start, at 0 with a size of 40960. The slice of data has a size of 12800 extents and thus fits easily within the 40960 free extents at the start.

Having understood the above, the command for this particular case is:

pvmove --alloc anywhere /dev/sda4:40960+12800 /dev/sda4:0+40960

where:

  • /dev/sda4:40960+12800 is the slice of data to move, referred to by the physical volume (the partition), its start at 40960 and its length of 12800 and,
  • /dev/sda4:0+40960 is the free space at the start of the drive

This might be a lengthy process but after it is completed, the size of the physical volume can be changed with a partition manager.
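
As a sketch of that last step (the 150G target size below is only an assumed example), the physical volume can first be shrunk explicitly with pvresize, after which the /dev/sda4 partition itself can be shrunk with gparted, parted or fdisk:

# shrink the physical volume to the new target size before shrinking the partition (example size)
pvresize --setphysicalvolumesize 150G /dev/sda4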

