An external disk, such as an SSD or NVMe drive, can be used to cache an existing LVM logical volume. In combination with QEMU/KVM virtualization, this can speed up virtual machines considerably and reduce pressure on mechanical storage.
Assuming that a physical volume, a volume group and a few logical volumes to be cached already exist, the following commands can be used to attach an SSD or NVMe drive and enable caching for the various logical volumes.
The physical disk layout is the following (obtained by issuing the pvs command):

  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda2  vms lvm2 a--  800.00g <410.20g

revealing a slow drive (the origin drive), /dev/sda2.
Now, the NVMe drive is used to create a new LVM physical volume:
pvcreate /dev/nvme0n1
resulting in the following configuration:

  PV           VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1 vms lvm2 a--  <476.94g <476.94g
  /dev/sda2    vms lvm2 a--  800.00g  <410.20g
Note that the entire NVMe drive has been used (/dev/nvme0n1 instead of /dev/nvme0n1p1) rather than partitioning it first, but that is a matter of choice and a partition can be created instead and used for caching.
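If a dedicated partition is preferred, it can be created before running pvcreate; a minimal sketch using parted, assuming the NVMe drive is empty and a GPT label is acceptable (device names are from the example above and must be adapted to the actual system):

```shell
# Sketch: create one partition spanning the whole NVMe drive and use it
# as the LVM physical volume instead of the bare device.
parted --script /dev/nvme0n1 mklabel gpt mkpart primary 0% 100%
pvcreate /dev/nvme0n1p1   # the partition then replaces /dev/nvme0n1 in later commands
```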
Next, it is assumed that a volume group (vms) already exists (command issued: vgs):

  VG  #PV #LV #SN Attr   VSize VFree
  vms   2  11   0 wz--n- 1.15t <648.46g
and the volume group must be extended over the new NVMe physical volume:
vgextend vms /dev/nvme0n1
Now the setup is ready for a few cache containers to be created for existing logical volumes. Assume the following layout for logical volumes, obtained via the lvs command:

  LV         VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  find.vm    vms -wi-a----- 20.00g
  shinobi.vm vms -wi-a----- 30.00g
where two logical volumes exist: find.vm, 20GiB in size, and shinobi.vm, 30GiB in size.
To create a cache for find.vm
, first two new logical volumes are created using the following commands, in order:
lvcreate -L 20G -n find.vm-cache vms /dev/nvme0n1
where:
- 20G is the cache size; in this case, the cache is chosen to be the total size of the find.vm logical volume,
- find.vm-cache is a name for the cache volume; this can be any user-chosen name,
- vms is the name of the volume group containing both the slow physical disk (origin) /dev/sda2 and the fast NVMe physical disk /dev/nvme0n1,
- /dev/nvme0n1 is the path to the NVMe physical disk.

lvcreate -L 20M -n find.vm-cache-meta vms /dev/nvme0n1
where:
- 20M is the size of the cache metadata logical volume; at the time of writing, this cannot be less than 20M and it is recommended to be about 1/1000th of the size of the cache volume created with the previous command,
- find.vm-cache-meta is the name of the cache metadata logical volume; similarly to the previous command creating the cache, it can be any user-chosen name,
- vms is the name of the volume group containing both the slow physical disk (origin) /dev/sda2 and the fast NVMe physical disk /dev/nvme0n1,
- /dev/nvme0n1 is the path to the NVMe physical disk.
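The sizing rule above can be sketched as a small shell calculation; the 20480MiB figure is the 20G size of find.vm-cache from the example, and the 8MiB floor is the minimum metadata size documented in lvmcache(7):

```shell
# Sketch: derive a cache metadata LV size from the cache LV size,
# using the ~1/1000 rule of thumb with an 8 MiB lower bound.
cache_size_mib=20480                        # size of find.vm-cache (20G) in MiB
meta_size_mib=$((cache_size_mib / 1000))    # 1/1000th of the cache size
if [ "$meta_size_mib" -lt 8 ]; then
    meta_size_mib=8                         # minimum metadata LV size per lvmcache(7)
fi
echo "${meta_size_mib}M"                    # size to pass to lvcreate -L
```

For the 20G cache in this example the calculation yields 20M, matching the size used above.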
Issuing lvs at this point should reveal the cache volume and cache metadata logical volume along with the rest of the logical volumes:

  LV                 VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  find.vm            vms -wi-a----- 20.00g
  find.vm-cache      vms -wi-a----- 20.00g
  find.vm-cache-meta vms -wi-a----- 8.00m
  shinobi.vm         vms -wi-a----- 30.00g
With the cache and cache metadata logical volumes created, the two are combined into a cache pool:
lvconvert --type cache-pool --cachemode writethrough --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache
Now, the logical volume find.vm is converted to attach the cache volume find.vm-cache:
lvconvert --type cache --cachepool vms/find.vm-cache vms/find.vm
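The cache mode chosen earlier, writethrough, is the safer option: writes go to both the cache and the origin, so a failing cache device loses no data. If write performance matters more than that guarantee, the mode can be switched to writeback later; a sketch per lvmcache(7), using the volume names from this example:

```shell
# Switch the cached LV to writeback mode (writes land in the cache first
# and are flushed to the origin later; a failed cache device can lose data).
lvchange --cachemode writeback vms/find.vm

# ...and back to the safer writethrough mode:
lvchange --cachemode writethrough vms/find.vm
```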
Issuing the lvs command should now reveal the cached logical volume:
find.vm vms Cwi-aoC--- 20.00g [find.vm-cache] [find.vm_corig] 0.42 13.11 0.00
Now the find.vm logical volume should be sped up by the cache. The result can be seen by issuing the command:

lvs -a -o +devices
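To gauge how effective the cache is, lvs can also report cache statistics; a sketch using the cache reporting fields listed by lvs -o help (availability of these fields depends on the lvm2 version installed):

```shell
# Report cache usage and hit/miss counters for the cached volume.
lvs -o lv_name,cache_total_blocks,cache_used_blocks,cache_read_hits,cache_read_misses vms/find.vm
```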
To remove the caching, simply delete the cache volume:
lvremove vms/find.vm-cache
and LVM will take care of flushing the cache to the cached logical volume before removing it.
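Alternatively, lvconvert provides a dedicated operation that flushes, detaches and deletes the cache pool in one step, leaving the origin volume uncached:

```shell
# Flush the cache to the origin, detach the cache pool from find.vm
# and remove the pool, leaving find.vm as a plain logical volume.
lvconvert --uncache vms/find.vm
```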