====== Use External Disk for Caching ======

An external disk, such as an SSD or NVMe drive, can be used to cache an existing LVM logical volume. In combination with qemu/kvm virtualization, this can speed up virtual machines considerably as well as reduce the pressure on mechanical storage.

Assuming that a physical volume, a volume group and a few logical volumes that have to be cached already exist, the following commands can be used to attach an SSD or NVMe drive and enable caching for the various logical volumes.

The physical disk layout is the following (obtained by issuing ''pvs''):
<code>
  PV           VG  Fmt  Attr PSize    PFree
  /dev/sda2    vms lvm2 a--   800.00g <410.20g
</code>

revealing a slow drive (the origin drive) ''/dev/sda2''.

Now, the NVMe drive is used to create a new LVM physical volume:
<code bash>
pvcreate /dev/nvme0n1
</code>

resulting in the following configuration:
<code>
  PV           VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1 vms lvm2 a--  <476.94g <476.94g
  /dev/sda2    vms lvm2 a--   800.00g <410.20g
</code>

Note that the entire NVMe drive has been used instead of partitioning the NVMe drive (''/dev/nvme0n1'' instead of ''/dev/nvme0n1p1''), but that is a matter of choice and a partition can be created instead to be used for caching.
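
Should a partition be preferred, a minimal sketch follows, assuming ''parted'' is available and that the whole drive can be dedicated to a single partition (the partition path ''/dev/nvme0n1p1'' follows from the example above):
<code bash>
# create a GPT label on the NVMe drive (destroys any existing data)
parted -s /dev/nvme0n1 mklabel gpt
# create a single partition spanning the whole drive
parted -s -a optimal /dev/nvme0n1 mkpart primary 0% 100%
# create the LVM physical volume on the partition instead of the whole drive
pvcreate /dev/nvme0n1p1
</code>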

Next, it is assumed that a volume group (''vms'') already exists (command issued: ''vgs''):
<code>
  VG  #PV #LV #SN Attr   VSize VFree
  vms    11   0 wz--n- 1.15t <648.46g
</code>

and the volume group must be extended over the new NVMe physical volume:
<code bash>
vgextend vms /dev/nvme0n1
</code>

Now the setup is ready for a few cache containers to be created for existing logical volumes. Assume the following LVM layout for logical volumes, obtained via the ''lvs'' command:
<code>
  LV                 VG  Attr       LSize   Pool  Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm            vms -wi-a-----  20.00g
  shinobi.vm         vms -wi-a-----  30.00g
</code>

where two logical volumes exist: ''find.vm'' with $20G$ of storage and ''shinobi.vm'' with $30G$ of storage.

To create a cache for ''find.vm'', first two new logical volumes are created:
  * one logical volume acting as the actual cache,
  * one logical volume acting as the cache metadata

using the following commands, in order:
<code bash>
lvcreate -L 20G -n find.vm-cache vms /dev/nvme0n1
</code>
where:
  * ''20G'' is the cache size - in this case, the cache is chosen to be the total size of the ''find.vm'' logical volume,
  * ''find.vm-cache'' is a name for the cache volume - this can be any user-chosen name,
  * ''vms'' is the name of the volume group containing both the slow physical volume (origin) ''/dev/sda2'' and the fast NVMe physical volume ''/dev/nvme0n1'',
  * ''/dev/nvme0n1'' is the path to the NVMe physical volume

<code bash>
lvcreate -L 20M -n find.vm-cache-meta vms /dev/nvme0n1
</code>
where:
  * ''20M'' is the size of the cache metadata logical volume - at the time of writing this cannot be less than ''8M'' and is recommended to be $\frac{1}{1000}$ of the size of the cache volume created with the previous command (here $\frac{20G}{1000} \approx 20M$; a small sizing sketch follows this list),
  * ''find.vm-cache-meta'' is the name of the cache metadata logical volume - similarly to the previous command creating the cache, it can be any user-chosen name,
  * ''vms'' is the name of the volume group containing both the slow physical volume (origin) ''/dev/sda2'' and the fast NVMe physical volume ''/dev/nvme0n1'',
  * ''/dev/nvme0n1'' is the path to the NVMe physical volume
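
As a minimal sizing sketch (the ''8M'' floor and the $\frac{1}{1000}$ ratio are taken from the ''lvmcache'' documentation):
<code bash>
# hypothetical helper: compute a cache metadata size from a cache size in megabytes
CACHE_MB=$((20 * 1024))            # a 20G cache expressed in megabytes
META_MB=$((CACHE_MB / 1000))       # recommended ratio of 1/1000
[ "$META_MB" -lt 8 ] && META_MB=8  # enforce the 8M minimum
echo "${META_MB}M"                 # prints 20M for a 20G cache
</code>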

Issuing ''lvs'' at this point should reveal the cache volume and the cache metadata logical volume along with the rest of the logical volumes:

<code>
  LV                 VG  Attr       LSize   Pool  Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  find.vm            vms -wi-a-----  20.00g
  find.vm-cache      vms -wi-a-----  20.00g
  find.vm-cache-meta vms -wi-a-----   8.00m
  shinobi.vm         vms -wi-a-----  30.00g
</code>

With the cache and cache metadata logical volumes created, the two are combined into a cache pool:
<code bash>
lvconvert --type cache-pool --cachemode writethrough --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache
</code>
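
As a variation, ''writeback'' could be passed to ''--cachemode'' instead of ''writethrough'', which caches writes as well and typically improves write performance, at the cost of losing data not yet flushed to the origin should the caching drive fail; a sketch using the same volumes:
<code bash>
# writeback mode: writes land on the fast drive first and are flushed to the origin later
lvconvert --type cache-pool --cachemode writeback --poolmetadata vms/find.vm-cache-meta vms/find.vm-cache
</code>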

Now, the logical volume ''find.vm'' is converted to attach the cache pool ''find.vm-cache'':
<code bash>
lvconvert --type cache --cachepool vms/find.vm-cache vms/find.vm
</code>

Issuing the ''lvs'' command should now reveal the cached logical volume:
<code>
  find.vm   vms Cwi-aoC---  20.00g [find.vm-cache]   [find.vm_corig]   0.42   13.11           0.00
</code>

Now the ''find.vm'' logical volume should be sped up by the cache.

The result can be seen by issuing the command:
<code bash>
lvs -a -o +devices
</code>
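
For a rough view of cache utilization, the device-mapper status line of the cached volume can also be inspected; a minimal sketch, assuming the device-mapper name follows LVM's usual ''VG-LV'' naming (here ''vms-find.vm''):
<code bash>
# print the device-mapper cache status (block sizes, usage, read/write hits and misses)
dmsetup status vms-find.vm
</code>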

===== Removing the Cache =====

To remove the caching, simply delete the cache volume:
<code bash>
lvremove vms/find.vm-cache
</code>

and LVM will take care of flushing the cache to the cached logical volume before removing it.
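
Alternatively, on LVM versions that provide it, the cache can likely be detached in one step with ''--uncache'', which also flushes pending writes and deletes the cache pool:
<code bash>
# flush and detach the cache from the logical volume, removing the cache pool
lvconvert --uncache vms/find.vm
</code>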