This is internal documentation. There is a good chance you’re looking for something else. See Disclaimer.

LVM

Work through this document from top to bottom. An LVM setup is created first, then various features are shown, and finally the setup is removed again.

Tip

lvm2 and thin-provisioning-tools need to be installed:

apt install lvm2 thin-provisioning-tools

Terminology

  • PV (Physical Volume): a disk used for storage

  • VG (Volume Group): group of one or more disks (PVs)

  • LV (Logical Volume): a volume that can be accessed as a virtual disk (similar to a partition)

  • Thin provisioning: storage that is not fully backed (only the space actually used by the filesystem is allocated on disk)

Set up Test Drives

Create storage for virtual disks:

truncate -s 2G disk1 disk2

Create virtual disks:

l1=$(losetup --show -f disk1)
l2=$(losetup --show -f disk2)
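
To double-check which loop devices exist and which files back them (optional sanity check):

losetup -l
lsblk $l1 $l2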

Creating Groups and Volumes

Create a Volume Group (named main):

lvm vgcreate main $l1
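
Note: vgcreate initializes $l1 as a PV automatically. An explicit initialization (which would be run before vgcreate, and is not needed here) would look like this:

lvm pvcreate $l1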

Create a Logical Volume (named vol1):

lvm lvcreate -l 20%FREE -n vol1 main

Check VG/LV/PV status:

lvm vgs
lvm lvs
lvm pvs

Or verbose status:

lvm vgdisplay
lvm lvdisplay
lvm pvdisplay

Create pool for thin provisioning (named pool):

lvm lvcreate -l 60%FREE --thin main/pool
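
The pool is backed by hidden data and metadata sub-volumes. To include hidden LVs in the listing (optional):

lvm lvs -a main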

Create thin volume in pool (named vol2):

lvm lvcreate -V 1G --thin main/pool -n vol2

Show volumes:

lvm lvs

You should now see the thin pool main/pool and the thin volume main/vol2. Both should still show 0.00% data usage.

Create filesystem:

mkfs.ext4 /dev/mapper/main-vol2

Mount volume:

mkdir m
mount -o discard /dev/mapper/main-vol2 m

Tip

The -o discard mount option is needed so the filesystem tells LVM about freed blocks. Alternatively, fstrim m can be run to return freed space to the underlying thin pool.
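
For example, to trim manually and print how much was discarded (assuming the volume is still mounted at m; -v enables the report):

fstrim -v m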

Write a file:

dd if=/dev/urandom of=m/file bs=1M count=50

Show volumes again:

lvm lvs

You should now see that some Data is used on both the pool and the volume.
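
To restrict the output to the relevant columns, the fields can be selected explicitly (optional):

lvm lvs -o lv_name,lv_size,data_percent main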

Thin Volume Snapshots

Create and activate snapshot:

lvm lvcreate -k n -n vol2-snapshot1 -s main/vol2

A snapshot can be mounted just like any other volume:

mkdir n
mount -o discard /dev/mapper/main-vol2--snapshot1 n

Remove file again:

rm m/file
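
Because the volume is mounted with -o discard, the freed blocks should be returned to the thin pool, and the Data% shown by lvm lvs should drop again:

lvm lvs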

Unmount volumes:

umount m n

Non-Thin Snapshots

Non-thin snapshots work slightly differently: you create a snapshot with a fixed size, and that size determines how much the origin can diverge from the snapshot. Modifications to both the snapshot and the origin are written (CoW-style) to the snapshot. Once a snapshot is no longer needed, it should be merged back into the origin to make storage linear again.

Create and activate snapshot:

lvm lvcreate -k n -n vol1-snapshot1 -s main/vol1 -L 1G

Warning

Be sure that the specified size is large enough to hold all changes to the origin and the snapshot volume. If the snapshot fills up, it becomes invalid and its contents are lost.
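
Snapshot usage can be monitored via the snap_percent field, and a snapshot that is filling up can be grown like any other LV (illustrative commands for the snapshot created above):

lvm lvs -o lv_name,origin,snap_percent main
lvm lvextend -L +512M main/vol1-snapshot1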

Show snapshots and their origins:

lvm lvs

Merge snapshot back into origin:

lvm lvconvert --merge main/vol1-snapshot1

Resize Physical Volume

If the size of the underlying device changes, the corresponding PV needs to be resized too.

Resize disk:

truncate -s 4G disk1
losetup --set-capacity $l1

Resize the PV to the size of the underlying device:

lvm pvresize $l1

Check size:

lvm pvs

The volume group should now show some more free space:

lvm vgs

Resize Volume

Check available space (VFree):

lvm vgs

If no free space is available, check whether spare disks are available within the RAID, add more disks, or replace existing disks with larger ones and grow the RAID.

Resize Volume:

lvm lvresize main/vol2 -L +200M

You’ll also have to resize the filesystem (for an unmounted ext4 volume, resize2fs may ask you to run e2fsck -f first):

# ext4
resize2fs /dev/mapper/main-vol2

# btrfs
btrfs filesystem resize max ${mountpoint}
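
Alternatively, lvresize can grow the filesystem in the same step via fsadm (-r / --resizefs; this covers ext4 and XFS, but not btrfs), instead of the two separate steps above:

lvm lvresize -r -L +200M main/vol2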

Add Drive to VG

Add drive:

lvm vgextend main $l2

Show PVs:

lvm pvs

There should be some free space in the VG now:

lvm vgs

Starting LVM manually

Usually, LVM volumes are activated automatically. Should this not be the case for any reason, do it manually.

Scan drives for LVMs:

lvm pvscan

Once the LVM setup has been detected, you can:

  1. activate all LVs in all VGs:

    lvm vgchange --activate y
    
  2. activate all LVs of a specific VG:

    lvm vgchange --activate y main
    
  3. activate a specific LV:

    lvm lvchange --activate y main/vol1
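
The counterpart is deactivation with --activate n, e.g. for the whole VG (useful before detaching its disks, see Clean Up below):

lvm vgchange --activate n main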
    

Clean Up
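
Remove the LVM setup first (deactivate the VG, remove its LVs and the VG itself, then wipe the PV labels); one possible sequence:

lvm vgchange --activate n main
lvm vgremove -f main
lvm pvremove $l1 $l2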

Clean up virtual disks:

losetup -d $l1 $l2
rm disk[1-2]