Linux device-mapper maps LVM PV nested inside LV when taking snapshot
Solution 1:
Sometimes the relevant documentation is hidden away in configuration files rather than in, say, the documentation. So it seems with LVM.
By default, LVM automatically attempts to activate volumes on any physical devices that get connected to the system after boot, as long as all of the PVs are present and lvmetad and udev (or, more recently, systemd) are running. When the LVM snapshot gets created, a udev event fires, and since the snapshot contains a PV, lvmetad automatically runs pvscan on it, and so forth.
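You can see this in action: snapshotting the LV that backs a guest's disk is enough to make the guest's nested VG appear on the host moments later. A minimal sketch, with hypothetical VG/LV names:
lvcreate --snapshot --size 1G --name guest-disk-snap vm-volumes/guest-disk
vgs   # shortly afterwards, the guest's nested VG shows up alongside the host's VGs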
By looking at /etc/lvm/backup/docker-volumes, I was able to determine that lvmetad had explicitly run pvscan on the snapshot using the device's major and minor numbers, which bypassed the LVM filters that would normally prevent this. The file contained:
description = "Created *after* executing 'pvscan --cache --activate ay 253:13'"
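If you want to check which device a major:minor pair like 253:13 refers to, something along these lines works (a sketch):
ls -l /dev/block/253:13                            # udev symlink to the underlying dm device
lsblk --output NAME,MAJ:MIN,TYPE | grep -w '253:13'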
This behavior can be controlled by setting auto_activation_volume_list in /etc/lvm/lvm.conf. It lets you restrict automatic activation to specific volume groups, volumes, or tags.
So, I simply set the list to contain both of the host's volume groups; anything else won't match and does not get automatically activated.
auto_activation_volume_list = [ "mandragora", "vm-volumes" ]
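If whole volume groups are too coarse, the same list also accepts individual logical volumes and tags, per the syntax described in the lvm.conf comments (the LV and tag names here are made-up examples):
# "vgname/lvname" limits activation to one LV; "@tag" matches tagged VGs/LVs
auto_activation_volume_list = [ "mandragora", "vm-volumes/lvol1", "@backup_ok" ]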
The guest's LVM volumes no longer appear on the host, and finally, my backups are running...
Solution 2:
You want to edit the 'filter' value in /etc/lvm/lvm.conf so that LVM inspects only the physical devices on the KVM host; the default value accepts every block device, which includes LVs themselves. The comment above the default value is fairly comprehensive and does a better job of explaining usage than I can.
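As a sketch, assuming the host's PVs live on /dev/sda and /dev/md devices (adjust the patterns to your hardware), a filter that accepts those and rejects everything else could look like:
filter = [ "a|^/dev/sda|", "a|^/dev/md|", "r|.*|" ]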
Solution 3:
I encountered roughly the same problem in combination with vgimportclone. It would sometimes fail like this:
WARNING: Activation disabled. No device-mapper interaction will be attempted.
Physical volume "/tmp/snap.iwOkcP9B/vgimport0" changed
1 physical volume changed / 0 physical volumes not changed
WARNING: Activation disabled. No device-mapper interaction will be attempted.
Volume group "insidevgname" successfully changed
/dev/myvm-vg: already exists in filesystem
New volume group name "myvm-vg" is invalid
Fatal: Unable to rename insidevgname to myvm-vg, error: 5
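For context, the failing invocation looked roughly as follows. This is a loose reconstruction, with device names taken from the mapping tree shown further down:
kpartx -av /dev/outsidevgname/myvm-root            # map the partitions inside the LV
vgimportclone --basevgname myvm-vg /dev/mapper/outsidevgname-myvm--root-p2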
At that point, if I wanted to destroy the snapshot, I first had to temporarily disable udev because of the bug described at https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1088081.
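One way to pause udev's event processing temporarily is via its exec queue (a sketch; the linked bug report discusses the interaction in more detail):
udevadm control --stop-exec-queue
# ...deactivate and clean up the nested LVM here...
udevadm control --start-exec-queue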
But even then, after seemingly successfully deactivating the nested LVM's volume group, the partition mapping for the nested PV, created by kpartx, somehow remained in use.
The trick appeared to be that the device mapper had kept an extra parent mapping under the old volume group name, visible in the dmsetup ls --tree output:
insidevgname-lvroot (252:44)
└─outsidevgname-myvm--root-p2 (252:43)
└─outsidevgname-myvm--root (252:36)
The solution was simply to remove that particular mapping with dmsetup remove insidevgname-lvroot. After that, kpartx -d and lvremove worked fine.
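Putting the pieces together, the cleanup sequence looked like this; the lvremove target is illustrative, since the snapshot LV's actual name is not shown above:
dmsetup remove insidevgname-lvroot          # drop the stale parent mapping
kpartx -d /dev/outsidevgname/myvm-root      # remove the partition mappings
lvremove outsidevgname/snapshot-lv          # hypothetical name: the snapshot LV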