one: fs_lvm offline migration does not work

Description Offline migration does not work for a VM whose system disk is on an fs_lvm datastore. The VM goes into the FAILED state after the migration because the LVM volume is not activated on the target node.

This should probably be done by the tm/fs_lvm/premigrate script, but that script seems to be called only for live (online) migrations.

To Reproduce Try to offline-migrate a VM whose system disk is on an fs_lvm datastore.
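For reference, one possible reproduction path with the onevm CLI (the VM ID 6446 is taken from the LV name quoted below; the host ID is a placeholder):

    # Cold (non-live) migration of a VM whose system disk is on an
    # fs_lvm system datastore; without --live this is an offline migration.
    onevm migrate 6446 <target_host_id>
    # Afterwards the VM ends up in FAILED because the LV is not active
    # on the target node.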

Expected behavior The VM is migrated successfully.

Details

  • Affected Component: Storage
  • Hypervisor: KVM
  • Version: 5.6.0

Additional context

To solve this problem, the volume needs to be activated on the target node after the migration and before the VM is resumed:

lvchange -ay /dev/vg-one-137/lv-one-6446-0 
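For context, a minimal sketch of where this activation could be wired in, assuming it is added to the fs_lvm TM driver's mv script and executed against the destination host (the host name, variable names, and use of ssh/sudo are illustrative, not the actual upstream fix):

    # Hypothetical addition to tm/fs_lvm/mv: activate the logical volume
    # on the destination host before the VM is resumed there.
    DST_HOST="kvm-node-02"                     # example destination host
    LV_PATH="/dev/vg-one-137/lv-one-6446-0"    # device path from this report

    # Activate (-ay) the LV remotely on the node that will run the VM;
    # oneadmin typically needs sudo for LVM commands.
    ssh "$DST_HOST" "sudo lvchange -ay $LV_PATH"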

Progress Status

  • Branch created
  • Code committed to development branch
  • Testing - QA
  • Documentation
  • Release notes - resolved issues, compatibility, known issues
  • Code committed to upstream release/hotfix branches
  • Documentation committed to upstream release/hotfix branches

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 21 (20 by maintainers)


Most upvoted comments

The hard-coded TYPE=FILE for the volatile disks raises other issues with recent libvirt. At least on CentOS 7, libvirt blocks the live migration because it assumes that disks of type “file” are not on a shared filesystem. But that is only how the disk is declared; the real device is not a file at all, and libvirt doesn’t know that.

As a workaround I’ve created an alternate deploy script that uses a trivial helper to alter the domain XML, replacing the file definition with a block device.
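A rough sketch of the idea behind such a helper, assuming the volatile disk’s <source> path already points at the device node under /dev (the commenter’s actual script is not shown, and a real helper would also need to resolve datastore symlinks first):

    # Hypothetical helper: for each <disk type='file'> whose source is a
    # device node under /dev, rewrite it as a block-device disk so libvirt
    # no longer treats it as a plain file during live migration.
    DEPLOY_XML="$1"

    awk -v q="'" '
        /<disk type=.file./ { in_disk = 1; buf = "" }
        in_disk {
            buf = buf $0 "\n"
            if ($0 ~ /<\/disk>/) {
                # only rewrite disks backed by a device node
                if (buf ~ /<source file=.\/dev\//) {
                    sub(/type=.file./, "type=" q "block" q, buf)
                    sub(/<source file=/, "<source dev=", buf)
                }
                printf "%s", buf
                in_disk = 0
            }
            next
        }
        { print }
    ' "$DEPLOY_XML" > "$DEPLOY_XML.new" && mv "$DEPLOY_XML.new" "$DEPLOY_XML"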

IMO there should be a DISK_TYPE option in TM_MAD_CONF instead of the hard-coded type, just as there is for the IMAGE datastore.
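To illustrate the proposal, a hypothetical oned.conf fragment (the surrounding attributes are abbreviated from a typical fs_lvm entry; the DISK_TYPE line is the suggested addition, not an existing 5.6.0 option):

    TM_MAD_CONF = [
        NAME         = "fs_lvm",
        LN_TARGET    = "SYSTEM",
        CLONE_TARGET = "SYSTEM",
        SHARED       = "YES",
        DISK_TYPE    = "BLOCK"    # proposed: disk type to use for volatile disks
    ]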