The module's functionality may change, but its core features will remain. Compatibility with future versions is guaranteed; however, upgrading may require additional migration steps.

The module is guaranteed to work only with stock kernels that are shipped with the supported distributions.

The module may work with other kernels or distributions, but its stable operation and the availability of all features are not guaranteed.

Why does creating BlockDevice and LVMVolumeGroup resources in a cluster fail?

  • In most cases, the creation of BlockDevice resources fails because the existing devices are filtered out by the controller. Make sure that your devices meet the requirements.

  • Creating LVMVolumeGroup resources may fail due to the absence of BlockDevice resources in the cluster, as their names are used in the LVMVolumeGroup specification.

  • If the BlockDevice resources are present but the LVMVolumeGroup resources are not, make sure the existing LVM Volume Group on the node has the special LVM tag storage.deckhouse.io/enabled=true attached.
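You can check both conditions with a couple of commands; the examples below are a sketch, and the VG name on your node will differ:

```shell
# In the cluster: list the BlockDevice resources the controller has created
kubectl get blockdevices

# On the node: show Volume Groups together with their LVM tags,
# to verify the storage.deckhouse.io/enabled=true tag is present
vgs -o vg_name,vg_tags
```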

I have deleted the LVMVolumeGroup resource, but the resource and its Volume Group are still there. What do I do?

Such a situation is possible in two cases:

  1. The Volume Group contains logical volumes (LVs). The controller does not remove LVs from the node, so if the Volume Group created by the resource contains any logical volumes, you need to delete them manually on the node. After that, both the resource and the Volume Group (along with the PV) will be deleted automatically.

  2. The resource has the storage.deckhouse.io/deletion-protection annotation. This annotation protects the resource, and consequently the Volume Group it created, from deletion. Remove the annotation manually with the command:

kubectl annotate lvg %lvg-name% storage.deckhouse.io/deletion-protection-

After the command’s execution, both the LVMVolumeGroup resource and Volume Group will be deleted automatically.
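Before deleting, you can check for both blockers. This is a sketch: the VG name is hypothetical, and the `%lvg-name%` placeholder follows the document's convention:

```shell
# On the node: list any logical volumes remaining in the Volume Group;
# the controller will not delete the VG while LVs exist
lvs myvg-0

# In the cluster: check whether the resource still carries
# the deletion-protection annotation
kubectl get lvg %lvg-name% -o jsonpath='{.metadata.annotations}'
```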

I’m trying to create a Volume Group using the LVMVolumeGroup resource, but I’m not getting anywhere. Why?

Most likely, your resource fails the controller's validation even though it has passed Kubernetes validation. The exact cause of the failure can be found in the status.message field of the resource. You can also refer to the controller's logs.

The problem usually stems from incorrectly defined BlockDevice resources. Please make sure these resources meet the following requirements:

  • The Consumable field is set to true.
  • For a Volume Group of the Local type, the specified BlockDevice resources belong to the same node.
  • The current names of the BlockDevice resources are specified.

The full list of expected values can be found in the CR reference of the LVMVolumeGroup resource.
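The commands below sketch how to inspect the validation error and the referenced BlockDevice resources. The exact status field paths of the BlockDevice resource are assumptions here; consult the CR reference for the authoritative schema:

```shell
# Read the validation error reported by the controller
kubectl get lvg %lvg-name% -o jsonpath='{.status.message}'

# Inspect the full status of a referenced BlockDevice resource
# (check the Consumable field and the node it belongs to)
kubectl get blockdevice %bd-name% -o yaml
```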

What happens if I unplug one of the devices in a Volume Group? Will the linked LVMVolumeGroup resource be deleted?

The LVMVolumeGroup resource will persist as long as the corresponding Volume Group exists. As long as at least one device exists, the Volume Group will be there, albeit in an unhealthy state. Note that these issues will be reflected in the resource’s status.

Once the unplugged device is plugged back in and reactivated, the LVM Volume Group will regain its functionality while the corresponding LVMVolumeGroup resource will also be updated to reflect the current state.
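To observe the degraded state, you can look at both the node and the resource; the VG and resource names below are placeholders:

```shell
# On the node: a 'p' (partial) flag in the VG attributes and a non-zero
# missing-PV count indicate an unhealthy Volume Group
vgs -o vg_name,vg_attr,vg_missing_pv_count myvg-0

# In the cluster: the resource status reflects the same issues
kubectl get lvg %lvg-name% -o jsonpath='{.status}'
```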

How do I transfer control of an existing LVM Volume Group on the node to the controller?

Simply add the LVM tag storage.deckhouse.io/enabled=true to the LVM Volume Group on the node:

vgchange myvg-0 --addtag storage.deckhouse.io/enabled=true
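To confirm the handover took effect, you can verify the tag on the node and watch for the controller to create a matching resource (a sketch, using the same example VG name):

```shell
# Verify the tag is now attached to the Volume Group
vgs -o vg_name,vg_tags myvg-0

# The controller should create a corresponding LVMVolumeGroup resource
kubectl get lvg
```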

How do I get the controller to stop monitoring the LVM Volume Group on the node?

Delete the storage.deckhouse.io/enabled=true LVM tag for the target Volume Group on the node:

vgchange myvg-0 --deltag storage.deckhouse.io/enabled=true

The controller will then stop tracking the selected Volume Group and delete the associated LVMVolumeGroup resource automatically.
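You can verify the outcome as follows (same example VG name as above): the Volume Group itself remains on the node, only the controller's tracking of it stops.

```shell
# The VG is still present on the node, now without the tag
vgs -o vg_name,vg_tags myvg-0

# The corresponding LVMVolumeGroup resource should be gone
kubectl get lvg
```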

I haven’t added the storage.deckhouse.io/enabled=true LVM tag to the Volume Group, but it is there. How is this possible?

This can happen if you created the LVM Volume Group via the LVMVolumeGroup resource: in that case, the controller automatically adds this LVM tag to the created LVM Volume Group. It is also possible if the Volume Group or its Thin-pool already had a linstor-* LVM tag set by the linstor module.

When you switch from the linstor module to the sds-node-configurator and sds-drbd modules, the linstor-* LVM tags are automatically replaced with the storage.deckhouse.io/enabled=true LVM tag in the Volume Group. This way, the sds-node-configurator gains control over these Volume Groups.