Available with limitations in: CE
Available without limitations in: SE, SE+, EE
Preliminary version. The functionality may change, but the basic features will be preserved. Compatibility with future versions is ensured, but may require additional migration actions.
The module is only guaranteed to work if the system requirements are met. In other configurations the module may work, but its stable operation is not guaranteed.
This module manages replicated block storage based on DRBD. Currently, LINSTOR is used as a control-plane/backend (without the possibility of direct user configuration).
The module allows you to create a Storage Pool as well as a StorageClass by creating Kubernetes custom resources.
To create a Storage Pool, you will need the LVMVolumeGroup configured on the cluster nodes. The LVM configuration is done by the sds-node-configurator module.
Caution. Before enabling the sds-replicated-volume module, you must enable the sds-node-configurator module.

Caution. Data synchronization during volume replication is carried out in synchronous mode only; asynchronous mode is not supported.

Caution. If your cluster has only a single node, use sds-local-volume instead of sds-replicated-volume. To use sds-replicated-volume, a minimum of 3 nodes is required. It is advisable to have 4 or more nodes to mitigate the impact of potential node failures.
After you enable the sds-replicated-volume module in the Deckhouse configuration, you will only have to create ReplicatedStoragePool and ReplicatedStorageClass.
To ensure the proper functioning of the sds-replicated-volume module, follow these steps:
- Enable the sds-node-configurator module.
Ensure that the sds-node-configurator module is enabled before enabling the sds-replicated-volume module.
Direct configuration of the LINSTOR backend by the user is prohibited.
Data synchronization during volume replication occurs only in synchronous mode. Asynchronous mode is not supported.
To work with snapshots, the snapshot-controller module must be enabled (see the example after this list).
- Configure LVMVolumeGroup. Before creating a StorageClass, create the LVMVolumeGroup resource for the sds-node-configurator module on the cluster nodes.
- Create Storage Pools and corresponding StorageClasses.
Users are prohibited from creating StorageClasses for the replicated.csi.storage.deckhouse.io CSI driver.
After the sds-replicated-volume module is activated in the Deckhouse configuration, the cluster will automatically be set up to work with the LINSTOR backend. You only need to create the storage pools and StorageClasses.
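If you plan to use snapshots, the snapshot-controller module is enabled in the same way as the other modules. The following is a minimal sketch, assuming the module is managed through a ModuleConfig resource named snapshot-controller; check the snapshot-controller module documentation for its actual options:

kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: snapshot-controller
spec:
  enabled: true
  version: 1
EOF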
The module supports two operating modes: LVM and LVMThin.
Each mode has its own characteristics, advantages, and limitations. Learn more about the differences in the FAQ.
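For reference, a storage pool in LVMThin mode is declared similarly to the LVM (Thick) mode used in the quickstart below, but each volume group entry references a thin pool. The sketch below is illustrative only: the vg-1-on-worker-0 resource and the thinpool thin pool are assumed names, and the exact fields should be verified against the ReplicatedStoragePool resource reference:

kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedStoragePool
metadata:
  name: thin-data
spec:
  type: LVMThin
  lvmVolumeGroups:
    - name: vg-1-on-worker-0 # Assumed LVMVolumeGroup resource name.
      thinPoolName: thinpool # Assumed thin pool name defined in that LVMVolumeGroup.
EOF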
Quickstart guide
Note that all commands must be run on a machine that has administrator access to the Kubernetes API.
Enabling modules
Enabling the sds-node-configurator module:
- Create a ModuleConfig resource to enable the module:

kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: sds-node-configurator
spec:
  enabled: true
  version: 1
EOF

- Wait for the sds-node-configurator module to reach the Ready state:

kubectl get module sds-node-configurator -w
- Activate the sds-replicated-volume module. Before enabling it, it is recommended to review the available settings.

The example below launches the module with default settings, which will result in creating service pods for the sds-replicated-volume component on all cluster nodes, installing the DRBD kernel module, and registering the CSI driver:
kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: sds-replicated-volume
spec:
  enabled: true
  version: 1
EOF
- Wait for the sds-replicated-volume module to reach the Ready state:

kubectl get module sds-replicated-volume -w

- Make sure that all pods in the d8-sds-replicated-volume and d8-sds-node-configurator namespaces are Running or Completed and are present on all nodes where DRBD resources are intended to be used:

kubectl -n d8-sds-replicated-volume get pod -o wide -w
kubectl -n d8-sds-node-configurator get pod -o wide -w
Configuring storage on nodes
You need to create LVM volume groups on the nodes using LVMVolumeGroup custom resources. As part of this quickstart guide, we will create a regular Thick storage. See usage examples to learn more about custom resources.
To configure the storage:
- List all the BlockDevice resources available in your cluster:
kubectl get bd
NAME NODE CONSUMABLE SIZE PATH
dev-0a29d20f9640f3098934bca7325f3080d9b6ef74 worker-0 true 30Gi /dev/vdd
dev-457ab28d75c6e9c0dfd50febaac785c838f9bf97 worker-0 false 20Gi /dev/vde
dev-49ff548dfacba65d951d2886c6ffc25d345bb548 worker-1 true 35Gi /dev/vde
dev-75d455a9c59858cf2b571d196ffd9883f1349d2e worker-2 true 35Gi /dev/vdd
dev-ecf886f85638ee6af563e5f848d2878abae1dcfd worker-0 true 5Gi /dev/vdb
- Create an LVMVolumeGroup resource for
worker-0:
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: "vg-1-on-worker-0" # The name can be any fully qualified resource name in Kubernetes. This LVMVolumeGroup resource name will be used to create the ReplicatedStoragePool later.
spec:
  type: Local
  local:
    nodeName: "worker-0"
  blockDeviceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - dev-0a29d20f9640f3098934bca7325f3080d9b6ef74
          - dev-ecf886f85638ee6af563e5f848d2878abae1dcfd
  actualVGNameOnTheNode: "vg-1" # The name of the LVM VG to be created from the block devices above on the node.
EOF
- Wait for the created LVMVolumeGroup resource to become Ready:
kubectl get lvg vg-1-on-worker-0 -w
- The resource becoming Ready means that an LVM VG named vg-1, made up of the /dev/vdd and /dev/vdb block devices, has been created on the worker-0 node.
- Next, create an LVMVolumeGroup resource for worker-1:
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: "vg-1-on-worker-1"
spec:
  type: Local
  local:
    nodeName: "worker-1"
  blockDeviceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - dev-49ff548dfacba65d951d2886c6ffc25d345bb548
  actualVGNameOnTheNode: "vg-1"
EOF
- Wait for the created LVMVolumeGroup resource to become Ready:
kubectl get lvg vg-1-on-worker-1 -w
- The resource becoming Ready means that an LVM VG named vg-1, made up of the /dev/vde block device, has been created on the worker-1 node.
- Create an LVMVolumeGroup resource for worker-2:
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: "vg-1-on-worker-2"
spec:
  type: Local
  local:
    nodeName: "worker-2"
  blockDeviceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - dev-75d455a9c59858cf2b571d196ffd9883f1349d2e
  actualVGNameOnTheNode: "vg-1"
EOF
- Wait for the created LVMVolumeGroup resource to become Ready:
kubectl get lvg vg-1-on-worker-2 -w
- The resource becoming Ready means that an LVM VG named vg-1, made up of the /dev/vdd block device, has been created on the worker-2 node.
- Now that we have all the LVM VGs created on the nodes, create a ReplicatedStoragePool out of those VGs:
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedStoragePool
metadata:
  name: data
spec:
  type: LVM
  lvmVolumeGroups: # Here, specify the names of the LVMVolumeGroup resources you created earlier.
    - name: vg-1-on-worker-0
    - name: vg-1-on-worker-1
    - name: vg-1-on-worker-2
EOF
- Wait for the created ReplicatedStoragePool resource to become Completed:
kubectl get rsp data -w
- Confirm that the data Storage Pool has been created on the worker-0, worker-1, and worker-2 nodes:
alias linstor='kubectl -n d8-sds-replicated-volume exec -ti deploy/linstor-controller -- linstor'
linstor sp l
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ worker-0 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ worker-0;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ worker-1 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ worker-1;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ worker-2 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ worker-2;DfltDisklessStorPool ┊
┊ data ┊ worker-0 ┊ LVM ┊ vg-1 ┊ 35.00 GiB ┊ 35.00 GiB ┊ False ┊ Ok ┊ worker-0;data ┊
┊ data ┊ worker-1 ┊ LVM ┊ vg-1 ┊ 35.00 GiB ┊ 35.00 GiB ┊ False ┊ Ok ┊ worker-1;data ┊
┊ data ┊ worker-2 ┊ LVM ┊ vg-1 ┊ 35.00 GiB ┊ 35.00 GiB ┊ False ┊ Ok ┊ worker-2;data ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
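- Optionally, verify that all three nodes are registered in the LINSTOR backend and are online. This is a quick sanity check reusing the linstor alias defined above; the exact output columns may vary with the LINSTOR version:

linstor node list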
- Create a ReplicatedStorageClass resource for a zone-free cluster (see use cases for details on how zonal ReplicatedStorageClasses work):
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedStorageClass
metadata:
  name: replicated-storage-class
spec:
  storagePool: data # Here, specify the name of the ReplicatedStoragePool you created earlier.
  reclaimPolicy: Delete
  topology: Ignored # Note that "Ignored" means there should be no zones (nodes labeled topology.kubernetes.io/zone) in the cluster.
EOF
- Wait for the created ReplicatedStorageClass resource to become Created:
kubectl get rsc replicated-storage-class -w
- Confirm that the corresponding StorageClass has been created:
kubectl get sc replicated-storage-class
- If a StorageClass named replicated-storage-class is shown, the configuration of the sds-replicated-volume module is complete. Users can now create PVs by specifying the replicated-storage-class StorageClass. Given the above settings, a volume will be created with 3 replicas on different nodes.
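For illustration, a PersistentVolumeClaim using this StorageClass might look as follows (the claim name and requested size are arbitrary examples):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-replicated-pvc # Example name; use your own.
spec:
  storageClassName: replicated-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # Example size.
EOF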
System requirements and recommendations
Requirements
Applicable to both single-zone clusters and clusters using multiple availability zones.
- Use stock kernels provided with supported distributions.
- A network infrastructure with a bandwidth of 10 Gbps or higher is required.
- To achieve maximum performance, the network latency between nodes should be between 0.5–1 ms.
- Do not use another SDS (Software-Defined Storage) solution to provide disks for Deckhouse SDS.
Recommendations
- Avoid using RAID. The reasons are detailed in the FAQ.
- Use local physical disks. The reasons are detailed in the FAQ.
- For the cluster to remain operational, albeit with degraded performance, network latency between nodes should not exceed 20 ms.
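As a quick way to check whether the latency figures above are met, you can measure the round-trip time between storage nodes with a standard tool such as ping (the address below is a placeholder for a real node IP):

# Measure round-trip latency to another storage node; replace the address with a real node IP.
ping -c 10 192.168.0.10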