Available in: CE, SE, SE+, EE
The module lifecycle stage: General Availability
The module has requirements for installation
Ceph is a scalable distributed storage system that ensures high availability and fault tolerance of data. Deckhouse Kubernetes Platform (DKP) provides Ceph cluster integration using the csi-ceph module. This enables dynamic storage management and the use of StorageClass based on RADOS Block Device (RBD) or CephFS.
The snapshot-controller module must be enabled for this module to operate.
This page provides instructions on connecting Ceph to Deckhouse, configuring authentication, creating StorageClass objects, and verifying storage functionality.
## Migration from the ceph-csi module
When switching from the ceph-csi module to csi-ceph, an automatic migration is performed, but it requires some preliminary preparation:

1. Set the replica count to zero for all operators (redis, clickhouse, kafka, etc.). Exception: the `prometheus` operator will be disabled automatically.
1. Disable the `ceph-csi` module and enable `csi-ceph`.
1. Wait for the operation to complete. The Deckhouse logs should show the message "Finished migration from Ceph CSI module".
1. Verify functionality: create test pods and PVCs to check that CSI works.
1. Restore the operators to their working state.
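The operator scale-down and restore from the steps above can be sketched as follows. The namespace and deployment names here are hypothetical examples — substitute the operators actually running in your cluster:

```shell
# Before migration: note the current replica count, then scale the operator to zero.
# (Example names; repeat for each operator. The prometheus operator is handled automatically.)
d8 k -n d8-operator-redis scale deployment redis-operator --replicas=0

# After the migration finishes and CSI is verified, restore the original replica count:
d8 k -n d8-operator-redis scale deployment redis-operator --replicas=1
```
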
If a Ceph StorageClass was created without using the CephCSIDriver resource, manual migration is required. Contact technical support.
## Connecting to a Ceph cluster
To connect to a Ceph cluster, follow the step-by-step instructions below. Execute all commands on a machine with administrative access to the Kubernetes API.
1. Enable the `csi-ceph` module:

   ```shell
   d8 s module enable csi-ceph
   ```

1. Wait for the module to transition to the `Ready` state:

   ```shell
   d8 k get module csi-ceph -w
   ```

1. Ensure that all pods in the `d8-csi-ceph` namespace are in the `Running` or `Completed` state and deployed on all cluster nodes:

   ```shell
   d8 k -n d8-csi-ceph get pod -o wide -w
   ```
1. To configure the connection to the Ceph cluster, apply the CephClusterConnection resource. Example command:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: storage.deckhouse.io/v1alpha1
   kind: CephClusterConnection
   metadata:
     name: ceph-cluster-1
   spec:
     # FSID/UUID of the Ceph cluster.
     # Get the FSID/UUID of the Ceph cluster using the command `ceph fsid`.
     clusterID: 014df517-39d1-4453-b7b3-9930c563627c
     # List of IP addresses of ceph-mon in the format 10.0.0.10:6789.
     monitors:
       - 10.0.0.10:6789
     # Username without `client.`.
     # Get the username using the command `ceph auth list`.
     userID: admin
     # Authorization key corresponding to userID.
     # Get the authorization key using the command `ceph auth get-key client.admin`.
     userKey: <your-ceph-auth-key>
   EOF
   ```
1. Verify that the connection was created (`Phase` should be `Created`):

   ```shell
   d8 k get cephclusterconnection ceph-cluster-1
   ```
1. Create a StorageClass object using the CephStorageClass resource. Creating a StorageClass manually, without CephStorageClass, may lead to errors.

   Example of creating a StorageClass based on RBD:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: storage.deckhouse.io/v1alpha1
   kind: CephStorageClass
   metadata:
     name: ceph-rbd-sc
   spec:
     clusterConnectionName: ceph-cluster-1
     reclaimPolicy: Delete
     type: RBD
     rbd:
       defaultFSType: ext4
       pool: ceph-rbd-pool
   EOF
   ```

   Example of creating a StorageClass based on the Ceph filesystem:

   ```shell
   d8 k apply -f - <<EOF
   apiVersion: storage.deckhouse.io/v1alpha1
   kind: CephStorageClass
   metadata:
     name: ceph-fs-sc
   spec:
     clusterConnectionName: ceph-cluster-1
     reclaimPolicy: Delete
     type: CephFS
     cephFS:
       fsName: cephfs
   EOF
   ```
1. Verify that the created CephStorageClass resources have transitioned to the `Created` state:

   ```shell
   d8 k get cephstorageclass
   ```

   This will output information about the created CephStorageClass resources:

   ```console
   NAME          PHASE     AGE
   ceph-rbd-sc   Created   1h
   ceph-fs-sc    Created   1h
   ```
1. Verify the created StorageClass objects:

   ```shell
   d8 k get sc
   ```

   This will output information about the created StorageClass objects:

   ```console
   NAME          PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
   ceph-rbd-sc   rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   15s
   ceph-fs-sc    cephfs.csi.ceph.com   Delete          WaitForFirstConsumer   true                   15s
   ```
Ceph cluster connection setup is complete. You can use the created StorageClass to create PersistentVolumeClaim in your applications.
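To confirm that provisioning works end to end, you can create a test PVC and a pod that mounts it. A minimal sketch, assuming the `ceph-rbd-sc` StorageClass created above (the PVC and pod names are arbitrary examples):

```shell
d8 k apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo ok > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-rbd-test-pvc
EOF
```

If the PVC reaches the `Bound` state (`d8 k get pvc ceph-rbd-test-pvc`) and the pod reaches `Running`, provisioning works; delete the test objects afterwards.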