The module is available only in Deckhouse Enterprise Edition.

The functionality of the module might significantly change. Compatibility with future versions is not guaranteed.

How to check module health?

To do this, check the status of the pods in the d8-csi-s3 namespace: all pods should be in the Running or Completed state, and pods should be present on every node.

kubectl -n d8-csi-s3 get pod -owide -w

Is it possible to change the parameters of S3 buckets for already created PVs?

No, the connection data to the storage cannot be changed. Changing the StorageClass also does not affect the connection settings in already existing PVs.
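You can, however, still inspect the parameters a PV was created with (the PV name below is a placeholder; take a real one from the output of the first command):

```shell
# List PVs provisioned by the module, then dump the spec of one of them;
# the CSI volume attributes show the settings fixed at creation time.
kubectl get pv
kubectl get pv pvc-example-name -o yaml
```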

Why does the mount point size in pods appear as 1 petabyte in df -h?

This is a feature of the mounter used in the module (geesefs). The reported used size does not change during usage either.

What happens if I exceed the bucket's or user's quota while using the module?

Exceeding a quota is an abnormal situation and should be avoided. The exact behavior depends on the storage you use as a backend. Possible outcomes:

  • You can still copy/edit files in the pods, but the changes are not reflected in the storage.
  • The pod may crash and be restarted.
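The crash case is easy to spot from the restart counter and the pod events (the namespace and pod name below are placeholders):

```shell
# The RESTARTS column grows if the pod keeps crashing after the quota is hit
kubectl -n my-namespace get pod

# The events at the bottom of the output usually show the failure reason
kubectl -n my-namespace describe pod my-pod
```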

How do I get info about the space used?

As of today, the only way is to use the storage's own interface: either its web UI or a command-line client.
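For example, with the common S3 command-line clients (the bucket name is a placeholder; pick the tool that matches your storage):

```shell
# aws CLI: total object count and size of the bucket
aws s3 ls s3://my-bucket --recursive --summarize

# s3cmd: disk usage of the bucket
s3cmd du s3://my-bucket
```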

Can I use several S3-storages in the same pod?

Yes, it's possible. To do so, create an additional S3StorageClass and a PVC for it, then reference both PVCs and mount the volumes in the pod like this:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /usr/share/nginx/html/s3
         name: webroot
       - mountPath: /opt/homedir
         name: homedir
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: csi-s3-pvc # PVC name
       readOnly: false
   - name: homedir
     persistentVolumeClaim:
       claimName: csi-s3-pvc2 # PVC-2 name
       readOnly: false
EOF
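The example above expects the second PVC (csi-s3-pvc2) to exist already. A minimal sketch of such a PVC, assuming the second S3StorageClass produces a StorageClass named csi-s3-second (the StorageClass name, access mode, and requested size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc2
  namespace: default
spec:
  accessModes:
    - ReadWriteMany # typical for an S3-backed filesystem; verify for your setup
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-s3-second # StorageClass created for the second S3StorageClass
```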

Can I use the same bucket for several pods?

Yes. Specify bucketName in the S3StorageClass; a separate folder will then be created inside the bucket for each PV.
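A minimal sketch of such a resource (only the bucketName field is taken from this document; the apiVersion and the rest of the layout are assumptions, so check the module reference for the exact schema):

```yaml
apiVersion: deckhouse.io/v1alpha1 # assumed; verify against the module reference
kind: S3StorageClass
metadata:
  name: shared-bucket
spec:
  bucketName: my-existing-bucket # each PV gets its own folder inside this bucket
```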

Troubleshooting

Issues while creating PVC

Check the logs of the provisioner: kubectl -n d8-csi-s3 logs -l app=csi-provisioner-s3 -c csi-s3

Issues creating containers

Ensure that the MountPropagation feature gate is not set to false.

Check the logs of the s3-driver: kubectl -n d8-csi-s3 logs -l app=csi-s3 -c csi-s3