Overview of Object Storage CSI Drivers for Kubernetes
Please note that an official Zadara CSI driver is on our product roadmap.
When using a third-party S3 CSI driver with Zadara Object Storage (ZOS), the driver typically mounts the bucket as a file system via FUSE, most often using rclone or s3fs. If that is the case, and the driver does not require AWS IAM credentials, you can most likely use ZOS as persistent Kubernetes storage.
3rd Party CSI Drivers
Please note that third-party drivers are not supported by Zadara, but they have been evaluated and shown to connect to ZOS and create persistent storage on it.
GitHub: ctrox/csi-s3
This S3 CSI driver has been evaluated using rclone as the FUSE mounter for Kubernetes. (For more information on rclone and ZOS, see the separate document describing how to configure rclone for Zadara.)
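The driver manages rclone itself, so no manual rclone setup is needed inside Kubernetes. If you want to sanity-check your ZOS credentials and endpoint from a workstation first, a minimal sketch using the standard rclone CLI looks like the following (the remote name zadara-zos and the placeholder keys are illustrative, not values used by the driver):
rclone config create zadara-zos s3 provider Other access_key_id xxxxx secret_access_key xxxxx endpoint https://vsa-0000000c-cyxtera-01-public.zadarazios.com
# List the buckets visible to these credentials
rclone lsd zadara-zos:
If the listing succeeds, the same credentials and endpoint should work in the secret described below.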
General Instructions
1. Download or clone the CSI driver from Github: https://github.com/ctrox/csi-s3
2. Follow the installation instructions in the README.md for deploying the driver to Kubernetes.
3. You will need a secret.yaml and a storageclass.yaml. Use the baseline files in the deploy/kubernetes/examples/ folder as a starting point.
secret.yaml
The secret.yaml contains the credentials (accessKeyID, secretAccessKey) and the endpoint. You retrieve these from your account information on ZOS. The endpoint is either the local IP address or the Zadara public URL for your region.
Your file should contain the following parameters:
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: csi-s3-secret
stringData:
  accessKeyID: xxxxx
  secretAccessKey: xxxxx
  # Use https://aaa.bbb.ccc.ddd if it is a local IP address
  endpoint: https://vsa-0000000c-cyxtera-01-public.zadarazios.com
  region: ""
Use kubectl to add the csi-s3-secret to Kubernetes:
kubectl create -f secret.yaml
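To confirm the secret exists (a standard kubectl check, not part of the driver's instructions):
kubectl get secret csi-s3-secret -n kube-system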
storageclass.yaml
From your ZOS account, use the console to create a bucket for the persistent storage. In this example, ctrox-csi-test is used as the bucket name. Your apps will use this bucket for storage.
Again, use the file from the examples folder and change only the bucket parameter. If you wish to use a mounter other than rclone, specify it here as well.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ch.ctrox.csi.s3-driver
parameters:
  # specify which mounter to use
  # can be set to rclone, s3fs, goofys or s3backer
  mounter: rclone
  # to use an existing bucket, specify it here:
  bucket: ctrox-csi-test
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
Add the StorageClass to Kubernetes:
kubectl create -f storageclass.yaml
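You can verify that the StorageClass was registered with a standard kubectl query:
kubectl get storageclass csi-s3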
The last file you need to change is pvc.yaml, where you specify the amount of storage to allocate:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
Next, add the PVC to Kubernetes:
kubectl create -f pvc.yaml
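As a quick check (standard kubectl usage), confirm that the claim reached the Bound status:
kubectl get pvc csi-s3-pvc -n default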
If all goes well, you will see a new pvc folder in your bucket.
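To consume the volume from a workload, reference the claim in a pod spec. The sketch below is illustrative; the pod name, image, and mount path are assumptions rather than values taken from the csi-s3 examples.
apiVersion: v1
kind: Pod
metadata:
  # illustrative pod name
  name: csi-s3-test-pod
  namespace: default
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        # the bucket contents appear under this path inside the container
        - mountPath: /data
          name: s3-volume
  volumes:
    - name: s3-volume
      persistentVolumeClaim:
        # the PVC created above
        claimName: csi-s3-pvc
Create the pod with kubectl create -f pod.yaml, write a test file under /data inside the container, and confirm it appears in the bucket.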