Integration with Object Storage
Container Storage Interface allows you to dynamically provision Yandex Object Storage buckets and mount them to Managed Service for Kubernetes cluster pods. You can mount existing buckets or create new ones.
To use Container Storage Interface capabilities, set up a runtime environment and configure the driver as described in the sections below.
Setting up a runtime environment
- Create a service account with the `storage.editor` role.
- Create a static access key for your service account. Save the key ID and secret key, as you will need them when installing Container Storage Interface.
- Create an Object Storage bucket that will be mounted to a `PersistentVolume`. Save the bucket name, as you will need it when installing Container Storage Interface.
- Install `kubectl` and configure it to work with the created cluster.
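The same environment can be prepared from the command line. The following is a minimal sketch using the yc CLI, assuming it is installed and configured for your folder; `csi-sa`, `csi-bucket`, `<folder_name>`, `<service_account_ID>`, and `<cluster_name>` are placeholders to replace with your own values:

```bash
# Create a service account and grant it the storage.editor role on the folder.
yc iam service-account create --name csi-sa
yc resource-manager folder add-access-binding <folder_name> \
  --role storage.editor \
  --subject serviceAccount:<service_account_ID>

# Create a static access key; save the key ID and secret key from the output.
yc iam access-key create --service-account-name csi-sa

# Create the bucket that will be mounted to a PersistentVolume.
yc storage bucket create --name csi-bucket

# Configure kubectl to work with the Managed Service for Kubernetes cluster.
yc managed-kubernetes cluster get-credentials <cluster_name> --external
```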
Configuring Container Storage Interface
Note
Before being published on the Yandex Cloud Marketplace, new application versions are tested in the Yandex Cloud infrastructure, so updates may reach the Marketplace with a delay. To use the latest version, install it using a Helm chart from the GitHub repository.
Install the Container Storage Interface application for S3 by following this step-by-step guide. While installing the application, specify the following parameters:
- Namespace: `kube-system`.
- S3 key ID: Copy the key ID of the created service account into this field.
- S3 secret key: Copy the secret key of the created service account into this field.
- Shared S3 bucket for volumes: To use an existing bucket, specify its name. This setting is only relevant for dynamic `PersistentVolume` objects.
- Storage class name: `csi-s3`. Also select the Create a storage class option.
- Secret name: `csi-s3-secret`. Also select the Create a secret option.
Leave the default values for the other parameters.
After installing the application, you can create static and dynamic `PersistentVolume` objects to use Object Storage buckets.
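To make sure the application created the resources it needs, you can query them with kubectl. This quick check assumes you kept the default names `csi-s3` and `csi-s3-secret` suggested above:

```bash
# Storage class created by the application.
kubectl get storageclass csi-s3

# Secret with the static access key in the kube-system namespace.
kubectl get secret csi-s3-secret --namespace=kube-system
```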
To install the driver manually from the GitHub repository instead:

- Create a file named `secret.yaml` and specify the Container Storage Interface access settings in it:

  ```yaml
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    namespace: kube-system
    name: csi-s3-secret
  stringData:
    accessKeyID: <access_key_ID>
    secretAccessKey: <secret_key>
    endpoint: https://storage.yandexcloud.net
  ```
  In the `accessKeyID` and `secretAccessKey` fields, specify the ID and secret key you obtained previously.

- Create a file named `storageclass.yaml` with a description of the storage class:

  ```yaml
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: csi-s3
  provisioner: ru.yandex.s3.csi
  parameters:
    mounter: geesefs
    options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666"
    bucket: <optionally:_existing_bucket_name>
    csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
    csi.storage.k8s.io/provisioner-secret-namespace: kube-system
    csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
    csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-stage-secret-namespace: kube-system
    csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  ```
  To use an existing bucket, specify its name in the `bucket` parameter. This setting is only relevant for dynamic `PersistentVolume` objects.
- Clone the GitHub repository containing the current Container Storage Interface driver:

  ```bash
  git clone https://github.com/yandex-cloud/k8s-csi-s3.git
  ```
- Create resources for Container Storage Interface and your storage class:

  ```bash
  kubectl create -f secret.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/provisioner.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/driver.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/csi-s3.yaml && \
  kubectl create -f storageclass.yaml
  ```
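After applying the manifests, it is worth checking that the driver components have started and the storage class exists. A quick sanity check that does not rely on exact resource names:

```bash
# CSI driver and provisioner pods should be in the Running state.
kubectl get pods --namespace=kube-system | grep csi

# Storage class created from storageclass.yaml.
kubectl get storageclass csi-s3
```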
After installing the Container Storage Interface driver and configuring your storage class, you can create static and dynamic `PersistentVolume` objects to use Object Storage buckets.
Container Storage Interface usage
With Container Storage Interface configured, there are certain things to consider when creating static and dynamic `PersistentVolume` objects.
Dynamic PersistentVolume
For a dynamic `PersistentVolume`:
- Specify the name of the storage class in the `spec.storageClassName` parameter when creating a `PersistentVolumeClaim`.
- If required, specify the bucket name in the `bucket` parameter (or the Shared S3 bucket for volumes field in the Yandex Cloud Marketplace settings) when creating a storage class. This affects Container Storage Interface behavior:
  - If you specify the bucket name when configuring your storage class, Container Storage Interface will create a separate folder in this bucket for each created `PersistentVolume`.
  - If you skip the bucket name, Container Storage Interface will create a separate bucket for each created `PersistentVolume`.
See also the example of creating a dynamic `PersistentVolume`.
Static PersistentVolume
For a static `PersistentVolume`:
- When creating a `PersistentVolumeClaim`, set an empty value for the `spec.storageClassName` parameter.
- Then, specify the name of the bucket or bucket directory in the `spec.csi.volumeHandle` parameter. If no such bucket exists, create one.

  Note

  Deleting such a `PersistentVolume` will not automatically delete its associated bucket.

- To update GeeseFS options for working with a bucket, specify them in the `spec.csi.volumeAttributes.options` parameter when creating a `PersistentVolume`. For example, in the `--uid` option, you can specify the ID of the owner of all the files in a storage. To get a list of GeeseFS options, run the `geesefs -h` command or refer to the relevant GitHub repository.

  The GeeseFS options specified under `parameters.options` (or the GeeseFS mount options field in the Yandex Cloud Marketplace settings) in `StorageClass` are ignored for a static `PersistentVolume`. For more information, see the Kubernetes documentation.
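If GeeseFS is installed on the machine you work from, you can list the available mount options locally before putting them into `spec.csi.volumeAttributes.options`:

```bash
# Print the full list of GeeseFS mount options (--uid, --dir-mode, --memory-limit, and so on).
geesefs -h
```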
See also the example of creating a static `PersistentVolume`.
Use cases
Dynamic PersistentVolume
To use Container Storage Interface with a dynamic `PersistentVolume`:
- Create a `PersistentVolumeClaim`:
  - Create a file named `pvc-dynamic.yaml` with the `PersistentVolumeClaim` description:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-dynamic
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-s3
    ```

    You can change the requested storage size as needed in the `spec.resources.requests.storage` parameter value.
  - Create the `PersistentVolumeClaim`:

    ```bash
    kubectl create -f pvc-dynamic.yaml
    ```

  - Make sure the `PersistentVolumeClaim` status changed to `Bound`:

    ```bash
    kubectl get pvc csi-s3-pvc-dynamic
    ```

    Result:

    ```text
    NAME                 STATUS   VOLUME                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-dynamic   Bound    pvc-<dynamic_bucket_name>   5Gi        RWX            csi-s3         73m
    ```
- Create a pod to test your dynamic `PersistentVolume`:
  - Create a file named `pod-dynamic.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-nginx-dynamic
      namespace: default
    spec:
      containers:
        - name: csi-s3-test-nginx
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/s3
              name: webroot
      volumes:
        - name: webroot
          persistentVolumeClaim:
            claimName: csi-s3-pvc-dynamic
            readOnly: false
    ```
  - Create a pod to mount a bucket for your dynamic `PersistentVolume`:

    ```bash
    kubectl create -f pod-dynamic.yaml
    ```

  - Make sure the pod status changed to `Running`:

    ```bash
    kubectl get pods
    ```
- Create the `/usr/share/nginx/html/s3/hello_world` file in the container. To do this, run the following command in the pod:

  ```bash
  kubectl exec -ti csi-s3-test-nginx-dynamic -- touch /usr/share/nginx/html/s3/hello_world
  ```
- Make sure that the file is in the bucket:
  - Go to the folder page and select Object Storage.
  - Click the `pvc-<dynamic_bucket_name>` bucket. If you specified a bucket name when configuring your storage class, open that bucket and then the `pvc-<dynamic_bucket_name>` folder inside it.

  For a command-line alternative to the console check, see the sketch after this procedure.
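If you prefer the command line to the management console, the sketch below shows one way to find the provisioned bucket and list its contents. It assumes the AWS CLI is installed and configured with the static access key created earlier; `<bucket_name>` is the bucket reported in the `volumeHandle` output:

```bash
# Find the PersistentVolume bound to the claim and print its volume handle.
# The handle contains the bucket name and, if a shared bucket is used, the folder path.
PV_NAME=$(kubectl get pvc csi-s3-pvc-dynamic --output=jsonpath='{.spec.volumeName}')
kubectl get pv "$PV_NAME" --output=jsonpath='{.spec.csi.volumeHandle}{"\n"}'

# List the bucket contents through the S3-compatible API.
aws --endpoint-url=https://storage.yandexcloud.net \
  s3 ls --recursive s3://<bucket_name>/
```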
Static PersistentVolume
To use Container Storage Interface with a static `PersistentVolume`:
- Create a `PersistentVolumeClaim`:
  - Create a file named `pvc-static.yaml` with the `PersistentVolumeClaim` description:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-static
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""
    ```

    For a static `PersistentVolume`, you do not need to specify the storage class name in the `spec.storageClassName` parameter. You can change the requested storage size as needed in the `spec.resources.requests.storage` parameter value.
  - Create a file named `pv-static.yaml` with the static `PersistentVolume` description, and specify the `volumeHandle` parameter value in the file:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: s3-volume
    spec:
      storageClassName: csi-s3
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      claimRef:
        namespace: default
        name: csi-s3-pvc-static
      csi:
        driver: ru.yandex.s3.csi
        volumeHandle: "<static_bucket_name>/<optionally:_path_to_folder_inside_bucket>"
        controllerPublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodePublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodeStageSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        volumeAttributes:
          capacity: 10Gi
          mounter: geesefs
          options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666 --uid=1001"
    ```
    Here, the GeeseFS settings for working with the bucket differ from those in the `StorageClass`: the additional `--uid` option sets the ID of the owner of all files in the storage to `1001`. See above for more information on configuring GeeseFS for a static `PersistentVolume`.
  - Create the `PersistentVolumeClaim`:

    ```bash
    kubectl create -f pvc-static.yaml
    ```
  - Create a static `PersistentVolume`:

    ```bash
    kubectl create -f pv-static.yaml
    ```
  - Make sure the `PersistentVolumeClaim` status changed to `Bound`:

    ```bash
    kubectl get pvc csi-s3-pvc-static
    ```

    Result:

    ```text
    NAME                STATUS   VOLUME                    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-static   Bound    <PersistentVolume_name>   10Gi       RWX            csi-s3         73m
    ```
- Create a pod to test your static `PersistentVolume`:
  - Create a file named `pod-static.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-nginx-static
      namespace: default
    spec:
      containers:
        - name: csi-s3-test-nginx-static
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/s3
              name: s3-volume
      volumes:
        - name: s3-volume
          persistentVolumeClaim:
            claimName: csi-s3-pvc-static
            readOnly: false
    ```
  - Create a pod to mount the bucket for your static `PersistentVolume`:

    ```bash
    kubectl create -f pod-static.yaml
    ```
  - Make sure the pod has entered the `Running` state:

    ```bash
    kubectl get pods
    ```
- Create the `/usr/share/nginx/html/s3/hello_world_static` file in the container. To do this, run the following command in the pod:

  ```bash
  kubectl exec -ti csi-s3-test-nginx-static -- touch /usr/share/nginx/html/s3/hello_world_static
  ```
- Make sure that the file is in the bucket:
  - Go to the folder page and select Object Storage.
  - Click the `<static_bucket_name>` bucket. If you specified a path to a folder inside the bucket in the static `PersistentVolume` description, open that folder first.
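When you are done testing, you can delete the test resources with kubectl. As noted above, deleting a static `PersistentVolume` does not delete the associated bucket, so remove the bucket separately if you no longer need it:

```bash
# Delete the test pod, the claim, and the static PersistentVolume.
kubectl delete -f pod-static.yaml
kubectl delete -f pvc-static.yaml
kubectl delete -f pv-static.yaml
```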