Integration with Object Storage
Container Storage Interface allows you to dynamically provision Yandex Object Storage buckets and mount them in Managed Service for Kubernetes cluster pods. You can mount existing buckets or create new ones.
To use Container Storage Interface capabilities, set up a runtime environment and then set up Container Storage Interface as described below.
Setting up a runtime environment
- Create a service account with the `storage.editor` role.
- Create a static access key for the service account. Save the key ID and secret key, as you will need them when installing Container Storage Interface.
- Create an Object Storage bucket that will be mounted to a `PersistentVolume`. Save the bucket name, as you will need it when installing Container Storage Interface.
- Install `kubectl` and configure it to work with the created cluster.
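If you prefer the command line, you can create these prerequisites with the Yandex Cloud CLI. Below is a minimal sketch, not the official procedure: the service account name `csi-s3-sa`, the folder ID, and the bucket name are placeholders, and flag details may vary between CLI versions.

```bash
# Create a service account and grant it the storage.editor role on the folder
yc iam service-account create --name csi-s3-sa
yc resource-manager folder add-access-binding <folder_ID> \
  --role storage.editor \
  --subject serviceAccount:<service_account_ID>

# Create a static access key; save the key ID and secret key from the output
yc iam access-key create --service-account-name csi-s3-sa

# Create the bucket that will back the PersistentVolume
yc storage bucket create --name <bucket_name>
```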
Set up Container Storage Interface
Install the Container Storage Interface application for S3 by following this step-by-step guide. While installing the application, specify the following parameters:
- Namespace: `kube-system`.
- S3 key ID: Copy the key ID of the created service account into this field.
- S3 secret key: Copy the secret key of the created service account into this field.
- Shared S3 bucket for volumes: To use an existing bucket, specify its name. This setting is only relevant for dynamic `PersistentVolumes`.
- Storage class name: `csi-s3`. Also select the Create a storage class option.
- Secret name: `csi-s3-secret`. Also select the Create a secret option.
Leave the default values for the other parameters.
After installing the application, you can create static and dynamic `PersistentVolumes` to use Object Storage buckets.
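If you kept the names suggested above, you can sanity-check the installation with `kubectl` (this is an optional check, not part of the application setup):

```bash
# Storage class created by the application
kubectl get storageclass csi-s3

# Secret with the S3 access keys in the kube-system namespace
kubectl -n kube-system get secret csi-s3-secret
```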
Alternatively, you can install and configure Container Storage Interface manually:

- Create a file named `secret.yaml` and specify the Container Storage Interface access settings in it:

  ```yaml
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    namespace: kube-system
    name: csi-s3-secret
  stringData:
    accessKeyID: <access_key_ID>
    secretAccessKey: <secret_key>
    endpoint: https://storage.yandexcloud.net
  ```

  In the `accessKeyID` and `secretAccessKey` fields, specify the key ID and secret key you obtained earlier.
- Create a file named `storageclass.yaml` with a description of the storage class:

  ```yaml
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: csi-s3
  provisioner: ru.yandex.s3.csi
  parameters:
    mounter: geesefs
    options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666"
    bucket: <optional:_existing_bucket_name>
    csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
    csi.storage.k8s.io/provisioner-secret-namespace: kube-system
    csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
    csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-stage-secret-namespace: kube-system
    csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  ```

  To use an existing bucket, specify its name in the `bucket` parameter. This setting is only relevant for dynamic `PersistentVolumes`.
- Clone the GitHub repository containing the current Container Storage Interface driver:

  ```bash
  git clone https://github.com/yandex-cloud/k8s-csi-s3.git
  ```

- Create resources for Container Storage Interface and your storage class:

  ```bash
  kubectl create -f secret.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/provisioner.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/driver.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/csi-s3.yaml && \
  kubectl create -f storageclass.yaml
  ```
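To confirm that the driver components started, you can list the CSI pods and the storage class. This is an informal check; the exact pod names depend on the manifests in the repository:

```bash
# CSI controller and node pods should be Running in the kube-system namespace
kubectl -n kube-system get pods | grep csi-s3

# Storage class created from storageclass.yaml
kubectl get storageclass csi-s3
```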
After installing the Container Storage Interface driver and configuring your storage class, you can create static and dynamic `PersistentVolumes` to use Object Storage buckets.
Container Storage Interface usage
With Container Storage Interface configured, there are certain things to note about creating static and dynamic `PersistentVolumes`.
Dynamic PersistentVolume
For a dynamic `PersistentVolume`:
- Specify the name of the storage class in the `spec.storageClassName` parameter when creating a `PersistentVolumeClaim`.
- If required, specify the bucket name in the `bucket` parameter (or the Shared S3 bucket for volumes field in the Yandex Cloud Marketplace settings) when creating a storage class. This affects Container Storage Interface behavior:
  - If you specify the bucket name when configuring your storage class, Container Storage Interface will create a separate folder in this bucket for each `PersistentVolume` it creates.
  - If you omit the bucket name, Container Storage Interface will create a separate bucket for each `PersistentVolume` it creates.

For more information, refer to the example of creating a dynamic `PersistentVolume`.
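To see which bucket or bucket folder backs a dynamically provisioned volume, you can read the volume handle of the bound `PersistentVolume`. A small sketch; the claim name `csi-s3-pvc-dynamic` is taken from the example below:

```bash
# Find the PersistentVolume bound to the claim and print its S3 volume handle
PV_NAME=$(kubectl get pvc csi-s3-pvc-dynamic -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV_NAME" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
```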
Static PersistentVolume
For a static `PersistentVolume`:
- Leave the `spec.storageClassName` parameter empty when creating a `PersistentVolumeClaim`.
- Specify the name of the bucket, or of a folder in the bucket, in the `spec.csi.volumeHandle` parameter when creating a `PersistentVolume`. If no such bucket exists, create one.

  Note

  Deleting this type of `PersistentVolume` will not automatically delete its associated bucket.

- To change the GeeseFS options used for working with a bucket, specify them in the `spec.csi.volumeAttributes.options` parameter when creating a `PersistentVolume`. For example, the `--uid` option sets the ID of the user who owns all files in the storage. To get the list of GeeseFS options, run the `geesefs -h` command or see the GitHub repository.

  The GeeseFS options specified under `parameters.options` (or in the GeeseFS mount options field in the Yandex Cloud Marketplace settings) in `StorageClass` are ignored for static `PersistentVolumes`. For more information, see the Kubernetes documentation.
For more information, refer to the example of creating a static `PersistentVolume`.
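To double-check which GeeseFS options a static `PersistentVolume` actually carries, you can read them back from its spec. A sketch assuming the `s3-volume` name used in the example below:

```bash
# Print the GeeseFS mount options recorded in the static PersistentVolume
kubectl get pv s3-volume -o jsonpath='{.spec.csi.volumeAttributes.options}{"\n"}'
```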
Use cases
Dynamic PersistentVolume
To use Container Storage Interface with a dynamic `PersistentVolume`:
- Create a `PersistentVolumeClaim`:

  - Create a file named `pvc-dynamic.yaml` with a description of your `PersistentVolumeClaim`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-dynamic
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-s3
    ```

    If necessary, change the requested storage size in the `spec.resources.requests.storage` parameter.

  - Create the `PersistentVolumeClaim`:

    ```bash
    kubectl create -f pvc-dynamic.yaml
    ```

  - Make sure your `PersistentVolumeClaim` status has changed to `Bound`:

    ```bash
    kubectl get pvc csi-s3-pvc-dynamic
    ```

    Result:

    ```
    NAME                 STATUS   VOLUME                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-dynamic   Bound    pvc-<dynamic_bucket_name>   5Gi        RWX            csi-s3         73m
    ```
- Create a pod to test your dynamic `PersistentVolume`:

  - Create a file named `pod-dynamic.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-nginx-dynamic
      namespace: default
    spec:
      containers:
        - name: csi-s3-test-nginx
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/s3
              name: webroot
      volumes:
        - name: webroot
          persistentVolumeClaim:
            claimName: csi-s3-pvc-dynamic
            readOnly: false
    ```

  - Create a pod that mounts a bucket through your dynamic `PersistentVolume`:

    ```bash
    kubectl create -f pod-dynamic.yaml
    ```

  - Make sure the pod status changed to `Running`:

    ```bash
    kubectl get pods
    ```

- Create the `/usr/share/nginx/html/s3/hello_world` file in the container. To do this, run the following command on the pod:

  ```bash
  kubectl exec -ti csi-s3-test-nginx-dynamic -- touch /usr/share/nginx/html/s3/hello_world
  ```

- Make sure that the file is in the bucket:

  - Go to the folder page and select Object Storage.
  - Click the `pvc-<dynamic_bucket_name>` bucket. If you specified the bucket name when configuring your storage class, open that bucket and then the `pvc-<dynamic_bucket_name>` folder inside it.

  Alternatively, check from the command line as shown in the sketch after this list.
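You can also verify the result without opening the management console: listing the mounted directory inside the pod shows the objects GeeseFS sees in the bucket. A sketch based on the pod created above:

```bash
# The hello_world object created above should appear in the mounted bucket
kubectl exec csi-s3-test-nginx-dynamic -- ls /usr/share/nginx/html/s3
```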
Static PersistentVolume
To use Container Storage Interface with a static `PersistentVolume`:
- Create a `PersistentVolumeClaim`:

  - Create a file named `pvc-static.yaml` with a description of your `PersistentVolumeClaim`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-static
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""
    ```

    For a static `PersistentVolume`, you do not need to specify the storage class name in the `spec.storageClassName` parameter. If necessary, change the requested storage size in the `spec.resources.requests.storage` parameter.

  - Create a file named `pv-static.yaml` containing a description of your static `PersistentVolume`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: s3-volume
    spec:
      storageClassName: csi-s3
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      claimRef:
        namespace: default
        name: csi-s3-pvc-static
      csi:
        driver: ru.yandex.s3.csi
        volumeHandle: "<static_bucket_name>/<optional:_path_to_the_folder_in_the_bucket>"
        controllerPublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodePublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodeStageSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        volumeAttributes:
          capacity: 10Gi
          mounter: geesefs
          options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666 --uid=1001"
    ```

    In this example, the GeeseFS settings for working with the bucket differ from those in `StorageClass`: the `--uid` option is added, which sets `1001` as the ID of the user who owns all files in the storage. For more information about configuring GeeseFS for a static `PersistentVolume`, see this section.

  - Create the `PersistentVolumeClaim`:

    ```bash
    kubectl create -f pvc-static.yaml
    ```

  - Create the static `PersistentVolume`:

    ```bash
    kubectl create -f pv-static.yaml
    ```

  - Make sure your `PersistentVolumeClaim` status has changed to `Bound`:

    ```bash
    kubectl get pvc csi-s3-pvc-static
    ```

    Result:

    ```
    NAME                STATUS   VOLUME                    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-static   Bound    <PersistentVolume_name>   10Gi       RWX            csi-s3         73m
    ```
- Create a pod to test your static `PersistentVolume`:

  - Create a file named `pod-static.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-nginx-static
      namespace: default
    spec:
      containers:
        - name: csi-s3-test-nginx-static
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/s3
              name: s3-volume
      volumes:
        - name: s3-volume
          persistentVolumeClaim:
            claimName: csi-s3-pvc-static
            readOnly: false
    ```

  - Create a pod that mounts a bucket through your static `PersistentVolume`:

    ```bash
    kubectl create -f pod-static.yaml
    ```

  - Make sure the pod status changed to `Running`:

    ```bash
    kubectl get pods
    ```

- Create the `/usr/share/nginx/html/s3/hello_world_static` file in the container. To do this, run the following command on the pod:

  ```bash
  kubectl exec -ti csi-s3-test-nginx-static -- touch /usr/share/nginx/html/s3/hello_world_static
  ```

- Make sure that the file is in the bucket:

  - Go to the folder page and select Object Storage.
  - Click `<static_bucket_name>`. If you specified a path to a folder inside the bucket in the static `PersistentVolume` description, open that folder first.

  Alternatively, check from the command line as shown in the sketch after this list.
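As with the dynamic example, you can verify the result from the command line. Listing the directory with numeric owners should also show the `1001` UID set by the `--uid` option of the static `PersistentVolume`. A sketch based on the pod created above:

```bash
# -ln prints numeric UIDs; files should be owned by UID 1001
kubectl exec csi-s3-test-nginx-static -- ls -ln /usr/share/nginx/html/s3
```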