Working with snapshots
Managed Service for Kubernetes supports snapshots, which are point-in-time copies of PersistentVolume objects. For more information about snapshots, see this Kubernetes article.
To create a snapshot and then restore from it:

1. Set up a test environment.
2. Create a snapshot.
3. Restore objects from the snapshot.

If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
- Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
- Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).
Getting started
1. Create Kubernetes resources:

   Manually

   1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster, as well as the services and applications running in it.

   2. Create a Managed Service for Kubernetes cluster and a node group with any suitable configuration. When creating them, specify the preconfigured security groups.

   Terraform

   1. If you do not have Terraform yet, install it.

   2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

   3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

   4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

   5. Download the k8s-cluster.tf cluster configuration file to the same working directory. This file describes:

      - Network.
      - Subnet.
      - Managed Service for Kubernetes cluster.
      - Service account required to create the Managed Service for Kubernetes cluster and node group.
      - Security groups with the rules required for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster, as well as the services and applications running in it.

   6. Specify the folder ID in the configuration file.

   7. Make sure the Terraform configuration files are correct using this command:

      ```bash
      terraform validate
      ```

      Terraform will show any errors found in your configuration files.

   8. Create the required infrastructure:

      1. Run this command to view the planned changes:

         ```bash
         terraform plan
         ```

         If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

         1. Run this command:

            ```bash
            terraform apply
            ```

         2. Confirm updating the resources.

         3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

2. Install kubectl and configure it to work with the new cluster.
Set up a test environment
To test snapshots, you will create a PersistentVolumeClaim and a pod to simulate the workload.
1. Create a file named 01-pvc.yaml with the PersistentVolumeClaim manifest:

   ```yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-dynamic
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: yc-network-hdd
     resources:
       requests:
         storage: 5Gi
   ```

2. Create the PersistentVolumeClaim:

   ```bash
   kubectl apply -f 01-pvc.yaml
   ```

3. Make sure the PersistentVolumeClaim was created and is in the Pending state:

   ```bash
   kubectl get pvc pvc-dynamic
   ```

4. Create a file named 02-pod.yaml with the pod-source pod manifest:

   ```yaml
   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: pod-source
   spec:
     containers:
       - name: app
         image: ubuntu
         command: ["/bin/sh"]
         args:
           - "-c"
           - "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"
         volumeMounts:
           - name: persistent-storage
             mountPath: /data
     volumes:
       - name: persistent-storage
         persistentVolumeClaim:
           claimName: pvc-dynamic
   ```

   The pod container will write the current date and time to the /data/out.txt file.

5. Create a pod named pod-source:

   ```bash
   kubectl apply -f 02-pod.yaml
   ```

6. Make sure the pod status changed to Running:

   ```bash
   kubectl get pod pod-source
   ```

7. Check that /data/out.txt contains lines with the date and time. To do so, run this command on the pod:

   ```bash
   kubectl exec pod-source -- tail /data/out.txt
   ```

   Result:

   ```text
   Thu Feb 3 04:55:21 UTC 2022
   Thu Feb 3 04:55:26 UTC 2022
   ...
   ```
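If you want to see what the container's write loop produces without waiting on the cluster, you can run a bounded version of it locally. This is only a sketch: it runs three iterations with no sleep, and a temporary file stands in for the pod's /data/out.txt.

```shell
#!/bin/sh
# Temporary file standing in for the pod's /data/out.txt.
OUT="$(mktemp)"

# Bounded version of the container loop: append one UTC timestamp per iteration.
for i in 1 2 3; do
  echo "$(date -u)" >> "$OUT"
done

# Show the accumulated lines, one timestamp each.
cat "$OUT"
```

Each line has the same shape as the entries shown in the result above.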
Create a snapshot
1. Create a file named 03-snapshot.yaml with the snapshot manifest:

   ```yaml
   ---
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshot
   metadata:
     name: new-snapshot-test
   spec:
     volumeSnapshotClassName: yc-csi-snapclass
     source:
       persistentVolumeClaimName: pvc-dynamic
   ```

2. Create the snapshot:

   ```bash
   kubectl apply -f 03-snapshot.yaml
   ```

3. Make sure the snapshot was created:

   ```bash
   kubectl get volumesnapshots.snapshot.storage.k8s.io
   ```

4. Make sure the VolumeSnapshotContent object was created:

   ```bash
   kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
   ```
Restore objects from the snapshot
When restoring objects from the snapshot, the following will be created:

- PersistentVolumeClaim object named pvc-restore.
- Pod named pod-restore with entries in /data/out.txt.
To restore from the snapshot:
1. Create a file named 04-restore-snapshot.yaml with the new PersistentVolumeClaim manifest:

   ```yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-restore
   spec:
     storageClassName: yc-network-hdd
     dataSource:
       name: new-snapshot-test
       kind: VolumeSnapshot
       apiGroup: snapshot.storage.k8s.io
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
   ```

   Tip

   You can resize the PersistentVolumeClaim being created. To do this, specify its new size in the spec.resources.requests.storage setting.

2. Create the new PersistentVolumeClaim:

   ```bash
   kubectl apply -f 04-restore-snapshot.yaml
   ```

3. Make sure the PersistentVolumeClaim was created and is in the Pending state:

   ```bash
   kubectl get pvc pvc-restore
   ```

4. Create a file named 05-pod-restore.yaml with the new pod-restore pod manifest:

   ```yaml
   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: pod-restore
   spec:
     containers:
       - name: app-restore
         image: ubuntu
         command: ["/bin/sh"]
         args: ["-c", "while true; do sleep 5; done"]
         volumeMounts:
           - name: persistent-storage-r
             mountPath: /data
     volumes:
       - name: persistent-storage-r
         persistentVolumeClaim:
           claimName: pvc-restore
   ```

   The new pod container will not perform any actions with /data/out.txt.

5. Create a pod named pod-restore:

   ```bash
   kubectl apply -f 05-pod-restore.yaml
   ```

6. Make sure the pod status changed to Running:

   ```bash
   kubectl get pod pod-restore
   ```

7. Make sure the new PersistentVolumeClaim status changed to Bound:

   ```bash
   kubectl get pvc pvc-restore
   ```

8. Make sure /data/out.txt on the new pod contains all the entries the pod-source container added before the snapshot was created:

   ```bash
   kubectl exec pod-restore -- tail /data/out.txt
   ```

   Result:

   ```text
   Thu Feb 3 04:55:21 UTC 2022
   Thu Feb 3 04:55:26 UTC 2022
   ...
   ```
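The behavior verified above, where the restored volume contains exactly the data that existed at snapshot time even though the source keeps receiving writes, can be illustrated locally with plain files. This is only a sketch of the semantics: the temporary files stand in for the source and restored volumes.

```shell
#!/bin/sh
# SRC stands in for pod-source's volume, SNAP for the snapshot/restored copy.
SRC="$(mktemp)"
printf 'line1\nline2\n' > "$SRC"   # entries written before the snapshot

SNAP="$(mktemp)"
cp "$SRC" "$SNAP"                  # "snapshot": a point-in-time copy

echo 'line3' >> "$SRC"             # the source keeps writing afterwards

# The copy keeps only the pre-snapshot entries (2 lines),
# while the source has continued to grow (3 lines).
wc -l < "$SNAP"
wc -l < "$SRC"
```

This mirrors why pod-restore's /data/out.txt shows the entries made by pod-source up to the moment the snapshot was taken, but none made later.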
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:

1. Delete the Managed Service for Kubernetes cluster:

   Manually

   Delete the Managed Service for Kubernetes cluster.

   Terraform

   1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory contains no Terraform manifests with resources you want to keep. Terraform deletes all resources created from the manifests in the current directory.

   2. Delete the resources:

      1. Run this command:

         ```bash
         terraform destroy
         ```

      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

2. Delete the cluster's public static IP address if you reserved one.