Working with snapshots
Managed Service for Kubernetes supports snapshots, which are point-in-time copies of a PersistentVolume. For more information about snapshots, see the Kubernetes documentation.
To create a snapshot and then restore from it:

1. Prepare a test environment.
2. Create a snapshot.
3. Restore objects from the snapshot.

If you no longer need the resources you created, delete them.
Getting started
1. Create Kubernetes resources:

   Manually

   1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

   2. Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.
   Terraform

   1. If you do not have Terraform yet, install it.

   2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

   3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
   4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
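      Once the configuration file is in place, initialize the working directory so Terraform downloads the required provider plugin. This is the standard Terraform initialization command, not a step specific to this tutorial:

      terraform init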
   5. Download the k8s-cluster.tf cluster configuration file to the same working directory. The file describes:

      - Network.
      - Subnet.
      - Managed Service for Kubernetes cluster.
      - Service account required to create the Managed Service for Kubernetes cluster and node group.
      - Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
   6. Specify the folder ID in the configuration file.

   7. Check that the Terraform configuration files are correct using this command:

      terraform validate

      If there are any errors in the configuration files, Terraform will point them out.
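      If the configuration is valid, Terraform reports this explicitly:

      Success! The configuration is valid.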
   8. Create the required infrastructure:

      1. Run the command to view the planned changes:

         terraform plan

         If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a test step; no resources are created.

      2. If you are happy with the planned changes, apply them:

         1. Run this command:

            terraform apply

         2. Confirm the update of resources.

         3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
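      When terraform apply completes, it prints a summary similar to the line below; the resource count depends on your configuration:

      Apply complete! Resources: 6 added, 0 changed, 0 destroyed.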
2. Install kubectl and configure it to work with the created cluster.
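   For example, if you use the yc command-line tool, you can add the cluster credentials to your kubeconfig and check connectivity as shown below; <cluster-name> is a placeholder for the name of your Managed Service for Kubernetes cluster:

   yc managed-kubernetes cluster get-credentials <cluster-name> --external
   kubectl cluster-info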
Prepare a test environment
To test snapshots, you will create a PersistentVolumeClaim and a pod that simulates a workload.
1. Create the 01-pvc.yaml file with the PersistentVolumeClaim manifest:

   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-dynamic
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: yc-network-hdd
     resources:
       requests:
         storage: 5Gi
2. Create a PersistentVolumeClaim:

   kubectl apply -f 01-pvc.yaml
3. Make sure the PersistentVolumeClaim has been created and its status is Pending:

   kubectl get pvc pvc-dynamic
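   The output should look similar to the following; the VOLUME and CAPACITY columns stay empty while the claim is Pending, because the disk is provisioned only when a pod first uses the claim:

   NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
   pvc-dynamic   Pending                                      yc-network-hdd   7s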
4. Create the 02-pod.yaml file with the pod-source pod manifest:

   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: pod-source
   spec:
     containers:
       - name: app
         image: ubuntu
         command: ["/bin/sh"]
         args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
         volumeMounts:
           - name: persistent-storage
             mountPath: /data
     volumes:
       - name: persistent-storage
         persistentVolumeClaim:
           claimName: pvc-dynamic

   The pod container will write the current date and time to the /data/out.txt file.
5. Create a pod named pod-source:

   kubectl apply -f 02-pod.yaml
6. Make sure the pod has entered the Running state:

   kubectl get pod pod-source
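   Expected output, give or take the AGE value:

   NAME         READY   STATUS    RESTARTS   AGE
   pod-source   1/1     Running   0          15s

   Once the pod is scheduled, pvc-dynamic should switch from Pending to Bound.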
7. Make sure the date and time are written to /data/out.txt. To do this, run the following command on the pod:

   kubectl exec pod-source -- tail /data/out.txt

   Result:

   Thu Feb 3 04:55:21 UTC 2022
   Thu Feb 3 04:55:26 UTC 2022
   ...
Create a snapshot
1. Create the 03-snapshot.yaml file with the snapshot manifest:

   ---
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshot
   metadata:
     name: new-snapshot-test
   spec:
     volumeSnapshotClassName: yc-csi-snapclass
     source:
       persistentVolumeClaimName: pvc-dynamic
2. Create a snapshot:

   kubectl apply -f 03-snapshot.yaml
3. Check that the snapshot has been created:

   kubectl get volumesnapshots.snapshot.storage.k8s.io
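   The output should list the new snapshot; READYTOUSE may show false for a short time while the snapshot is still being created (some columns are omitted here for brevity):

   NAME                READYTOUSE   SOURCEPVC     RESTORESIZE   SNAPSHOTCLASS      AGE
   new-snapshot-test   true         pvc-dynamic   5Gi           yc-csi-snapclass   14s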
-
Make sure the VolumeSnapshotContent
has been created:kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
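   A VolumeSnapshotContent object with an auto-generated snapcontent-* name should be bound to the snapshot (columns abridged; <id> stands for the generated identifier):

   NAME               READYTOUSE   DELETIONPOLICY   VOLUMESNAPSHOT      AGE
   snapcontent-<id>   true         Delete           new-snapshot-test   30s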
Restore objects from the snapshot
When restoring objects from the snapshot, the following will be created:

- A PersistentVolumeClaim object named pvc-restore.
- A pod named pod-restore with entries in /data/out.txt.
To restore the snapshot:
1. Create the 04-restore-snapshot.yaml file with the new PersistentVolumeClaim manifest:

   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-restore
   spec:
     storageClassName: yc-network-hdd
     dataSource:
       name: new-snapshot-test
       kind: VolumeSnapshot
       apiGroup: snapshot.storage.k8s.io
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
   Tip

   You can resize the new PersistentVolumeClaim: to do this, specify its new size in the spec.resources.requests.storage setting value. In the manifest above, the restored claim requests 10Gi, while the source claim requested only 5Gi.
2. Create a new PersistentVolumeClaim:

   kubectl apply -f 04-restore-snapshot.yaml
3. Make sure the PersistentVolumeClaim has been created and its status is Pending:

   kubectl get pvc pvc-restore
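   As with pvc-dynamic, the claim stays Pending until a pod uses it:

   NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
   pvc-restore   Pending                                      yc-network-hdd   5s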
4. Create the 05-pod-restore.yaml file with a manifest for the new pod, pod-restore:

   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: pod-restore
   spec:
     containers:
       - name: app-restore
         image: ubuntu
         command: ["/bin/sh"]
         args: ["-c", "while true; do sleep 5; done"]
         volumeMounts:
           - name: persistent-storage-r
             mountPath: /data
     volumes:
       - name: persistent-storage-r
         persistentVolumeClaim:
           claimName: pvc-restore

   The new pod container will not perform any actions with /data/out.txt.
5. Create a pod named pod-restore:

   kubectl apply -f 05-pod-restore.yaml
6. Make sure the pod has entered the Running state:

   kubectl get pod pod-restore
7. Make sure the new PersistentVolumeClaim has entered the Bound state:

   kubectl get pvc pvc-restore
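   Expected output; the VOLUME name is auto-generated, so it will differ:

   NAME          STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
   pvc-restore   Bound    pvc-<id>   10Gi       RWO            yc-network-hdd   2m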
8. Make sure the /data/out.txt file on the new pod contains the records that the pod-source pod container added to the file before the snapshot was created:

   kubectl exec pod-restore -- tail /data/out.txt

   Result:

   Thu Feb 3 04:55:21 UTC 2022
   Thu Feb 3 04:55:26 UTC 2022
   ...
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
1. Delete the Managed Service for Kubernetes cluster:

   Manually

   Delete the Managed Service for Kubernetes cluster.

   Terraform

   1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

   2. Delete the resources:

      1. Run this command:

         terraform destroy

      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.
2. Delete the cluster's public static IP address if you had reserved one.
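   If you use the yc command-line tool, the reserved address can be deleted as shown below; <address-name> is a placeholder for the name or ID of your static address:

   yc vpc address delete <address-name>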