Working with snapshots
Managed Service for Kubernetes supports snapshots, which are point-in-time copies of PersistentVolumes. For more information about snapshots, see the Kubernetes documentation.
To create a snapshot and then restore it:

- Prepare a test environment.
- Create a snapshot.
- Restore objects from the snapshot.

If you no longer need the resources you created, delete them.
Getting started
- Create Kubernetes resources:

  Manually

  - Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  - Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.
  Terraform

  - If you do not have Terraform yet, install it.

  - Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
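    For example, you can export the credentials as environment variables (a sketch; the YC_* variable names are the Yandex Cloud Terraform provider's standard environment variables, and the placeholder values are yours to fill in):

    ```bash
    # Credentials picked up by the Yandex Cloud Terraform provider
    export YC_TOKEN="<OAuth_or_IAM_token>"
    export YC_CLOUD_ID="<cloud_ID>"
    export YC_FOLDER_ID="<folder_ID>"
    ```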
  - Configure and initialize the provider. There is no need to create the provider configuration file manually: you can download it.

  - Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
  - Download the k8s-cluster.tf cluster configuration file to the same working directory. The file describes:

    - The network.
    - The subnet.
    - The Managed Service for Kubernetes cluster.
    - The service account required to create the Managed Service for Kubernetes cluster and node group.
    - The security groups containing the rules required for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
  - Specify the folder ID in the configuration file.
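    For example (a hypothetical layout; check the actual variable name used in the downloaded k8s-cluster.tf):

    ```hcl
    locals {
      folder_id = "<folder_ID>" # ID of the folder where the resources will be created
    }
    ```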
  - Make sure the Terraform configuration files are correct using this command:

    ```bash
    terraform validate
    ```

    If there are any errors in the configuration files, Terraform will point them out.
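    If the configuration is valid, the command typically prints a message like this:

    ```text
    Success! The configuration is valid.
    ```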
  - Create the required infrastructure:

    - Run the command to view the planned changes:

      ```bash
      terraform plan
      ```

      If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a verification step: no resources are changed yet.
    - If you are happy with the planned changes, apply them:

      - Run this command:

        ```bash
        terraform apply
        ```

      - Confirm the creation of the resources.

      - Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check the new resources and their settings in the management console.
- Install kubectl and configure it to work with the created cluster.
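  For example, with the Yandex Cloud CLI you can add the cluster credentials to your kubectl configuration (a sketch; substitute your cluster name):

  ```bash
  # Add credentials for the new cluster to ~/.kube/config
  yc managed-kubernetes cluster get-credentials <cluster_name> --external
  ```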
Prepare a test environment
To test snapshots, you will create a PersistentVolumeClaim and a pod that simulates a workload.
- Create a file named 01-pvc.yaml with the PersistentVolumeClaim manifest:

  ```yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-dynamic
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: yc-network-hdd
    resources:
      requests:
        storage: 5Gi
  ```
- Create the PersistentVolumeClaim:

  ```bash
  kubectl apply -f 01-pvc.yaml
  ```
- Make sure the PersistentVolumeClaim has been created and its status is Pending:

  ```bash
  kubectl get pvc pvc-dynamic
  ```
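  The output should look something like this (the claim stays Pending until a pod consumes it, since the yc-network-hdd storage class presumably binds volumes on first use):

  ```text
  NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
  pvc-dynamic   Pending                                      yc-network-hdd   4s
  ```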
- Create a file named 02-pod.yaml with the pod-source pod manifest:

  ```yaml
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-source
  spec:
    containers:
      - name: app
        image: ubuntu
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
        volumeMounts:
          - name: persistent-storage
            mountPath: /data
    volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: pvc-dynamic
  ```

  The pod container will write the current date and time to the /data/out.txt file.
- Create a pod named pod-source:

  ```bash
  kubectl apply -f 02-pod.yaml
  ```
- Make sure the pod status changed to Running:

  ```bash
  kubectl get pod pod-source
  ```
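  A sample of the expected output (age will differ):

  ```text
  NAME         READY   STATUS    RESTARTS   AGE
  pod-source   1/1     Running   0          15s
  ```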
- Make sure the date and time are written to the /data/out.txt file. To do this, run the following command on the pod:

  ```bash
  kubectl exec pod-source -- tail /data/out.txt
  ```

  Result:

  ```text
  Thu Feb 3 04:55:21 UTC 2022
  Thu Feb 3 04:55:26 UTC 2022
  ...
  ```
Create a snapshot
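Snapshots are created through the cluster's CSI driver and reference a VolumeSnapshotClass. Before creating a snapshot, you can check which snapshot classes are available (a quick sanity check; the yc-csi-snapclass class referenced in the manifest below should be in the list):

```bash
# List the VolumeSnapshotClass objects available in the cluster
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
```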
- Create a file named 03-snapshot.yaml with the snapshot manifest:

  ```yaml
  ---
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: new-snapshot-test
  spec:
    volumeSnapshotClassName: yc-csi-snapclass
    source:
      persistentVolumeClaimName: pvc-dynamic
  ```
- Create the snapshot:

  ```bash
  kubectl apply -f 03-snapshot.yaml
  ```
- Check that the snapshot has been created:

  ```bash
  kubectl get volumesnapshots.snapshot.storage.k8s.io
  ```
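  In the output, the READYTOUSE column should eventually turn true (an abridged sample; the exact columns depend on the snapshot controller version):

  ```text
  NAME                READYTOUSE   SOURCEPVC     RESTORESIZE   SNAPSHOTCLASS      AGE
  new-snapshot-test   true         pvc-dynamic   5Gi           yc-csi-snapclass   11s
  ```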
- Make sure the VolumeSnapshotContent object has been created:

  ```bash
  kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
  ```
Restore objects from the snapshot
When you restore objects from the snapshot, the following are created in the cluster:

- A PersistentVolumeClaim object named pvc-restore.
- A pod named pod-restore with entries in the /data/out.txt file.

To restore the snapshot:
- Create a file named 04-restore-snapshot.yaml with the manifest of a new PersistentVolumeClaim:

  ```yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-restore
  spec:
    storageClassName: yc-network-hdd
    dataSource:
      name: new-snapshot-test
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  ```
  Tip

  You can change the size of the PersistentVolumeClaim being created. To do this, specify the desired size in the spec.resources.requests.storage field. Note that the requested size must not be smaller than the size of the source volume (5Gi in this example).
- Create the new PersistentVolumeClaim:

  ```bash
  kubectl apply -f 04-restore-snapshot.yaml
  ```
- Make sure the PersistentVolumeClaim has been created and its status is Pending:

  ```bash
  kubectl get pvc pvc-restore
  ```
- Create a file named 05-pod-restore.yaml with the manifest of a new pod named pod-restore:

  ```yaml
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-restore
  spec:
    containers:
      - name: app-restore
        image: ubuntu
        command: ["/bin/sh"]
        args: ["-c", "while true; do sleep 5; done"]
        volumeMounts:
          - name: persistent-storage-r
            mountPath: /data
    volumes:
      - name: persistent-storage-r
        persistentVolumeClaim:
          claimName: pvc-restore
  ```

  The new pod container does not write anything to the /data/out.txt file.
- Create a pod named pod-restore:

  ```bash
  kubectl apply -f 05-pod-restore.yaml
  ```
- Make sure the pod status changed to Running:

  ```bash
  kubectl get pod pod-restore
  ```
- Make sure the new PersistentVolumeClaim switched to the Bound status:

  ```bash
  kubectl get pvc pvc-restore
  ```
- Make sure the /data/out.txt file on the new pod contains the records that the pod-source pod container added to the file before the snapshot was created:

  ```bash
  kubectl exec pod-restore -- tail /data/out.txt
  ```

  Result:

  ```text
  Thu Feb 3 04:55:21 UTC 2022
  Thu Feb 3 04:55:26 UTC 2022
  ...
  ```
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster:

  Manually

  - Delete the Managed Service for Kubernetes cluster.

  Terraform

  - In the command line, go to the directory that contains the current Terraform configuration file with the infrastructure plan.

  - Delete the resources using this command:

    ```bash
    terraform destroy
    ```

    Alert

    Terraform will delete all the resources you created using it, such as clusters, networks, subnets, and VMs.

  - Confirm the deletion of resources.
- If you reserved a public static IP address for the cluster, delete it.
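  For example, with the Yandex Cloud CLI (a sketch; substitute the name or ID of your reserved address):

  ```bash
  # Find the reserved address, then delete it
  yc vpc address list
  yc vpc address delete <address_name_or_ID>
  ```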