Yandex Managed Service for Kubernetes


Working with snapshots

Written by
Yandex Cloud
Updated on November 21, 2025
  • Required paid resources
  • Getting started
  • Set up a test environment
  • Create a snapshot
  • Restore objects from the snapshot
  • Delete the resources you created

Managed Service for Kubernetes supports snapshots, which are point-in-time PersistentVolume copies. For more information about snapshots, see this Kubernetes article.
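Snapshots are created through a VolumeSnapshotClass; this tutorial later references the yc-csi-snapclass class without creating it, which indicates the cluster provides one out of the box. For reference only, a minimal sketch of such a class might look like the following; the name and driver values here are placeholders, not the actual Yandex Cloud ones:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass      # hypothetical name; the cluster already provides yc-csi-snapclass
driver: example.csi.driver.io  # placeholder: check `kubectl get csidrivers` for the real driver name
deletionPolicy: Delete         # delete the backing snapshot when the VolumeSnapshotContent is deleted
```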

To create a snapshot and then use it for restoring:

  1. Set up a test environment.
  2. Create a snapshot.
  3. Restore objects from the snapshot.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
  • Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
  • Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. Create Kubernetes resources:

    Manually
    Terraform
    1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    2. Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups you configured earlier.

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize the provider. You do not need to create the provider configuration file manually: you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the k8s-cluster.tf cluster configuration file to the same working directory. This file describes:

      • Network.

      • Subnet.

      • Managed Service for Kubernetes cluster.

      • Service account required to create the Managed Service for Kubernetes cluster and node group.

      • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    6. Specify the folder ID in the configuration file.

    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.
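      If the files are valid, the command typically prints a short confirmation:

      ```
      Success! The configuration is valid.
      ```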

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  2. Install kubectl and configure it to work with the new cluster.

Set up a test environment

To test snapshots, you will create a PersistentVolumeClaim and a pod to simulate the workload.

  1. Create the 01-pvc.yaml file with the PersistentVolumeClaim manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-dynamic
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: yc-network-hdd
      resources:
        requests:
          storage: 5Gi
    
  2. Create a PersistentVolumeClaim:

    kubectl apply -f 01-pvc.yaml
    
  3. Make sure the PersistentVolumeClaim was created and is Pending:

    kubectl get pvc pvc-dynamic
    
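    The Pending status is expected at this point: the yc-network-hdd storage class typically uses WaitForFirstConsumer volume binding, so the disk is provisioned only once a pod using the claim is scheduled. To print just the phase rather than the whole table, you can query it with a jsonpath template:

    ```shell
    # Prints only the claim's phase, e.g., Pending or Bound
    kubectl get pvc pvc-dynamic -o jsonpath='{.status.phase}'
    ```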
  4. Create the 02-pod.yaml file with the pod-source pod manifest:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-source
    spec:
      containers:
        - name: app
          image: ubuntu
          command: ["/bin/sh"]
          args:
            ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: pvc-dynamic
    

    The pod container will write the current date and time to the /data/out.txt file.

  5. Create a pod named pod-source:

    kubectl apply -f 02-pod.yaml
    
  6. Make sure the pod status changed to Running:

    kubectl get pod pod-source
    
  7. Check that the /data/out.txt file contains date and time entries. To do this, run the following command in the pod:

    kubectl exec pod-source -- tail /data/out.txt
    

    Result:

    Thu Feb 3 04:55:21 UTC 2022
    Thu Feb 3 04:55:26 UTC 2022
    ...
    
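    If the file is empty or the command fails at first, the container may still be starting. You can block until the pod reports ready and then repeat the check:

    ```shell
    # Wait up to two minutes for the pod to become Ready, then re-read the file
    kubectl wait --for=condition=Ready pod/pod-source --timeout=120s
    kubectl exec pod-source -- tail /data/out.txt
    ```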

Create a snapshot

  1. Create the 03-snapshot.yaml file with the snapshot manifest:

    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-test
    spec:
      volumeSnapshotClassName: yc-csi-snapclass
      source:
        persistentVolumeClaimName: pvc-dynamic
    
  2. Create a snapshot:

    kubectl apply -f 03-snapshot.yaml
    
  3. Make sure the snapshot was created:

    kubectl get volumesnapshots.snapshot.storage.k8s.io
    
  4. Make sure the VolumeSnapshotContent was created:

    kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
    
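    You can also confirm the snapshot is ready to serve as a data source by checking the readyToUse field in its status:

    ```shell
    # Prints "true" once the snapshot is ready to use
    kubectl get volumesnapshot new-snapshot-test -o jsonpath='{.status.readyToUse}'
    ```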

Restore objects from the snapshot

When you restore objects from the snapshot, the cluster will create:

  • A PersistentVolumeClaim named pvc-restore.
  • A pod named pod-restore with the snapshot's entries in /data/out.txt.

To restore from the snapshot:

  1. Create the 04-restore-snapshot.yaml file with the new PersistentVolumeClaim manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-restore
    spec:
      storageClassName: yc-network-hdd
      dataSource:
        name: new-snapshot-test
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    

    Tip

    You can resize the PersistentVolumeClaim being created. To do this, specify its new size in the spec.resources.requests.storage setting.

  2. Create the new PersistentVolumeClaim:

    kubectl apply -f 04-restore-snapshot.yaml
    
  3. Make sure the PersistentVolumeClaim was created and is Pending:

    kubectl get pvc pvc-restore
    
  4. Create the 05-pod-restore.yaml file with the new pod-restore pod manifest:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-restore
    spec:
      containers:
        - name: app-restore
          image: ubuntu
          command: ["/bin/sh"]
          args: ["-c", "while true; do sleep 5; done"]
          volumeMounts:
            - name: persistent-storage-r
              mountPath: /data
      volumes:
        - name: persistent-storage-r
          persistentVolumeClaim:
            claimName: pvc-restore
    

    The new pod's container will not write to /data/out.txt.

  5. Create a pod named pod-restore:

    kubectl apply -f 05-pod-restore.yaml
    
  6. Make sure the pod status changed to Running:

    kubectl get pod pod-restore
    
  7. Make sure the new PersistentVolumeClaim status changed to Bound:

    kubectl get pvc pvc-restore
    
  8. Make sure /data/out.txt on the new pod contains all the entries the pod-source container added before the snapshot was created:

    kubectl exec pod-restore -- tail /data/out.txt
    

    Result:

    Thu Feb 3 04:55:21 UTC 2022
    Thu Feb 3 04:55:26 UTC 2022
    ...
    
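    Because the pod-source container keeps appending new lines after the snapshot is taken, the restored file should match a prefix of the live file rather than the whole file. A rough way to verify this, assuming a local shell with diff available:

    ```shell
    # Copy both files locally and compare the restored file against the
    # first N lines of the still-growing source file
    kubectl exec pod-restore -- cat /data/out.txt > restored.txt
    kubectl exec pod-source  -- cat /data/out.txt > source.txt
    head -n "$(wc -l < restored.txt)" source.txt | diff - restored.txt && echo "restore matches"
    ```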

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for Kubernetes cluster:

    Manually
    Terraform

    Delete the Managed Service for Kubernetes cluster.

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

  2. If you reserved a public static IP address for the cluster, delete it.

  3. Delete the disk snapshot.

© 2025 Direct Cursus Technology L.L.C.