Working with snapshots in Yandex Managed Service for Kubernetes

Written by
Yandex Cloud
Updated on May 5, 2025
  • Required paid resources
  • Getting started
  • Prepare a test environment
  • Create a snapshot
  • Restore objects from the snapshot
  • Delete the resources you created

Managed Service for Kubernetes supports volume snapshots: a snapshot is a point-in-time copy of a PersistentVolume. For more information about snapshots, see the Kubernetes documentation.
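
This tutorial uses the yc-csi-snapclass VolumeSnapshotClass, which it assumes is already present in the cluster. Once the cluster from the Getting started section is up, you can list the available snapshot classes (an optional check):

    kubectl get volumesnapshotclasses.snapshot.storage.k8s.io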

To create a snapshot and then restore it:

  1. Prepare a test environment.
  2. Create a snapshot.
  3. Restore objects from the snapshot.

If you no longer need the resources you created, delete them.

Required paid resources

The infrastructure support cost includes:

  • Fee for the Managed Service for Kubernetes cluster: use of the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Fee for cluster nodes (VMs): use of computing resources, the operating system, and storage (see Compute Cloud pricing).
  • Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. Create Kubernetes resources:

    Manually
    1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    2. Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.
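
      The same can be done from the command line; a minimal YC CLI sketch (the names, zone, and service accounts are illustrative placeholders, and flags for subnets and security groups are omitted because they depend on your setup):

      yc managed-kubernetes cluster create \
        --name k8s-demo \
        --network-name default \
        --zone ru-central1-a \
        --public-ip \
        --service-account-name k8s-sa \
        --node-service-account-name k8s-sa

      yc managed-kubernetes node-group create \
        --name k8s-demo-ng \
        --cluster-name k8s-demo \
        --fixed-size 1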

    Terraform

    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize a provider. You do not need to create a provider configuration file manually: you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
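
      A minimal provider configuration looks like this (a sketch; the default zone is illustrative):

      terraform {
        required_providers {
          yandex = {
            source = "yandex-cloud/yandex"
          }
        }
        required_version = ">= 0.13"
      }

      provider "yandex" {
        zone = "ru-central1-a"
      }

      After saving the file, initialize the working directory by running terraform init in it.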

    5. Download the k8s-cluster.tf cluster configuration file to the same working directory. This file describes:

      • Network.

      • Subnet.

      • Managed Service for Kubernetes cluster.

      • Service account required to create the Managed Service for Kubernetes cluster and node group.

      • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    6. Specify the folder ID in the configuration file:
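
      A hedged example: the exact place depends on the downloaded file, but the folder ID is typically set either through a variable or directly in the provider block, for example:

      provider "yandex" {
        folder_id = "<folder_ID>"
      }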

    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      If there are any errors in the configuration files, Terraform will point them out.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to create or update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm creating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
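
      You can run the same check from the terminal with the YC CLI (an optional step):

      yc managed-kubernetes cluster list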

  2. Install kubectl and configure it to work with the new cluster.
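
    For example, the kubectl configuration can be fetched with the YC CLI; a sketch where the cluster name is a placeholder and --external requests the cluster's public endpoint:

    yc managed-kubernetes cluster get-credentials <cluster_name> --external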

Prepare a test environment

To test snapshots, you will create a PersistentVolumeClaim and a pod that simulates a workload.

  1. Create the 01-pvc.yaml file with the PersistentVolumeClaim manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-dynamic
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: yc-network-hdd
      resources:
        requests:
          storage: 5Gi
    
  2. Create a PersistentVolumeClaim:

    kubectl apply -f 01-pvc.yaml
    
  3. Make sure the PersistentVolumeClaim has been created and its status is Pending:

    kubectl get pvc pvc-dynamic
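
    The claim is expected to stay Pending until a pod uses it: this is the behavior of storage classes with the WaitForFirstConsumer volume binding mode, which the yc-network-hdd class uses. The output looks similar to this (illustrative):

    NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    pvc-dynamic   Pending                                      yc-network-hdd   10s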
    
  4. Create the 02-pod.yaml file with the pod-source pod manifest:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-source
    spec:
      containers:
        - name: app
          image: ubuntu
          command: ["/bin/sh"]
          args:
            ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: pvc-dynamic
    

    The pod container will write the current date and time to the /data/out.txt file.

  5. Create a pod named pod-source:

    kubectl apply -f 02-pod.yaml
    
  6. Make sure the pod has entered the Running state:

    kubectl get pod pod-source
    
  7. Make sure the date and time are written to /data/out.txt. To check, run the following command in the pod:

    kubectl exec pod-source -- tail /data/out.txt
    

    Result:

    Thu Feb 3 04:55:21 UTC 2022
    Thu Feb 3 04:55:26 UTC 2022
    ...
    

Create a snapshot

  1. Create the 03-snapshot.yaml file with the snapshot manifest:

    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-test
    spec:
      volumeSnapshotClassName: yc-csi-snapclass
      source:
        persistentVolumeClaimName: pvc-dynamic
    
  2. Create a snapshot:

    kubectl apply -f 03-snapshot.yaml
    
  3. Check that the snapshot has been created:

    kubectl get volumesnapshots.snapshot.storage.k8s.io
    
  4. Make sure the VolumeSnapshotContent has been created:

    kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
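
    Snapshot creation is asynchronous. Before restoring, you can wait until the snapshot is ready to use; a sketch assuming kubectl 1.23 or later (for the jsonpath form of kubectl wait):

    kubectl wait volumesnapshot/new-snapshot-test \
      --for=jsonpath='{.status.readyToUse}'=true \
      --timeout=120s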
    

Restore objects from the snapshot

When restoring objects from the snapshot, the following items are created in the cluster:

  • A PersistentVolumeClaim named pvc-restore.
  • A pod named pod-restore whose /data/out.txt file contains the entries written by pod-source before the snapshot was taken.

To restore the snapshot:

  1. Create the 04-restore-snapshot.yaml file with the new PersistentVolumeClaim manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-restore
    spec:
      storageClassName: yc-network-hdd
      dataSource:
        name: new-snapshot-test
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    

    Tip

    You can resize the new PersistentVolumeClaim. To do this, specify the new size in the spec.resources.requests.storage field. In this example, the claim is restored with 10Gi instead of the original 5Gi.

  2. Create a new PersistentVolumeClaim:

    kubectl apply -f 04-restore-snapshot.yaml
    
  3. Make sure the PersistentVolumeClaim has been created and its status is Pending:

    kubectl get pvc pvc-restore
    
  4. Create the 05-pod-restore.yaml file with a manifest for the new pod, i.e., pod-restore:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-restore
    spec:
      containers:
        - name: app-restore
          image: ubuntu
          command: ["/bin/sh"]
          args: ["-c", "while true; do sleep 5; done"]
          volumeMounts:
            - name: persistent-storage-r
              mountPath: /data
      volumes:
        - name: persistent-storage-r
          persistentVolumeClaim:
            claimName: pvc-restore
    

    The new pod's container will not write anything to /data/out.txt.

  5. Create a pod named pod-restore:

    kubectl apply -f 05-pod-restore.yaml
    
  6. Make sure the pod has entered the Running state:

    kubectl get pod pod-restore
    
  7. Make sure the new PersistentVolumeClaim has entered the Bound state:

    kubectl get pvc pvc-restore
    
  8. Make sure the /data/out.txt file on the new pod contains the records that the pod-source pod container added to the file before the snapshot was created:

    kubectl exec pod-restore -- tail /data/out.txt
    

    Result:

    Thu Feb 3 04:55:21 UTC 2022
    Thu Feb 3 04:55:26 UTC 2022
    ...
    

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for Kubernetes cluster:

    Manually

    Delete the Managed Service for Kubernetes cluster.

    Terraform

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

  2. Delete the cluster's public static IP address if you reserved one.

  3. Delete the disk snapshot.
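
    The disk snapshots backing VolumeSnapshotContent objects can be found and deleted with the YC CLI; a sketch where the snapshot name is a placeholder:

    yc compute snapshot list
    yc compute snapshot delete <snapshot_name_or_ID>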
