
Managing Kubernetes resources in a Managed Service for Kubernetes cluster via the Terraform provider

Written by
Yandex Cloud
Updated on November 21, 2025

In this article:
  • Required paid resources
  • Set up the infrastructure for Managed Service for Kubernetes
  • Set up the kubernetes provider
  • Create Kubernetes resources
  • Make sure the cluster application is available from the internet
  • Delete the resources you created
  • Example of setting up a persistent volume with Terraform

You can use Terraform manifests to create Kubernetes resources. To do this, set up the kubernetes Terraform provider. It provides Terraform resources that map to the YAML configuration files of the various Kubernetes resources.

Using Terraform to create Kubernetes resources is convenient if you are already managing your Yandex Managed Service for Kubernetes cluster infrastructure through Terraform. This allows you to describe all resources using the same markup language.

In addition, Terraform tracks dependencies between resources and will not create, modify, or delete a resource until its dependencies are in place. For example, suppose you are creating a PersistentVolumeClaim that requests a certain amount of storage from a PersistentVolume, but the required free space is not available. Terraform will detect the shortage and prevent the PersistentVolumeClaim from being created.
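
As a minimal sketch (the resource names here are hypothetical), referencing one resource's attributes from another is enough for Terraform to infer the dependency and order the operations:

resource "kubernetes_namespace" "demo" {
  metadata {
    name = "demo"
  }
}

# The config map references the namespace resource, so Terraform creates
# the namespace first and deletes it last.
resource "kubernetes_config_map" "demo_config" {
  metadata {
    name      = "demo-config"
    namespace = kubernetes_namespace.demo.metadata[0].name
  }
  data = {
    greeting = "hello"
  }
}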

The example below illustrates how to create standard Kubernetes resources using Terraform.

To create Kubernetes resources with Terraform:

  1. Set up your infrastructure.
  2. Set up the kubernetes provider.
  3. Create Kubernetes resources.
  4. Make sure the cluster application is available from the internet.

If you no longer need the resources you created, delete them.

Required paid resources

The cost of supporting this solution includes:

  • Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
  • Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
  • Fee for an NLB (see Network Load Balancer pricing).
  • Fee for public IP addresses for the VM and NLB (see Virtual Private Cloud pricing).

Set up the infrastructure for Managed Service for Kubernetes

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create the provider configuration file manually: you can download it.

    At this step, the file should not contain kubernetes provider settings. You will add them later.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-cluster.tf configuration file to the same working directory.

    This file describes:

    • Network.
    • Subnet.
    • Two security groups: one for the cluster and one for the node group.
    • Cloud service account with the k8s.clusters.agent, k8s.tunnelClusters.agent, vpc.publicAdmin, load-balancer.admin, and container-registry.images.puller roles.
    • Managed Service for Kubernetes cluster.
    • Kubernetes node group.
  6. Specify the variable values in the k8s-cluster.tf file.

  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    Terraform will show any errors found in your configuration files.

  8. Create the infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step: no changes are applied to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm the creation of the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  9. Install kubectl and configure it to work with the new cluster.

Set up the kubernetes provider

  1. In the working directory, open the file with yandex provider settings. It must have the following structure:

    terraform {
      required_providers {
        yandex = {
          source = "yandex-cloud/yandex"
        }
      }
      required_version = ">= 0.13"
    }
    
    provider "yandex" {
      token     = "<IAM_token>"
      cloud_id  = "<cloud_ID>"
      folder_id = "<folder_ID>"
      zone      = "<default_availability_zone>"
    }
    
  2. In the file, specify the parameters required for the kubernetes provider:

    1. Under required_providers, add:

      kubernetes = {
        source = "hashicorp/kubernetes"
      }
      
    2. In the terraform section, change required_version to ">= 0.14.8".

    3. Add a new section at the end of the file:

      data "yandex_client_config" "client" {}
      
      provider "kubernetes" {
        host                   = yandex_kubernetes_cluster.k8s-cluster.master[0].external_v4_endpoint
        cluster_ca_certificate = yandex_kubernetes_cluster.k8s-cluster.master[0].cluster_ca_certificate
        token                  = data.yandex_client_config.client.iam_token
      }
      
  3. Make sure the file looks like this after completing the above steps:

    terraform {
      required_providers {
        yandex = {
          source = "yandex-cloud/yandex"
        }
        kubernetes = {
          source = "hashicorp/kubernetes"
        }
      }
      required_version = ">= 0.14.8"
    }
    
    provider "yandex" {
      token     = "<IAM_token>"
      cloud_id  = "<cloud_ID>"
      folder_id = "<folder_ID>"
      zone      = "<default_availability_zone>"
    }
    
    data "yandex_client_config" "client" {}
    
    provider "kubernetes" {
      host                   = yandex_kubernetes_cluster.k8s-cluster.master[0].external_v4_endpoint
      cluster_ca_certificate = yandex_kubernetes_cluster.k8s-cluster.master[0].cluster_ca_certificate
      token                  = data.yandex_client_config.client.iam_token
    }
    
  4. Initialize the kubernetes provider:

    terraform init
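
If the Managed Service for Kubernetes cluster is managed in a different Terraform configuration, you can configure the kubernetes provider from a data source instead of referencing the yandex_kubernetes_cluster resource directly. A sketch, assuming the yandex_kubernetes_cluster data source and the yandex_client_config data source declared above (substitute your cluster name):

data "yandex_kubernetes_cluster" "k8s" {
  name = "<cluster_name>"
}

provider "kubernetes" {
  host                   = data.yandex_kubernetes_cluster.k8s.master[0].external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.k8s.master[0].cluster_ca_certificate
  token                  = data.yandex_client_config.client.iam_token
}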
    

Create Kubernetes resources

Create a test application and a LoadBalancer service:

  1. In the working directory, create a file named deployment.tf describing the Deployment resource:

    resource "kubernetes_deployment" "demo-app-deployment" {
      metadata {
        name = "hello"
        labels = {
          app = "hello"
          version = "v1"
        }
      }
      spec {
        replicas = 2
        selector {
          match_labels = {
            app = "hello"
          }
        }
        template {
          metadata {
            labels = {
              app = "hello"
              version = "v1"
            }
          }
          spec {
            container {
              name  = "hello-app"
              image = "cr.yandex/crpjd37scfv653nl11i9/hello:1.1"
            }
          }
        }
      }
    }
    
  2. In the working directory, create a file named service.tf describing the Service resource:

    resource "kubernetes_service" "demo-lb-service" {
      metadata {
        name = "hello"
      }
      spec {
        selector = {
          app = kubernetes_deployment.demo-app-deployment.spec[0].template[0].metadata[0].labels.app
        }
        type = "LoadBalancer"
        port {
          port = 80
          target_port = 8080
        }
      }
    }
    
  3. Create Kubernetes resources:

    1. View the planned changes:

      terraform plan
      
    2. If the changes are acceptable, apply them:

      terraform apply
      

    After you run terraform apply, you may get this error:

    Error: Waiting for rollout to finish: 2 replicas wanted; 0 replicas Ready
    │ 
    │   with kubernetes_deployment.demo-app-deployment,
    │   on deployment.tf line 1, in resource "kubernetes_deployment" "demo-app-deployment":
    │   1: resource "kubernetes_deployment" "demo-app-deployment" {
    │ 
    

    It means the Deployment resources are not ready yet. Check their readiness using the kubectl get deployment command, which will return this result:

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    hello        0/2     2            0           12m
    

    When the READY column shows 2/2, run the terraform apply command again.
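
    Alternatively, if you do not want terraform apply to wait for replica readiness at all, the kubernetes_deployment resource accepts the optional wait_for_rollout argument (true by default). A sketch of the fragment to merge into deployment.tf:

    resource "kubernetes_deployment" "demo-app-deployment" {
      # With wait_for_rollout = false, terraform apply returns as soon as the
      # Deployment object is created, without waiting for the replicas to
      # become Ready.
      wait_for_rollout = false

      # Keep the metadata and spec blocks from deployment.tf unchanged.
    }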

You can also create other standard Kubernetes resources using Terraform manifests. Use the YAML configuration of the resource you need as a base (see this example for a pod). Take the structure and parameters from the configuration and apply the Terraform markup. For example, replace the containerPort parameter from the YAML file with the container_port parameter in Terraform. For a full list of Terraform resources for Kubernetes, see this Kubernetes provider article.
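
For illustration, here is a minimal kubernetes_pod sketch (the names are hypothetical) showing how the YAML fields map to Terraform arguments:

resource "kubernetes_pod" "nginx_example" {
  metadata {
    name = "nginx-example"  # metadata.name in the YAML file
  }
  spec {
    container {
      name  = "nginx"
      image = "nginx:1.25"
      port {
        container_port = 80  # containerPort in the YAML file
      }
    }
  }
}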

For information about creating custom resources using Terraform, see this Terraform tutorial.
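
As a hedged sketch, the provider also offers the kubernetes_manifest resource, which applies an arbitrary manifest written as an HCL object, including custom resources. The CRD kind and fields below are hypothetical, the CRD must already exist in the cluster, and kubernetes_manifest needs access to the cluster API during terraform plan:

resource "kubernetes_manifest" "example_custom_resource" {
  manifest = {
    apiVersion = "example.com/v1"
    kind       = "MyCustomResource"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    spec = {
      size = 1
    }
  }
}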

Make sure the cluster application is available from the internet

  1. View information about the created load balancer:

    kubectl describe service hello
    

    Result:

     Name:                     hello
     Namespace:                default
     Labels:                   <none>
     Annotations:              <none>
     Selector:                 app=hello
     Type:                     LoadBalancer
     IP Family Policy:         SingleStack
     IP Families:              IPv4
     IP:                       10.96.228.81
     IPs:                      10.96.228.81
     LoadBalancer Ingress:     84.201.148.8
     Port:                     <unset>  80/TCP
     TargetPort:               8080/TCP
     NodePort:                 <unset>  32532/TCP
     Endpoints:                10.112.128.7:8080,10.112.128.8:8080
     Session Affinity:         None
     External Traffic Policy:  Cluster
     Internal Traffic Policy:  Cluster
     Events:
       Type    Reason                Age    From                Message
       ----    ------                ----   ----                -------
       Normal  EnsuringLoadBalancer  5m32s  service-controller  Ensuring load balancer
       Normal  EnsuredLoadBalancer   5m25s  service-controller  Ensured load balancer
    
  2. Copy the IP address from the LoadBalancer Ingress field.

  3. Open the app URL in your browser:

    http://<copied_IP_address>
    

    Result:

    Hello, world!
    Running in 'hello-5c46b*****-nc**'
    

    Note

    If the resource is unavailable at the specified URL, make sure that the security groups for the Managed Service for Kubernetes cluster and its node groups are configured correctly. If any rule is missing, add it.
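
You can also read the address from Terraform state instead of kubectl. A sketch of an output block (the attribute path follows the kubernetes provider's documentation for the kubernetes_service resource):

output "lb_ip" {
  value = kubernetes_service.demo-lb-service.status[0].load_balancer[0].ingress[0].ip
}

After adding it, run terraform apply again, then terraform output lb_ip.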

Delete the resources you created

  1. In the terminal, navigate to the directory containing the infrastructure plan.

  2. Run this command:

    terraform destroy
    

    Terraform will delete all resources you created in the current configuration.

Example of setting up a persistent volume with Terraform

To provision a persistent volume for the Managed Service for Kubernetes cluster, use the following configuration file:

pv-pvc.tf
resource "yandex_compute_disk" "pv_disk" {
  name = "pv-disk"
  zone = "ru-central1-a"
  size = 10
  type = "network-ssd"
}

resource "kubernetes_storage_class" "pv_sc" {
  metadata {
    name = "pv-sc"
  }
  storage_provisioner = "disk-csi-driver.mks.ycloud.io"

  parameters = {
    "csi.storage.k8s.io/fstype" = "ext4"
  }

  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
}

resource "kubernetes_persistent_volume" "my_pv" {
  metadata {
    name = "my-pv"
  }
  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "pv-sc"
    persistent_volume_source {
      csi {
        driver        = "disk-csi-driver.mks.ycloud.io"
        volume_handle = yandex_compute_disk.pv_disk.id
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "my_pvc" {
  metadata {
    name = "my-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
    storage_class_name = "pv-sc"
    volume_name        = "my-pv"
  }
}

The pv-pvc.tf file describes:

  • Compute Cloud disk used as a storage for PersistentVolume:

    • Name: pv-disk.
    • Availability zone: ru-central1-a.
    • Disk size: 10 GB.
    • Disk type: network-ssd.
  • Custom StorageClass:

    • Name: pv-sc.
    • Storage provider: disk-csi-driver.mks.ycloud.io.
    • File system type: ext4.
    • Reclaim policy: Retain. The PersistentVolume object will not be deleted after the deletion of its associated PersistentVolumeClaim object.
    • Volume binding mode: WaitForFirstConsumer. PersistentVolume and PersistentVolumeClaim will only be bound when the pod requests the volume.

    Learn more about storage class parameters here.

  • PersistentVolume object:

    • Name: my-pv.
    • Size: 10 GB.
    • Access mode: ReadWriteOnce. Only pods located on the same node can read and write data to this PersistentVolume object. Pods on other nodes will not be able to access this object.
    • Storage class: pv-sc. If not specified, the default storage class will be used.
    • Data source: pv-disk.

    Learn more about PersistentVolume parameters here.

  • PersistentVolumeClaim object:

    • Name: my-pvc.
    • Access mode: ReadWriteOnce. Only pods located on the same node can read and write data to this PersistentVolume object. Pods on other nodes will not be able to access this object.
    • Requested storage size: 5 GB.
    • Storage class: pv-sc. If not specified, the default storage class will be used.
    • Volume name: PersistentVolume object to bind with PersistentVolumeClaim.

    Learn more about PersistentVolumeClaim parameters here.
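
To use the claim, mount it in a pod. A hypothetical usage sketch (the pod and container names are illustrative); because the storage class uses WaitForFirstConsumer, the volume is bound only once this pod is scheduled:

resource "kubernetes_pod" "pv_consumer" {
  metadata {
    name = "pv-consumer"
  }
  spec {
    container {
      name  = "app"
      image = "nginx:1.25"
      volume_mount {
        name       = "storage"
        mount_path = "/data"
      }
    }
    volume {
      name = "storage"
      persistent_volume_claim {
        claim_name = kubernetes_persistent_volume_claim.my_pvc.metadata[0].name
      }
    }
  }
}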

See also

  • Terraform tutorial for creating Kubernetes resources
  • Provider documentation
