Creating a Managed Service for Kubernetes cluster

Written by
Yandex Cloud
Improved by
Dmitry A.
Updated at November 26, 2025
  • Getting started
  • Create a Managed Service for Kubernetes cluster
  • Examples
    • Creating a Managed Service for Kubernetes cluster with a basic master
    • Creating a Managed Service for Kubernetes cluster with a highly available master in three availability zones
    • Creating a Managed Service for Kubernetes cluster with a highly available master in a single availability zone
  • See also

Create a Managed Service for Kubernetes cluster and then create a node group.

To create a cluster with no internet access, see Creating and configuring a Kubernetes cluster with no internet access.

Getting started

Management console
  1. Go to the management console and log in to Yandex Cloud, or sign up if you do not have an account yet.

  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and its status is ACTIVE or TRIAL_ACTIVE. If you do not have a billing account yet, create one.

  3. If you do not have a folder yet, create one.

  4. Make sure the account you are using to create a Managed Service for Kubernetes cluster has all the relevant roles.

  5. Make sure you have enough resources available in the cloud.

  6. If you do not have a network yet, create one.

  7. If you do not have any subnets yet, create them in the availability zones where the new Managed Service for Kubernetes cluster and node group will reside.

  8. Create these service accounts:

    • Service account with the k8s.clusters.agent and vpc.publicAdmin roles for the folder where you want to create a Managed Service for Kubernetes cluster. This service account will be used to create resources for your Managed Service for Kubernetes cluster.
    • Service account with the container-registry.images.puller role for the folder containing the Docker image registry. Nodes will use this account to pull the required Docker images from the registry.

    You can use the same service account for both operations. For a CLI sketch of creating the service account and assigning these roles, see the example after this list.

    Note

    To create a cluster with tunnel mode, the cluster service account requires the k8s.tunnelClusters.agent role.

  9. Create and configure the security groups.

  10. Check the recommendations on using Managed Service for Kubernetes.
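
If you prefer to complete step 8 from the command line, below is a minimal sketch of creating a single service account and assigning it the required roles with the Yandex Cloud CLI. The account name k8s-sa is a placeholder, and the role list assumes you use one account for both the cluster and the nodes.

# Create a service account; k8s-sa is a placeholder name.
yc iam service-account create --name k8s-sa

# Look up its ID for the role bindings below.
yc iam service-account get --name k8s-sa --format json

# Assign the roles in the folder where the cluster will be created.
yc resource-manager folder add-access-binding <folder_name_or_ID> \
   --role k8s.clusters.agent \
   --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_name_or_ID> \
   --role vpc.publicAdmin \
   --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_name_or_ID> \
   --role container-registry.images.puller \
   --subject serviceAccount:<service_account_ID>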

Create a Managed Service for Kubernetes cluster

Note

Selecting and updating the master configuration is currently at the Preview stage.

Warning

Starting with Kubernetes 1.30 in the RAPID release channel, the base node image changes from Ubuntu 20.04 to Ubuntu 22.04. In existing clusters and node groups, the OS will be upgraded using the method you select. This upgrade will later become available in the REGULAR and STABLE release channels.

For OS upgrade details and recommendations, see Updating node group OS.

Management console
CLI
Terraform
API
  1. In the management console, select the folder where you want to create a Managed Service for Kubernetes cluster.

  2. Select Managed Service for Kubernetes.

  3. Click Create cluster.

  4. Enter a name and description for your Managed Service for Kubernetes cluster. The Managed Service for Kubernetes cluster name must be unique within Yandex Cloud.

  5. Specify a Service account for resources. This account will be used to create the resources for your Managed Service for Kubernetes cluster.

  6. Specify a Service account for nodes. The Managed Service for Kubernetes nodes will use this account to access the Docker image registry in Yandex Container Registry.

  7. Optionally, specify the Encryption key for secret encryption.

    You will not be able to edit this setting once you create a cluster.

  8. Specify a release channel.

    You will not be able to edit this setting once you create a cluster.

  9. Add cloud labels in the Labels field.

  10. Under Master configuration:

    • Optionally, expand the Compute resources section and select a resource configuration for the master.

      The selected configuration allocates minimum resources to the master. Depending on the load, the amount of RAM and number of vCPUs will increase automatically.

      By default, one master host is provided with the following resources:

      • Platform: Intel Cascade Lake
      • Guaranteed vCPU share: 100%
      • vCPU: 2
      • RAM: 8 GB
    • In the Kubernetes version field, select the Kubernetes version to be installed on the Managed Service for Kubernetes master.

    • In the Public address field, select an IP address assignment method:

      • Auto: Assign a random IP address from the Yandex Cloud IP pool.
      • No address: Do not assign a public IP address.

      Warning

      Do not place a cluster with a public IP address in subnets with internet access via a NAT instance. With this configuration in place, your request to the cluster’s public IP address will get a response from the NAT instance’s IP address, and the client will reject it. For more information, see Route priority in complex scenarios.

      You will not be able to edit this setting once you create a cluster.

    • In the Type of master field, select the Managed Service for Kubernetes master type:

      • Basic: Contains one master host in one availability zone. This type of master is cheaper, but it is not fault-tolerant. Its former name is zonal.

        Warning

        A basic master is billed as a zonal one and is displayed in Yandex Cloud Billing as Managed Kubernetes. Zonal Master - small.

      • Highly available: Contains three master hosts. Its former name is regional.

        Warning

        A highly available master is billed as a regional one and is displayed in Yandex Cloud Billing as Managed Kubernetes. Regional Master - small.

    • In the Cloud network field, select the network to create a Managed Service for Kubernetes master in. If there are no networks available, create one.

      Note

      If you select a cloud network from another folder, assign the resource service account the following roles in that folder:

      • vpc.privateAdmin
      • vpc.user
      • vpc.bridgeAdmin

      To use a public IP address, also assign the vpc.publicAdmin role.

    • For a highly available master, select master host placement in the Distribution of masters by availability zone field:

      • One zone: In one availability zone and one subnet. This is a good choice of master if you want to ensure high availability of the cluster and reduce network latency within it.
      • Different zones: In three different availability zones. This master ensures the greatest fault tolerance: if one zone becomes unavailable, the master will remain operational.
    • Depending on the type of master you select:

      • For a basic or highly available master in a single zone, specify the availability zone and subnet.
      • For a highly available master in different zones, specify subnets in each zone.

      If there are no subnets, create them.

      Warning

      You cannot change the master type and location after you create a cluster.

    • Select security groups for the Managed Service for Kubernetes cluster's network traffic.

      Warning

      The security group configuration affects the performance and availability of the cluster and the services running in it.

  11. Under Maintenance window settings:

    • In the Maintenance frequency / Disable field, configure the maintenance window:
      • Disable: Automatic updates disabled.
      • Anytime: Updates allowed at any time.
      • Daily: Updates will take place within the time interval specified in the Time (UTC) and duration field.
      • Custom: Updates will take place within the time interval specified in the Weekly schedule field.
  12. Under Cluster network settings:

    • (Optional) Select the network policy controller:

      You will not be able to edit this setting once you create a cluster.

      Warning

      You cannot enable the Calico network policy controller and the Cilium tunnel mode at the same time.

      • Enable network policy to use Calico.
      • Enable tunnel mode to use Cilium.
    • Specify the cluster CIDR, the range of IP addresses to allocate pod IPs from.

    • Specify the service CIDR, the range of IP addresses to allocate service IPs from.

    • Set the subnet mask for the Managed Service for Kubernetes nodes and the maximum number of pods per node.

  13. Click Create.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To create a Managed Service for Kubernetes cluster:

  1. Specify the Managed Service for Kubernetes cluster parameters in the create command (not all parameters are given in the example):

    yc managed-kubernetes cluster create \
      --name test-k8s \
      --network-name default \
      --public-ip \
      --release-channel regular \
      --version 1.27 \
      --cluster-ipv4-range 10.1.0.0/16 \
      --service-ipv4-range 10.2.0.0/16 \
      --security-group-ids enpe5sdn7vs5********,enpj6c5ifh75******** \
      --service-account-name default-sa \
      --node-service-account-name default-sa \
      --master-location zone=ru-central1-a,subnet-name=mysubnet \
      --master-scale-policy policy=auto,min-resource-preset-id=<master_host_class> \
      --daily-maintenance-window start=22:00,duration=10h \
      --labels <cloud_label_name=cloud_label_value>
    

    Where:

    • --name: Managed Service for Kubernetes cluster name.

    • --network-name: Network name.

      Note

      If you select a cloud network from another folder, assign the resource service account the following roles in that folder:

      • vpc.privateAdmin
      • vpc.user
      • vpc.bridgeAdmin

      To use a public IP address, also assign the vpc.publicAdmin role.

    • --public-ip: Flag indicating that the Managed Service for Kubernetes cluster needs a public IP address.

      Warning

      Do not place a cluster with a public IP address in subnets with internet access via a NAT instance. With this configuration in place, your request to the cluster’s public IP address will get a response from the NAT instance’s IP address, and the client will reject it. For more information, see Route priority in complex scenarios.

      You will not be able to edit this setting once you create a cluster.

    • --release-channel: Release channel.

      You will not be able to edit this setting once you create a cluster.

    • --version: Kubernetes version. Specify a version available for the selected release channel.

    • --cluster-ipv4-range: Range of IP addresses for allocating pod addresses.

    • --service-ipv4-range: Range of IP addresses for allocating service addresses.

    • --security-group-ids: List of Managed Service for Kubernetes cluster security group IDs.

      Warning

      The security group configuration affects the performance and availability of the cluster and the services running in it.

    • --service-account-name: Name of the service account for the resources. This service account will be used to create resources for your Managed Service for Kubernetes cluster.

    • --node-service-account-name: Name of the service account for the nodes. Nodes will use this account to pull the required Docker images from the registry.

    • --master-location: Master configuration. Specify the availability zone and subnet where the master will reside.

      The number of --master-location parameters depends on the type of master:

      • For the basic master, provide one --master-location parameter.
      • For a highly available master hosted across three availability zones, provide three --master-location parameters. In each one, specify different availability zones and subnets.
      • For a highly available master hosted in a single availability zone, provide three --master-location parameters. In each one, specify the same availability zone and subnet.
    • --master-scale-policy: Master's computing resource configuration.

      The selected configuration allocates minimum resources to the master. Depending on the load, the amount of RAM and number of vCPUs will increase automatically.

      Note

      If you do not provide the --master-scale-policy parameter, the minimum available master configuration will be applied.

      By default, one master host is provided with the following resources:

      • Platform: Intel Cascade Lake
      • Guaranteed vCPU share: 100%
      • vCPU: 2
      • RAM: 8 GB
    • --daily-maintenance-window: Maintenance window settings.

    • --labels: Cloud labels for the cluster.

    Result:

    done (5m47s)
    id: cathn0s6qobf********
    folder_id: b1g66jflru0e********
    ...
      service_account_id: aje3932acd0c********
      node_service_account_id: aje3932acd0c********
      release_channel: REGULAR
    
  2. Configure the container network interface for your cluster:

    You will not be able to edit this setting once you create a cluster.

    Warning

    You cannot enable the Calico network policy controller and the Cilium tunnel mode at the same time.

    • To enable the Calico network policy controller, set the --enable-network-policy flag in the Managed Service for Kubernetes cluster create command:

      yc managed-kubernetes cluster create \
      ...
        --enable-network-policy
      
    • To enable tunnel mode for Cilium, provide the --cilium flag in the Managed Service for Kubernetes cluster create command:

      yc managed-kubernetes cluster create \
      ...
        --cilium
      
  3. To use the Yandex Key Management Service encryption key for protecting sensitive data, provide the key name or ID in the Managed Service for Kubernetes cluster creation command:

    yc managed-kubernetes cluster create \
    ...
      --kms-key-name <encryption_key_name> \
      --kms-key-id <encryption_key_ID>
    

    You will not be able to edit this setting once you create a cluster.

  4. To enable sending logs to Yandex Cloud Logging, provide the logging settings in the --master-logging parameter of the Managed Service for Kubernetes cluster create command:

    yc managed-kubernetes cluster create \
    ...
      --master-logging enabled=<send_logs>,`
        `log-group-id=<log_group_ID>,`
        `folder-id=<folder_ID>,`
        `kube-apiserver-enabled=<send_kube-apiserver_logs>,`
        `cluster-autoscaler-enabled=<send_cluster-autoscaler_logs>,`
        `events-enabled=<send_Kubernetes_events>,`
        `audit-enabled=<send_audit_events>
    

    Where:

    • enabled: Flag that enables log sending, true or false.
    • log-group-id: ID of the log group to send the logs to.
    • folder-id: ID of the folder to send the logs to. The logs will be sent to that folder's default log group.
    • kube-apiserver-enabled: Flag that enables kube-apiserver log sending, true or false.
    • cluster-autoscaler-enabled: Flag that enables cluster-autoscaler log sending, true or false.
    • events-enabled: Flag that enables Kubernetes event sending, true or false.
    • audit-enabled: Flag that enables audit event sending, true or false.

    If log sending is enabled but neither log-group-id nor folder-id is specified, the logs will be sent to the default log group of the folder with the Managed Service for Kubernetes cluster. You cannot set both log-group-id and folder-id at the same time.
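
After the create command completes, you can optionally verify the new cluster and set up kubectl access from the CLI. Here is a minimal sketch; it assumes the cluster was created with a public IP address and named test-k8s as in the example above, and that kubectl is installed.

# Check that the cluster status is RUNNING.
yc managed-kubernetes cluster list

# Add the cluster credentials to your kubeconfig; --external connects via the public IP address.
yc managed-kubernetes cluster get-credentials test-k8s --external

# Verify access to the Kubernetes API.
kubectl cluster-info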

With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the relevant documentation on the Terraform website or its mirror.

If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

To create a Managed Service for Kubernetes cluster:

  1. In the configuration file, describe the properties of resources you want to create:

    • Managed Service for Kubernetes cluster: Cluster description.

    • Network: Description of the cloud network to host the Managed Service for Kubernetes cluster. If you already have a suitable network, you do not need to describe it again.

      Note

      If you select a cloud network from another folder, assign the resource service account the following roles in that folder:

      • vpc.privateAdmin
      • vpc.user
      • vpc.bridgeAdmin

      To use a public IP address, also assign the vpc.publicAdmin role.

    • Subnets: Description of the subnets to connect the Managed Service for Kubernetes cluster hosts to. If you already have suitable subnets, you do not need to describe them again.

    • Service account for the Managed Service for Kubernetes cluster and nodes, plus role settings for this account. If required, create separate service accounts for the cluster and the nodes. If you already have a suitable service account, you do not need to describe it again.

    Here is an example of the configuration file structure:

    resource "yandex_kubernetes_cluster" "<Managed_Service_for_Kubernetes_cluster_name>" {
     network_id = yandex_vpc_network.<network_name>.id
     master {
       master_location {
         zone      = yandex_vpc_subnet.<subnet_name>.zone
         subnet_id = yandex_vpc_subnet.<subnet_name>.id
       }
     }
     service_account_id      = yandex_iam_service_account.<service_account_name>.id
     node_service_account_id = yandex_iam_service_account.<service_account_name>.id
       depends_on = [
         yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
         yandex_resourcemanager_folder_iam_member.vpc-public-admin,
         yandex_resourcemanager_folder_iam_member.images-puller
       ]
    }
     labels {
       "<cloud_label_name>"="<cloud_label_value>"
     }
    resource "yandex_vpc_network" "<network_name>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
     v4_cidr_blocks = ["<subnet_IP_address_range>"]
     zone           = "<availability_zone>"
     network_id     = yandex_vpc_network.<network_name>.id
    }
    
    resource "yandex_iam_service_account" "<service_account_name>" {
     name        = "<service_account_name>"
     description = "<service_account_description>"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
     # The service account gets the "k8s.clusters.agent" role.
     folder_id = "<folder_ID>"
     role      = "k8s.clusters.agent"
     member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
     # The service account gets the "vpc.publicAdmin" role.
     folder_id = "<folder_ID>"
     role      = "vpc.publicAdmin"
     member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
    }
    
    resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
     # The service account gets the "container-registry.images.puller" role.
     folder_id = "<folder_ID>"
     role      = "container-registry.images.puller"
     member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
    }
    

    Note

    Cloud labels for a Kubernetes cluster are composed according to certain rules.

    To configure the master's computing resources, add the following section to the Managed Service for Kubernetes cluster description:

    resource "yandex_kubernetes_cluster" "<cluster_name>" {
     ...
     master {
       ...
       scale_policy {
         auto_scale  {
           min_resource_preset_id = "<master_host_class>"
         }
       }
     }
    }
    

    The selected configuration allocates minimum resources to the master. Depending on the load, the amount of RAM and number of vCPUs will increase automatically.

    Note

    If you do not provide the scale_policy parameter, the minimum available master configuration will be applied.

    By default, one master host is provided with the following resources:

    • Platform: Intel Cascade Lake
    • Guaranteed vCPU share: 100%
    • vCPU: 2
    • RAM: 8 GB

    To enable the Cilium tunnel mode, add the following section to the Managed Service for Kubernetes cluster description:

    network_implementation {
     cilium {}
    }
    

    To enable the Calico network policy controller, add the following line to the Managed Service for Kubernetes cluster description:

    network_policy_provider = "CALICO"
    

    Warning

    You cannot enable the Calico network policy controller and the Cilium tunnel mode at the same time. Also, you cannot enable them after creating a cluster.

    To enable sending logs to Yandex Cloud Logging, add the master_logging section to the Managed Service for Kubernetes cluster description:

    resource "yandex_kubernetes_cluster" "<cluster_name>" {
     ...
     master {
       ...
       master_logging {
         enabled                    = <log_sending>
         log_group_id               = "<log_group_ID>"
         folder_id                  = "<folder_ID>"
         kube_apiserver_enabled     = <kube-apiserver_log_sending>
         cluster_autoscaler_enabled = <cluster-autoscaler_log_sending>
         events_enabled             = <Kubernetes_event_sending>
         audit_enabled              = <audit_event_sending>
       }
     }
    }
    

    Where:

    • enabled: Flag that enables log sending, true or false.
    • log_group_id: ID of the log group to send the logs to.
    • folder_id: ID of the folder to send the logs to. The logs will be sent to that folder's default log group.
    • kube_apiserver_enabled: Flag that enables kube-apiserver log sending, true or false.
    • cluster_autoscaler_enabled: Flag that enables cluster-autoscaler log sending, true or false.
    • events_enabled: Flag that enables Kubernetes event sending, true or false.
    • audit_enabled: Flag that enables audit event sending, true or false.

    If log sending is enabled but neither log_group_id nor folder_id is specified, the logs will be sent to the default log group of the folder with the Managed Service for Kubernetes cluster. You cannot set both log_group_id and folder_id at the same time.

    For more information, see this Terraform provider guide.

  2. Make sure the configuration files are correct.

    1. In the command line, go to the folder where you created the configuration file.

    2. Run a check using this command:

      terraform plan
      

    If the configuration is described correctly, the terminal will display a list of the resources to be created and their parameters. If the configuration contains errors, Terraform will point them out. This is a dry run; no resources are created.

  3. Create a Managed Service for Kubernetes cluster.

    1. If the configuration does not contain any errors, run this command:

      terraform apply
      
    2. Confirm that you want to create the resources.

    After this, all required resources will be created in the specified folder and the IP addresses of the VMs will be displayed in the terminal. You can check the new resources and their configuration using the management console.

    Timeouts

    The Terraform provider limits the time for creating and updating a Managed Service for Kubernetes cluster to 30 minutes.

    Operations in excess of this time will be interrupted.

    How do I modify these limits?

    Add a timeouts block to the cluster description, e.g.:

    resource "yandex_kubernetes_cluster" "<cluster_name>" {
      ...
      timeouts {
        create = "60m"
        update = "60m"
      }
    }
    

To create a Managed Service for Kubernetes cluster, use the create method for the Cluster resource.

The request body depends on the master type:

  • For the basic master, provide one masterSpec.locations parameter in the request.
  • For a highly available master hosted across three availability zones, provide three masterSpec.locations parameters in the request. In each one, specify different availability zones and subnets.
  • For a highly available master hosted in a single availability zone, provide three masterSpec.locations parameters in the request. In each one, specify the same availability zone and subnet.

Note

If you select a cloud network from another folder, assign the resource service account the following roles in that folder:

  • vpc.privateAdmin
  • vpc.user
  • vpc.bridgeAdmin

To use a public IP address, also assign the vpc.publicAdmin role.

When providing the masterSpec.locations parameter, you do not need to specify masterSpec.zonalMasterSpec or masterSpec.regionalMasterSpec.

To specify the master's computing resource configuration, provide its value in masterSpec.scalePolicy.autoScale.minResourcePresetId.

The selected configuration allocates minimum resources to the master. Depending on the load, the amount of RAM and number of vCPUs will increase automatically.

Note

If you do not provide the masterSpec.scalePolicy parameter, the minimum available master configuration will be applied.

By default, one master host is provided with the following resources:

  • Platform: Intel Cascade Lake
  • Guaranteed vCPU share: 100%
  • vCPU: 2
  • RAM: 8 GB

To use a Yandex Key Management Service encryption key to protect secrets, provide its ID in the kmsProvider.keyId parameter.

To enable sending logs to Yandex Cloud Logging, provide the logging settings in the masterSpec.masterLogging parameter.

To add a cloud label, provide its name and value in the labels parameter.
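
Below is a rough sketch of what a Cluster.Create REST request could look like, tying together the parameters mentioned above (masterSpec.locations, masterSpec.scalePolicy.autoScale.minResourcePresetId, labels). Treat it as an illustration only: the endpoint URL and the remaining field names (folderId, networkId, serviceAccountId, nodeServiceAccountId, zoneId, subnetId) are assumptions to verify against the API reference, not values to copy verbatim.

# Hypothetical request sketch; verify the endpoint and field names against the API reference.
curl \
  --request POST \
  --header "Authorization: Bearer $(yc iam create-token)" \
  --header "Content-Type: application/json" \
  --data '{
    "folderId": "<folder_ID>",
    "name": "<cluster_name>",
    "networkId": "<network_ID>",
    "serviceAccountId": "<resource_service_account_ID>",
    "nodeServiceAccountId": "<node_service_account_ID>",
    "labels": { "<cloud_label_name>": "<cloud_label_value>" },
    "masterSpec": {
      "locations": [
        { "zoneId": "<availability_zone>", "subnetId": "<subnet_ID>" }
      ],
      "scalePolicy": {
        "autoScale": { "minResourcePresetId": "<master_host_class>" }
      }
    }
  }' \
  https://mks.api.cloud.yandex.net/managed-kubernetes/v1/clusters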

Examples

Creating a Managed Service for Kubernetes cluster with a basic master

CLI
Terraform

Create a Managed Service for Kubernetes cluster with the following test specifications:

  • Name: k8s-single.
  • Network: mynet.
  • Availability zone: ru-central1-a.
  • Subnet: mysubnet.
  • Service account: myaccount.
  • Security group ID: enp6saqnq4ie244g67sb.

To create a Managed Service for Kubernetes cluster with a basic master, run this command:

yc managed-kubernetes cluster create \
   --name k8s-single \
   --network-name mynet \
   --master-location zone=ru-central1-a,subnet-name=mysubnet \
   --service-account-name myaccount \
   --node-service-account-name myaccount \
   --security-group-ids enp6saqnq4ie244g67sb

Create a Managed Service for Kubernetes cluster and its network with the following test specifications:

  • Name: k8s-single.

  • Folder ID: b1gia87mbaomkfvsleds.

  • Network: mynet.

  • Subnet: mysubnet. Its network settings are as follows:

    • Availability zone: ru-central1-a.
    • Range: 10.1.0.0/16.
  • Service account: myaccount.

  • Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.

  • Yandex Key Management Service encryption key: kms-key.

  • Security group: k8s-public-services. It contains rules for connecting to services from the internet.

Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:

locals {
  folder_id   = "b1gia87mbaomkfvsleds"
}

resource "yandex_kubernetes_cluster" "k8s-single" {
  name = "k8s-single"
  network_id = yandex_vpc_network.mynet.id
  master {
    master_location {
      zone      = yandex_vpc_subnet.mysubnet.zone
      subnet_id = yandex_vpc_subnet.mysubnet.id
    }
    security_group_ids = [yandex_vpc_security_group.k8s-public-services.id]
  }
  service_account_id      = yandex_iam_service_account.myaccount.id
  node_service_account_id = yandex_iam_service_account.myaccount.id
  depends_on = [
    yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
    yandex_resourcemanager_folder_iam_member.vpc-public-admin,
    yandex_resourcemanager_folder_iam_member.images-puller,
    yandex_resourcemanager_folder_iam_member.encrypterDecrypter
  ]
  kms_provider {
    key_id = yandex_kms_symmetric_key.kms-key.id
  }
}

resource "yandex_vpc_network" "mynet" {
  name = "mynet"
}

resource "yandex_vpc_subnet" "mysubnet" {
  name = "mysubnet"
  v4_cidr_blocks = ["10.1.0.0/16"]
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.mynet.id
}

resource "yandex_iam_service_account" "myaccount" {
  name        = "myaccount"
  description = "Service account for the single Kubernetes cluster"
}

resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
  # The service account gets the "k8s.clusters.agent" role.
  folder_id = local.folder_id
  role      = "k8s.clusters.agent"
  member    = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
  # The service account gets the "vpc.publicAdmin" role.
  folder_id = local.folder_id
  role      = "vpc.publicAdmin"
  member    = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
  # The service account gets the "container-registry.images.puller" role.
  folder_id = local.folder_id
  role      = "container-registry.images.puller"
  member    = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
  # The service account gets the "kms.keys.encrypterDecrypter" role.
  folder_id = local.folder_id
  role      = "kms.keys.encrypterDecrypter"
  member    = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}

resource "yandex_kms_symmetric_key" "kms-key" {
  # A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
  name              = "kms-key"
  default_algorithm = "AES_128"
  rotation_period   = "8760h" # 1 year.
}

resource "yandex_vpc_security_group" "k8s-public-services" {
  name        = "k8s-public-services"
  description = "Group rules allow connections to services from the internet. Apply the rules for node groups only."
  network_id  = yandex_vpc_network.mynet.id
  ingress {
    protocol          = "TCP"
    description       = "The rule allows availability checks from the load balancer's range of addresses. It is required for a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
    predefined_target = "loadbalancer_healthchecks"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows master-to-node and node-to-node communication inside a security group."
    predefined_target = "self_security_group"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows pod-to-pod and service-to-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
    v4_cidr_blocks    = yandex_vpc_subnet.mysubnet.v4_cidr_blocks
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ICMP"
    description       = "The rule allows debug ICMP packets from internal subnets."
    v4_cidr_blocks    = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
  }
  ingress {
    protocol          = "TCP"
    description       = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or replace the existing ones as required."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 30000
    to_port           = 32767
  }
  egress {
    protocol          = "ANY"
    description       = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 0
    to_port           = 65535
  }
}

Creating a Managed Service for Kubernetes cluster with a highly available master in three availability zones

CLI
Terraform

Create a Managed Service for Kubernetes cluster with the following test specifications:

  • Name: k8s-ha-three-zones.
  • Network: my-ha-net.
  • Subnet for the ru-central1-a availability zone: mysubnet-a.
  • Subnet for the ru-central1-b availability zone: mysubnet-b.
  • Subnet for the ru-central1-d availability zone: mysubnet-d.
  • Service account: ha-k8s-account.
  • Security group ID: enp6saqnq4ie244g67sb.

To create a Managed Service for Kubernetes cluster with a highly available master in three availability zones, run this command:

yc managed-kubernetes cluster create \
   --name k8s-ha-three-zones \
   --network-name my-ha-net \
   --master-location zone=ru-central1-a,subnet-name=mysubnet-a \
   --master-location zone=ru-central1-b,subnet-name=mysubnet-b \
   --master-location zone=ru-central1-d,subnet-name=mysubnet-d \
   --service-account-name ha-k8s-account \
   --node-service-account-name ha-k8s-account \
   --security-group-ids enp6saqnq4ie244g67sb

Create a Managed Service for Kubernetes cluster and its network with the following test specifications:

  • Name: k8s-ha-three-zones.

  • Folder ID: b1gia87mbaomkfvsleds.

  • Network: my-ha-net.

  • Subnet: mysubnet-a. Its network settings are as follows:

    • Availability zone: ru-central1-a.
    • Range: 10.5.0.0/16.
  • Subnet: mysubnet-b. Its network settings are as follows:

    • Availability zone: ru-central1-b.
    • Range: 10.6.0.0/16.
  • Subnet: mysubnet-d. Its network settings are as follows:

    • Availability zone: ru-central1-d.
    • Range: 10.7.0.0/16.
  • Service account: ha-k8s-account.

  • Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.

  • Yandex Key Management Service encryption key: kms-key.

  • Security group: regional-k8s-sg. It contains rules for service traffic.

Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:

locals {
  folder_id   = "b1gia87mbaomkfvsleds"
}

resource "yandex_kubernetes_cluster" "k8s-ha-three-zones" {
  name = "k8s-ha-three-zones"
  network_id = yandex_vpc_network.my-ha-net.id
  master {
    master_location {
      zone      = yandex_vpc_subnet.mysubnet-a.zone
      subnet_id = yandex_vpc_subnet.mysubnet-a.id
    }
    master_location {
      zone      = yandex_vpc_subnet.mysubnet-b.zone
      subnet_id = yandex_vpc_subnet.mysubnet-b.id
    }
    master_location {
      zone      = yandex_vpc_subnet.mysubnet-d.zone
      subnet_id = yandex_vpc_subnet.mysubnet-d.id
    }
    security_group_ids = [yandex_vpc_security_group.ha-k8s-sg.id]
  }
  service_account_id      = yandex_iam_service_account.ha-k8s-account.id
  node_service_account_id = yandex_iam_service_account.ha-k8s-account.id
  depends_on = [
    yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
    yandex_resourcemanager_folder_iam_member.vpc-public-admin,
    yandex_resourcemanager_folder_iam_member.images-puller,
    yandex_resourcemanager_folder_iam_member.encrypterDecrypter
  ]
  kms_provider {
    key_id = yandex_kms_symmetric_key.kms-key.id
  }
}

resource "yandex_vpc_network" "my-ha-net" {
  name = "my-ha-net"
}

resource "yandex_vpc_subnet" "mysubnet-a" {
  name = "mysubnet-a"
  v4_cidr_blocks = ["10.5.0.0/16"]
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.my-ha-net.id
}

resource "yandex_vpc_subnet" "mysubnet-b" {
  name = "mysubnet-b"
  v4_cidr_blocks = ["10.6.0.0/16"]
  zone           = "ru-central1-b"
  network_id     = yandex_vpc_network.my-ha-net.id
}

resource "yandex_vpc_subnet" "mysubnet-d" {
  name = "mysubnet-d"
  v4_cidr_blocks = ["10.7.0.0/16"]
  zone           = "ru-central1-d"
  network_id     = yandex_vpc_network.my-ha-net.id
}

resource "yandex_iam_service_account" "ha-k8s-account" {
  name        = "ha-k8s-account"
  description = "Service account for the highly available Kubernetes cluster"
}

resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
  # The service account gets the "k8s.clusters.agent" role.
  folder_id = local.folder_id
  role      = "k8s.clusters.agent"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
  # The service account gets the "vpc.publicAdmin" role.
  folder_id = local.folder_id
  role      = "vpc.publicAdmin"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
  # The service account gets the "container-registry.images.puller" role.
  folder_id = local.folder_id
  role      = "container-registry.images.puller"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
  # The service account gets the "kms.keys.encrypterDecrypter" role.
  folder_id = local.folder_id
  role      = "kms.keys.encrypterDecrypter"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_kms_symmetric_key" "kms-key" {
  # A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
  name              = "kms-key"
  default_algorithm = "AES_128"
  rotation_period   = "8760h" # 1 year.
}

resource "yandex_vpc_security_group" "ha-k8s-sg" {
  name        = "ha-k8s-sg"
  description = "Group rules ensure the basic performance of the Managed Service for Kubernetes cluster. Apply them to the cluster and node groups."
  network_id  = yandex_vpc_network.my-ha-net.id
  ingress {
    protocol          = "TCP"
    description       = "The rule allows availability checks from the load balancer's range of addresses. It is required for a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
    predefined_target = "loadbalancer_healthchecks"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows master-to-node and node-to-node communication inside a security group."
    predefined_target = "self_security_group"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows pod-to-pod and service-to-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
    v4_cidr_blocks    = concat(yandex_vpc_subnet.mysubnet-a.v4_cidr_blocks, yandex_vpc_subnet.mysubnet-b.v4_cidr_blocks, yandex_vpc_subnet.mysubnet-d.v4_cidr_blocks)
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ICMP"
    description       = "The rule allows debug ICMP packets from internal subnets."
    v4_cidr_blocks    = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
  }
  ingress {
    protocol          = "TCP"
    description       = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or replace the existing ones as required."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 30000
    to_port           = 32767
  }
  egress {
    protocol          = "ANY"
    description       = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 0
    to_port           = 65535
  }
}

Creating a Managed Service for Kubernetes cluster with a highly available master in a single availability zone

CLI
Terraform

Create a Managed Service for Kubernetes cluster with the following test specifications:

  • Name: k8s-ha-one-zone.
  • Network: my-ha-net.
  • Subnet for the ru-central1-a availability zone: my-ha-subnet.
  • Number of identical --master-location parameters: three. This creates three instances of the master in one availability zone.
  • Availability zone: ru-central1-a.
  • Service account: ha-k8s-account.
  • Security group ID: enp6saqnq4ie244g67sb.

To create a Managed Service for Kubernetes cluster with a highly available master in a single availability zone, run this command:

yc managed-kubernetes cluster create \
   --name k8s-ha-one-zone \
   --network-name my-ha-net \
   --master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
   --master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
   --master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
   --service-account-name ha-k8s-account \
   --node-service-account-name ha-k8s-account \
   --security-group-ids enp6saqnq4ie244g67sb

Create a Managed Service for Kubernetes cluster and its network with the following test specifications:

  • Name: k8s-ha-one-zone.

  • Folder ID: b1gia87mbaomkfvsleds.

  • Network: my-ha-net.

  • Subnet: my-ha-subnet. Its network settings are as follows:

    • Availability zone: ru-central1-a.
    • Range: 10.5.0.0/16.
  • Service account: ha-k8s-account.

  • Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.

  • Yandex Key Management Service encryption key: kms-key.

  • Security group: ha-k8s-sg. It contains rules for service traffic.

Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:

locals {
  folder_id   = "b1gia87mbaomkfvsleds"
}

resource "yandex_kubernetes_cluster" "k8s-ha-one-zone" {
  name = "k8s-ha-one-zone"
  network_id = yandex_vpc_network.my-ha-net.id
  master {
    master_location {
      zone      = yandex_vpc_subnet.my-ha-subnet.zone
      subnet_id = yandex_vpc_subnet.my-ha-subnet.id
    }
    master_location {
      zone      = yandex_vpc_subnet.my-ha-subnet.zone
      subnet_id = yandex_vpc_subnet.my-ha-subnet.id
    }
    master_location {
      zone      = yandex_vpc_subnet.my-ha-subnet.zone
      subnet_id = yandex_vpc_subnet.my-ha-subnet.id
    }
    security_group_ids = [yandex_vpc_security_group.ha-k8s-sg.id]
  }
  service_account_id      = yandex_iam_service_account.ha-k8s-account.id
  node_service_account_id = yandex_iam_service_account.ha-k8s-account.id
  depends_on = [
    yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
    yandex_resourcemanager_folder_iam_member.vpc-public-admin,
    yandex_resourcemanager_folder_iam_member.images-puller,
    yandex_resourcemanager_folder_iam_member.encrypterDecrypter
  ]
  kms_provider {
    key_id = yandex_kms_symmetric_key.kms-key.id
  }
}

resource "yandex_vpc_network" "my-ha-net" {
  name = "my-ha-net"
}

resource "yandex_vpc_subnet" "my-ha-subnet" {
  name = "my-ha-subnet"
  v4_cidr_blocks = ["10.5.0.0/16"]
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.my-ha-net.id
}

resource "yandex_iam_service_account" "ha-k8s-account" {
  name        = "ha-k8s-account"
  description = "Service account for the highly available Kubernetes cluster"
}

resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
  # The service account gets the "k8s.clusters.agent" role.
  folder_id = local.folder_id
  role      = "k8s.clusters.agent"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
  # The service account gets the "vpc.publicAdmin" role.
  folder_id = local.folder_id
  role      = "vpc.publicAdmin"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
  # The service account gets the "container-registry.images.puller" role.
  folder_id = local.folder_id
  role      = "container-registry.images.puller"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
  # The service account gets the "kms.keys.encrypterDecrypter" role.
  folder_id = local.folder_id
  role      = "kms.keys.encrypterDecrypter"
  member    = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}

resource "yandex_kms_symmetric_key" "kms-key" {
  # A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
  name              = "kms-key"
  default_algorithm = "AES_128"
  rotation_period   = "8760h" # 1 year.
}

resource "yandex_vpc_security_group" "ha-k8s-sg" {
  name        = "ha-k8s-sg"
  description = "Group rules ensure the basic performance of the Managed Service for Kubernetes cluster. Apply them to the cluster and node groups."
  network_id  = yandex_vpc_network.my-ha-net.id
  ingress {
    protocol          = "TCP"
    description       = "The rule allows availability checks from the load balancer's range of addresses. It is required for a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
    predefined_target = "loadbalancer_healthchecks"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows master-to-node and node-to-node communication inside a security group."
    predefined_target = "self_security_group"
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ANY"
    description       = "The rule allows pod-to-pod and service-to-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
    v4_cidr_blocks    = yandex_vpc_subnet.my-ha-subnet.v4_cidr_blocks
    from_port         = 0
    to_port           = 65535
  }
  ingress {
    protocol          = "ICMP"
    description       = "The rule allows debug ICMP packets from internal subnets."
    v4_cidr_blocks    = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
  }
  ingress {
    protocol          = "TCP"
    description       = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or replace the existing ones as required."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 30000
    to_port           = 32767
  }
  egress {
    protocol          = "ANY"
    description       = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
    v4_cidr_blocks    = ["0.0.0.0/0"]
    from_port         = 0
    to_port           = 65535
  }
}

See also

Overview of cluster connection methods
