© 2025 Direct Cursus Technology L.L.C.
Yandex Managed Service for Kubernetes


Resource relationships in Managed Service for Kubernetes

Written by Yandex Cloud. Improved by Dmitry A. Updated on November 11, 2025.
  • Kubernetes cluster
    • Cluster labels
  • Master
    • Master computing resources
  • Node group
    • Configuration
    • Connecting to group nodes
    • Taints and tolerations
    • Node labels
  • Pod
    • Masquerading IP addresses for pods
  • Service
  • Namespace
  • Service accounts
  • Managed Service for Kubernetes cluster statistics
  • Use cases

Kubernetes is a system for managing containerized applications. It provides cluster tools that automate the deployment, scaling, and management of applications running in containers.

The main entity in Kubernetes is the Kubernetes cluster.

Kubernetes cluster

Kubernetes clusters consist of a master and one or multiple node groups. The master manages a Kubernetes cluster. Containerized user applications run on nodes.

Kubernetes fully manages the master and monitors the state and health of node groups. Users can manage nodes directly and configure Kubernetes clusters using the Yandex Cloud management console and the Managed Service for Kubernetes CLI and API.

Warning

Kubernetes node groups require internet access to download images and components.

Internet access can be provided by:

  • Assigning a public IP address to each node in the group.
  • Configuring a VM as a NAT instance.
  • Setting up a NAT gateway.

Kubernetes clusters in the Yandex Cloud infrastructure use the following resources:

Resource            Quantity   Comment
Subnet              2          Kubernetes reserves IP address ranges to use for pods and services.
Public IP address   N          N includes one public IP address for the NAT instance and a public IP address for each node in the group if you use one-to-one NAT.

Cluster labels

To divide Kubernetes clusters into logical groups, use cloud labels.

Cloud labels for Kubernetes clusters are subject to the following rules:

  • Label key requirements:

    • Must be from 1 to 63 characters long.
    • May contain lowercase Latin letters, numbers, hyphens, and underscores.
    • Must start with a letter.
  • Label value requirements:

    • May be up to 63 characters long.
    • May contain lowercase Latin letters, numbers, hyphens, and underscores.

Learn more about managing cloud labels in Updating a cluster.

Note

You cannot assign Kubernetes labels to a cluster.

Master

A master is a component that manages a Kubernetes cluster.

A master runs Kubernetes control processes, including the Kubernetes API server, scheduler, and main resource controllers. The master's lifecycle is managed by Kubernetes when you create or delete a Kubernetes cluster. The master is responsible for cluster-wide decisions that affect all Kubernetes cluster nodes, such as scheduling workloads (e.g., containerized applications), managing the workload lifecycle, and scaling.

There are two types of masters, which differ in the number of master hosts and their placement across availability zones:

  • Base: Contains one master host in a single availability zone. This type of master is cheaper but not fault-tolerant. Its former name is zonal.

    Warning

    A base master is billed as a zonal one and displayed in Yandex Cloud Billing as Managed Kubernetes. Zonal Master - small.

  • Highly available: Contains three master hosts that you can place as follows:

    • In one availability zone and one subnet. Choose this type if you want to ensure high availability of the cluster and reduce its internal network latency.
    • In three different availability zones. This master ensures the best fault tolerance: if one zone becomes unavailable, the master will continue to function.

    The internal IP address of a highly available master is available only within a single Yandex Virtual Private Cloud cloud network.

    Its former name is regional.

    Warning

    A highly available master is billed as a regional one and displayed in Yandex Cloud Billing as Managed Kubernetes. Regional Master - small.

For more information about master settings, see Creating a Managed Service for Kubernetes cluster.

Master computing resources

By default, the following resources are provided for the operation of one master host:

  • Platform: Intel Cascade Lake
  • Guaranteed vCPU share: 100%
  • vCPU: 2
  • RAM: 8 GB

When creating or updating a cluster, you can select a master configuration suitable for your tasks.

The selected configuration allocates minimum resources to the master. Depending on the load, the amount of RAM and number of vCPUs will increase automatically.

Note

The feature of selecting and updating a master configuration is at the Preview stage.

The following master configurations are available for Intel Cascade Lake with a guaranteed vCPU share of 100%:

  • Standard: Standard hosts with 4:1 RAM to vCPU ratio:

    vCPUs  RAM, GB
    2 8
    4 16
    8 32
    16 64
    32 128
    64 256
    80 320
  • CPU-optimized: Hosts with a decreased RAM to vCPU ratio of 2:1:

    vCPUs  RAM, GB
    4 8
    8 16
    16 32
    32 64
  • Memory-optimized: Hosts with an increased RAM to vCPU ratio of 8:1:

    vCPUs  RAM, GB
    2 16
    4 32
    8 64
    16 128
    32 256

You can update the master configuration without stopping your Managed Service for Kubernetes cluster.

Node group

A node group is a Yandex Compute Cloud instance group in a Kubernetes cluster, where VM instances share the same configuration and are used to run the user's containers.

Individual nodes in node groups are Yandex Compute Cloud virtual machines with automatically generated names. To configure nodes, follow the node group management guides.

Alert

Do not change node VM settings, including names, network interfaces, and SSH keys, using the Compute Cloud interfaces or SSH connections to the VM.

This can disrupt the operation of individual nodes, groups of nodes, and the whole Managed Service for Kubernetes cluster.

See also the description of instance groups during a zonal incident and our mitigation guidelines.

Configuration

When creating a node group, you can configure the following VM parameters:

  • VM type.

  • Type and number of cores (vCPUs).

  • Amount of memory (RAM) and disk space.

  • Placement group.

    Note

    The placement group determines the maximum available node group size:

    • In an instance group with the spread placement strategy, the maximum number of instances depends on the limits.
    • In an instance group with the partition placement strategy, the maximum number of instances in a partition depends on the quotas.
  • Kernel parameters.

    • Safe kernel parameters are isolated between pods.
    • Unsafe parameters affect the operation of the pods and the node as a whole. In Managed Service for Kubernetes, you cannot change unsafe kernel parameters unless you explicitly specified their names when creating a node group.

    Note

    You should only specify kernel parameters that belong to namespaces, e.g., net.ipv4.ping_group_range. Parameters that do not belong to namespaces, e.g., vm.max_map_count, should be set directly in the OS or via a DaemonSet with privileged containers after creating a Managed Service for Kubernetes node group.

    For more information about kernel parameters, see this Kubernetes guide.
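
The DaemonSet approach mentioned above can be sketched as follows. This is a minimal illustration, not a vetted manifest: the DaemonSet name, namespace, images, and sysctl value are all illustrative assumptions, and a privileged init container is used to apply the non-namespaced parameter on every node.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-tuner        # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: sysctl-tuner
  template:
    metadata:
      labels:
        app: sysctl-tuner
    spec:
      initContainers:
        - name: set-sysctl
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]  # illustrative value
          securityContext:
            privileged: true  # required to change non-namespaced kernel parameters
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9  # keeps the pod running after the init step
```

Because a DaemonSet schedules one pod per node, the parameter is applied to every node in the cluster, including nodes added later.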

You can create groups with different configurations in a single Kubernetes cluster and spread them across multiple availability zones.

In Managed Service for Kubernetes, containerd is the only available container runtime environment.

Connecting to group nodes

You can connect to nodes in a group in the following ways:

  • Via an SSH client using a standard SSH key pair, see Connecting to a node over SSH.
  • Via an SSH client and the CLI using OS Login, see Connecting to a node via OS Login.

Taints and tolerations

Taints are special policies applied to the nodes in a group. Using taints, you can ensure that certain pods are not scheduled onto inappropriate nodes. For example, you can allow rendering pods to be scheduled only on nodes with GPUs.

Taints give you the following advantages:

  • Taints persist when a node is restarted or replaced with a new one.
  • Taints are applied automatically to nodes added to the group, including nodes created when scaling the group.

You can apply a taint to a node group when creating or updating the group. If you apply a taint to a previously created node group or remove one from it, the group will be recreated: first, all nodes in the group are deleted; then, nodes with the new configuration are added to the group.

Each taint has three parts:

<key> = <value>:<effect>

The following taint effects are available:

  • NO_SCHEDULE: Prohibits scheduling new pods on the group's nodes; currently running pods are not affected.
  • PREFER_NO_SCHEDULE: Avoids scheduling pods on the group's nodes if other groups have resources available.
  • NO_EXECUTE: Stops pods on the nodes, evicts them to other groups, and prohibits running new pods.

Tolerations are exceptions to taints. With tolerations, you can allow particular pods to run on nodes even if the node group's taint prohibits this.

There are two types of tolerations:

  • Equal triggers if the key, value, and effect of the taint match those of the toleration. This is the default operator.

  • Exists triggers if the key and effect of the taint match those of the toleration; the value is ignored.

For example, if the key1=value1:NoSchedule taint is set for the group's nodes, you can use tolerations to place pods on a node as follows:

apiVersion: v1
kind: Pod
...
spec:
  ...
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"

Or, alternatively:

apiVersion: v1
kind: Pod
...
spec:
  ...
  tolerations:
  - key: "key1"
    operator: "Exists"
    effect: "NoSchedule"

Note

Tolerations are added to system pods automatically so they can run on any available node.

For more information about taints and tolerations, see this Kubernetes guide.

Node labels

You can group nodes in Managed Service for Kubernetes using node labels. There are two types of node labels:

  • Cloud labels are used for logical separation and labeling of resources. For example, you can use cloud labels to track how much you spend on different node groups. They are designated as template-labels in the CLI and as labels in Terraform.

    Cloud labels for nodes are subject to the following rules:

    Label key requirements:

    • May contain lowercase Latin letters, numbers, and the -_./\@ symbols.
    • Must start with a letter.
    • Maximum length: 63 characters.

    Label value requirements:

    • May contain lowercase Latin letters, numbers, and the -_./\@ symbols.
    • Maximum length: 63 characters.

    Learn more about managing cloud labels in Updating a node group.

  • Kubernetes labels are used to group Kubernetes objects and distribute pods across cluster nodes. They are designated as node-labels in the CLI and as node_labels in Terraform.

    When adding Kubernetes labels, specify the node properties to group objects by. You can find examples of Kubernetes labels in this Kubernetes guide.

    You can define a set of key: value Kubernetes labels for every object. All keys within the set must be unique.

    Kubernetes label keys of nodes may consist of two parts, a prefix and a name, separated by a slash (/).

    A prefix is an optional part of a key. The prefix requirements are as follows:

    • It must be a DNS subdomain, i.e., a series of DNS labels separated by periods (.).
    • It may be up to 253 characters long.
    • It must be followed by a slash (/).

    A name is a required part of a key. Follow these naming requirements:

    • It may be up to 63 characters long.
    • It may contain lowercase Latin letters, numbers, and the -_. symbols.
    • It must start and end with a letter or number.

    The same rules apply to the value as to the name.

    Label example: app.kubernetes.io/name: mysql, where app.kubernetes.io/ is the prefix, name is the name, and mysql is the value.

    You can use the Managed Service for Kubernetes API and the Kubernetes API to manage Kubernetes labels. Keep in mind the following:

    • Kubernetes labels added via the Kubernetes API may go missing: when a node group is updated or modified, some nodes are recreated under different names and some old nodes are deleted.
    • If Kubernetes labels are created via the Managed Service for Kubernetes API, you cannot delete them using the Kubernetes API: they will be restored after deletion.

    Warning

    To make sure no labels are missing, use the Managed Service for Kubernetes API.

    For more information about adding and deleting Kubernetes labels, see Managing Kubernetes node labels. Adding or deleting a label does not cause the node group to be recreated.

You can use both types of labels concurrently, e.g., when creating a node group via the CLI or Terraform.
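
As a sketch of how Kubernetes node labels steer pod placement: a pod with a nodeSelector is scheduled only on nodes that carry the matching label. The manifest below reuses the app.kubernetes.io/name: mysql label from the example above; the pod name and container image are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client        # hypothetical name
spec:
  nodeSelector:
    app.kubernetes.io/name: mysql   # pod runs only on nodes carrying this Kubernetes label
  containers:
    - name: app
      image: nginx:1.25     # illustrative image
```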

Pod

A pod is a request to run one or multiple containers on a group node. In a Kubernetes cluster, each pod has its unique IP address so that applications do not conflict when using ports.

Containers are described in pods via JSON or YAML objects.
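
For instance, a pod running a single container can be described with a YAML object like this; the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod           # hypothetical name
spec:
  containers:
    - name: hello
      image: nginx:1.25     # illustrative image
      ports:
        - containerPort: 80 # port the containerized application listens on
```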

Masquerading IP addresses for pods

If a pod needs access to resources outside the cluster, its IP address will be replaced with the IP address of the node the pod is running on. For this, the cluster uses IP masquerading.

By default, masquerading is enabled for the entire IP address range except for pod CIDRs and link-local address CIDRs.

To implement IP masquerading, the ip-masq-agent pod is deployed on each cluster node. The settings for this pod are stored in a ConfigMap object called ip-masq-agent. If you need to disable pod IP masquerading in a particular direction, e.g., to access the pods over a VPN or Yandex Cloud Interconnect, specify these IP address ranges in the data.config.nonMasqueradeCIDRs parameter:

...
data:
  config: |+
    nonMasqueradeCIDRs:
      - <non-masquerade_CIDRs_of_IP_addresses>
...

To view how the IP masquerading rules are configured in iptables on a specific node, connect to the node over SSH and run this command:

sudo iptables -t nat -L IP-MASQ -v -n

For more information, see the ip-masq-agent page on GitHub.
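
For illustration, an ip-masq-agent ConfigMap with one such range filled in might look as follows. The 10.8.0.0/16 CIDR is an assumed VPN range, and the namespace shown is an assumption about where the agent's settings live:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system   # assumed location of the agent's settings
data:
  config: |+
    nonMasqueradeCIDRs:
      - 10.8.0.0/16        # assumed VPN range; traffic to it keeps the pod's source IP
```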

Service

A service is an abstraction that provides network load balancing. Traffic rules are configured for pods grouped by a set of labels.

By default, a service is only available within a specific Kubernetes cluster, but it can be public and receive requests from outside the Kubernetes cluster.
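
A sketch of a service that balances traffic across pods selected by a label; the service name, label, and ports are illustrative assumptions. Changing the type to LoadBalancer is what would make such a service public:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service         # hypothetical name
spec:
  type: ClusterIP           # reachable only inside the cluster; LoadBalancer would expose it
  selector:
    app: web                # traffic is routed to pods carrying this label
  ports:
    - port: 80              # port the service listens on
      targetPort: 8080      # port the pods' containers listen on
```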

Namespace

A namespace is an abstraction that logically isolates Kubernetes cluster resources and distributes quotas for them. This is useful for isolating resources of different teams and projects in a single Kubernetes cluster.
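
The isolation and quota distribution described above can be sketched as a namespace paired with a ResourceQuota; the names and limits below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # hypothetical name
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"       # illustrative limits on the namespace's total resources
    requests.memory: 8Gi
    pods: "20"
```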

Service accounts

Managed Service for Kubernetes clusters use two types of service accounts:

  • Cloud service accounts

    These accounts exist at the level of an individual folder in the cloud and can be used by Managed Service for Kubernetes as well as by other services.

    For more information, see Access management in Managed Service for Kubernetes and Service accounts.

  • Kubernetes service accounts

    These accounts exist only at the level of an individual Managed Service for Kubernetes cluster. Kubernetes uses them to:

    • Authenticate cluster API calls from applications deployed in the cluster.
    • Configure access for these applications.

    When deploying a Managed Service for Kubernetes cluster, a set of Kubernetes service accounts is automatically created in the kube-system namespace.

    For authentication within the Kubernetes cluster hosting the service account, create a token for this account manually.

    For more information, see Creating a static configuration file and this Kubernetes guide.
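
One way to create such a token manually is a Secret of type kubernetes.io/service-account-token annotated with the account's name; Kubernetes then populates the Secret with a token. The Secret name, namespace, and service account name below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-robot-token      # hypothetical name
  namespace: default
  annotations:
    kubernetes.io/service-account.name: my-robot   # existing Kubernetes service account (assumed)
type: kubernetes.io/service-account-token
```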

Warning

Do not confuse cloud service accounts with Kubernetes service accounts.

In our Managed Service for Kubernetes guides, a service account means a regular cloud service account unless otherwise specified.

Managed Service for Kubernetes cluster statistics

Managed Service for Kubernetes automatically sends cluster metrics to Yandex Monitoring. Metrics are available for the following Kubernetes objects:

  • Container
  • Master
  • Node
  • Pod
  • Persistent volume

You can get cluster metrics using the following tools:

  • Management console
  • Monitoring interface
  • Monitoring API
  • Metrics Provider app
  • Prometheus Operator app

For more information, see Monitoring the state of a Managed Service for Kubernetes cluster.

You can find the metrics description in the Yandex Monitoring metric reference.

Use cases

  • Creating and configuring a Kubernetes cluster with no internet access
  • Managed Service for Kubernetes cluster backups in Object Storage
  • Cluster monitoring with Prometheus and Grafana
  • Updating the Metrics Server parameters
  • Using node groups with GPUs and no pre-installed drivers

See also

  • Kubernetes: Why use it, how it works, and what makes it an industry standard
