Resource relationships in Managed Service for Kubernetes
Kubernetes
The main entity that Managed Service for Kubernetes operates on is the Kubernetes cluster.
Kubernetes cluster
Kubernetes clusters consist of a master and one or more node groups. The master is responsible for managing the Kubernetes cluster. Containerized user applications run on nodes.
Managed Service for Kubernetes fully manages the master and monitors the state and health of node groups. Users can manage nodes directly and configure Kubernetes clusters using the Yandex Cloud management console, as well as the Managed Service for Kubernetes CLI and API.
Warning
Kubernetes node groups require internet access to download images and components.
Internet access can be provided by:
- Assigning a public IP address to each node in the group.
- Configuring a VM as a NAT instance.
- Setting up a NAT gateway.
Kubernetes clusters in the Yandex Cloud infrastructure use the following resources:
| Resource | Amount | Comment |
|---|---|---|
| Subnet | 2 | Kubernetes reserves IP address ranges to be used for pods and services. |
| Public IP | N | N includes: one public IP address for the NAT instance, plus a public IP address for each node in the group if you use one-to-one NAT. |
Master
A master is a component that manages a Kubernetes cluster.
A master runs the Kubernetes control processes, including the Kubernetes API server, scheduler, and main resource controllers. The master's lifecycle is managed by Managed Service for Kubernetes when you create or delete a Kubernetes cluster. The master is responsible for cluster-wide decisions affecting all Kubernetes cluster nodes, such as scheduling workloads (e.g., containerized applications), managing workload lifecycles, and scaling.
There are two types of masters that differ by their location in availability zones:

- Zonal: the master is created in a subnet in one availability zone.
- Regional: the master is created in a distributed manner in three subnets, one in each availability zone. If a zone becomes unavailable, the regional master remains functional.
Warning
The internal IP address of a regional master is only available within a single Yandex Virtual Private Cloud cloud network.
Node group
A node group is a group of VMs in a Kubernetes cluster that have the same configuration and run the user's containers.
Individual nodes in node groups are Yandex Compute Cloud virtual machines with automatically generated names. To configure nodes, follow the node group management guides.
Alert
Do not change node VM settings, including names, network interfaces, and SSH keys, using the Compute Cloud interfaces or SSH connections to the VM.
This can disrupt the operation of individual nodes, groups of nodes, and the whole Managed Service for Kubernetes cluster.
Configuration
When creating a group of nodes, you can configure the following VM parameters:

- VM type.
- Type and number of cores (vCPUs).
- Amount of memory (RAM) and disk space.
- Kernel parameters:
  - Safe kernel parameters are isolated between pods.
  - Unsafe parameters affect the operation of the pods and the node as a whole. In Managed Service for Kubernetes, you cannot change unsafe kernel parameters unless their names were explicitly specified when the node group was created.
Note
You should only specify kernel parameters that belong to namespaces, e.g., `net.ipv4.ping_group_range`. Parameters that do not belong to namespaces, e.g., `vm.max_map_count`, should be set directly in the OS or using a DaemonSet with containers in privileged mode after creating a Managed Service for Kubernetes node group.

For more information about kernel parameters, see the Kubernetes documentation.
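As an illustration, a namespaced (safe) kernel parameter can be requested per pod through the pod's security context; this is a minimal sketch using the parameter from the note above (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example        # hypothetical name
spec:
  securityContext:
    sysctls:
      # net.ipv4.ping_group_range belongs to a namespace,
      # so this setting is isolated to this pod only
      - name: net.ipv4.ping_group_range
        value: "0 65535"
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
```

An unsafe parameter specified this way would only be accepted if its name was allowed when the node group was created.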
You can create groups with different configurations in a Kubernetes cluster and place them in different availability zones.
For Managed Service for Kubernetes, only containerd is supported as the container runtime.
Connecting to group nodes
You can connect to nodes in a group in the following ways:
- Via an SSH client using a standard SSH key pair, see Connecting to a node over SSH.
- Via an SSH client and the YC CLI using OS Login, see Connecting to a node via OS Login.
Taints and tolerations policies
Taints are special policies assigned to nodes in a group. Using taints, you can ensure that certain pods are not scheduled onto inappropriate nodes: for example, you can allow rendering pods to run only on nodes with GPUs.
Benefits of taints include:
- The policies persist when a node is restarted or replaced with a new one.
- When adding nodes to a group, the policies are assigned to the node automatically.
- The policies are automatically assigned to new nodes when scaling a node group.
You can set a taint on a node group when creating or updating the group. If you set a taint on a previously created node group or remove one from it, the group will be recreated: first, all nodes in the group are deleted, then nodes with the new configuration are added.
Each taint has three parts:

```
<key>=<value>:<effect>
```

List of available taint effects:

- `NO_SCHEDULE`: Prohibit running new pods on the group nodes (does not affect pods that are already running).
- `PREFER_NO_SCHEDULE`: Avoid running pods on the group nodes if there are resources available for this in other groups.
- `NO_EXECUTE`: Stop pods on the group nodes, evict them to other groups, and prohibit running new pods.
Tolerations: Exceptions from taint policies. Using tolerations, you can allow certain pods to run on nodes, even if the taint of the node group prohibits this.
For example, if the `key1=value1:NoSchedule` taint is set for the group nodes, you can place pods on these nodes using tolerations:

```yaml
apiVersion: v1
kind: Pod
...
spec:
  ...
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
```
Note
System pods are automatically assigned tolerations so they can run on any available node.
For more information about taints and tolerations, see the Kubernetes documentation.
Node labels
Node labels are a mechanism for grouping nodes in Managed Service for Kubernetes. There are two types of labels:

- Node group cloud labels are used to logically separate and label resources. For example, you can use cloud labels to track your expenses for different node groups. They are specified as `template-labels` in the CLI and as `labels` in Terraform.
- Kubernetes node labels are used to group Kubernetes objects and distribute pods across cluster nodes. They are specified as `node-labels` in the CLI and as `node_labels` in Terraform.

  When setting Kubernetes labels, specify the node characteristics to group objects by. You can find sample Kubernetes labels in the Kubernetes documentation.
You can use both types of labels at the same time, e.g., when creating a node group in the CLI or Terraform.
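For example, if a node group was created with the Kubernetes node label `group: gpu` (a hypothetical key and value), a pod can be pinned to that group's nodes with a `nodeSelector`; this is a sketch, with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job   # hypothetical name
spec:
  nodeSelector:
    group: gpu     # matches the node-labels value set for the node group
  containers:
    - name: app
      image: registry.example.com/cuda-app:latest   # placeholder image
```

The scheduler will only place this pod on nodes that carry the matching label.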
You can use both the Managed Service for Kubernetes API and the Kubernetes API to manage node labels; however, note the following:

- Kubernetes labels added via the Kubernetes API may be lost, because when a node group is updated or modified, some nodes are recreated with different names and some of the old ones are deleted.
- Kubernetes labels created via the Managed Service for Kubernetes API cannot be deleted using the Kubernetes API: if you delete them that way, they will be restored.
Warning
To make sure no labels are lost, use the Managed Service for Kubernetes API.
You can define a set of `key: value` Kubernetes labels for every object. All of its keys must be unique.

Kubernetes label keys may consist of two parts separated by `/`: a prefix and a name.

A prefix is an optional part of a key. The prefix requirements are as follows:

- It must be a DNS subdomain, i.e., a series of DNS labels separated by `.`.
- It may be up to 253 characters long.
- The last character must be followed by `/`.
A name is a required part of a key. The naming requirements are as follows:

- It may be up to 63 characters long.
- It may contain lowercase Latin letters, numbers, hyphens, underscores, and periods.
- It must begin and end with a letter or number.
For more information about adding and deleting Kubernetes labels, see Managing Kubernetes node labels. Adding or deleting a label will not result in the node group recreation.
Pod
A pod is a request to run one or more containers on a group node. In a Kubernetes cluster, each pod has a unique IP address so that applications do not conflict when using ports.
Containers are described in pods via JSON or YAML objects.
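A minimal sketch of such a YAML object: a pod with a single container. All containers in a pod share the pod's IP address and port space, which is why ports must not conflict within a pod. The name and image here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web        # hypothetical name
spec:
  containers:
    - name: nginx
      image: nginx:1.25   # example image
      ports:
        - containerPort: 80
```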
IP masquerade for pods
If a pod needs access to resources outside the cluster, its IP address is replaced by the IP address of the node the pod is running on. For this, the cluster uses IP masquerading.
By default, IP masquerade is enabled for the entire range of pod IP addresses.
To implement IP masquerading, the `ip-masq-agent` pod is deployed on each cluster node. The settings for this pod are stored in a ConfigMap object named `ip-masq-agent`. If you need to disable pod IP masquerading, e.g., to access the pods over a VPN or Yandex Cloud Interconnect, specify the required IP ranges in the `data.config.nonMasqueradeCIDRs` parameter:
```yaml
...
data:
  config: |+
    nonMasqueradeCIDRs:
      - <CIDR_IP_addresses_of_pods_that_do_not_require_masquerading>
...
```
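Put together, the full ConfigMap might look like the sketch below; the CIDR is an example value, substitute your own ranges:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |+
    nonMasqueradeCIDRs:
      # example range: traffic to these addresses keeps the pod's own source IP
      - 10.96.0.0/16
```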
Service
A service is an abstraction that provides network load balancing functions. Traffic rules are configured for a group of pods united by a set of labels.
By default, a service is only available within a specific Kubernetes cluster, but it can be public and receive requests from outside the Kubernetes cluster.
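As a sketch, a service of type `ClusterIP` (the default, reachable only inside the cluster) routes traffic to pods selected by label; changing `type` to `LoadBalancer` would expose it outside the cluster. Names, labels, and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc      # hypothetical name
spec:
  type: ClusterIP    # use LoadBalancer to accept traffic from outside the cluster
  selector:
    app: web         # traffic goes to pods carrying this label
  ports:
    - port: 80       # port the service listens on
      targetPort: 8080   # port the selected pods listen on
```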
Namespace
A namespace is an abstraction that logically isolates Kubernetes cluster resources and distributes quotas among them.
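For instance, a namespace paired with a ResourceQuota caps the total resources that pods in it may request; this is a sketch with illustrative names and limits:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a       # hypothetical name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"      # total CPU all pods in the namespace may request
    requests.memory: 8Gi   # total memory they may request
    pods: "20"             # maximum number of pods in the namespace
```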
Service accounts
Managed Service for Kubernetes clusters use two types of service accounts:
- Cloud service accounts

  These accounts exist at the level of an individual folder in the cloud and can be used both by Managed Service for Kubernetes and by other services.

  For more information, see Access management in Managed Service for Kubernetes and Service accounts.

- Kubernetes service accounts

  These accounts exist and run only at the level of an individual Managed Service for Kubernetes cluster. Kubernetes uses them to:

  - Authenticate cluster API calls from applications deployed in the cluster.
  - Configure access for these applications.

  When you deploy a Managed Service for Kubernetes cluster, a set of Kubernetes service accounts is automatically created in the `kube-system` namespace.

  For authentication within the Kubernetes cluster hosting the service account, create a token for this account manually.

  For more information, see Creating a static configuration file and the Kubernetes documentation.
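One common way to create such a token manually (on Kubernetes 1.24 and later, where service account token Secrets are no longer generated automatically) is a Secret of type `kubernetes.io/service-account-token` bound to the account via an annotation; the names below are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token          # hypothetical name
  namespace: kube-system
  annotations:
    # name of the Kubernetes service account the token is issued for
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token
```

After the Secret is created, the control plane populates its `data.token` field with a token for the account.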
Warning
Do not confuse cloud service accounts with Kubernetes service accounts.
In the service documentation, service account refers to a regular cloud service account unless otherwise specified.
Managed Service for Kubernetes cluster statistics
Managed Service for Kubernetes automatically sends cluster metrics to Yandex Monitoring. Metrics are available for the following Kubernetes objects:
- Container
- Master
- Node
- Pod
- Persistent volume
For more information about the tools you can use to get cluster metrics, see Monitoring cluster state in Managed Service for Kubernetes.

You can find the metrics description in the Yandex Monitoring metric reference.