Creating a node group
A node group is a group of VMs in a Managed Service for Kubernetes cluster that have the same configuration and run user containers.
Before creating a node group, create a Managed Service for Kubernetes cluster and make sure your cloud has enough resources.
Warning
Starting from Kubernetes version 1.30 in the RAPID release channel, the base node image changes from Ubuntu 20.04 to Ubuntu 22.04. In existing clusters and node groups, the OS version will be upgraded using the method you select. This upgrade will later become available in the REGULAR and STABLE release channels.
For OS upgrade details and recommendations, see Updating node group OS.
To create a Managed Service for Kubernetes node group:
- In the management console, select the folder where you want to create a Managed Service for Kubernetes cluster.
- From the list of services, select Managed Service for Kubernetes.
-
Select the Managed Service for Kubernetes cluster to create a node group for.
-
On the Managed Service for Kubernetes cluster page, go to the Node manager tab.
-
Click Create a node group.
-
Enter a name and description for the Managed Service for Kubernetes node group.
-
In the Kubernetes version field, select the Kubernetes version for the Managed Service for Kubernetes nodes.
- In the Container runtime field, select containerd.
- In the Labels field, add the node cloud labels.
- Under Scaling, select the scaling type:
  - Fixed: To keep a fixed number of Managed Service for Kubernetes nodes in the group. The Number of nodes setting will become available; specify the number of nodes in the group there.
  - Automatic: To manage the number of group nodes using Managed Service for Kubernetes cluster autoscaling. The following settings will become available:
    - Minimum number of nodes.
    - Maximum number of nodes.
    - Initial number of nodes with which the Managed Service for Kubernetes group will be created.

  Warning

  You cannot change the scaling type after creating a Managed Service for Kubernetes node group.
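For reference, the two scaling modes map to mutually exclusive CLI flags in the create command described later in this guide. A hedged sketch (the flag values below are placeholders, and the `min`/`max`/`initial` keys are the ones listed in the CLI section):

```shell
# Fixed scaling: the group always keeps the specified number of nodes.
# yc managed-kubernetes node-group create ... --fixed-size 3

# Automatic scaling: the cluster autoscaler keeps the node count
# within the min/max bounds, starting from the initial value.
# yc managed-kubernetes node-group create ... --auto-scale min=1,max=5,initial=2
```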
-
-
Under Changes during creation and updates, specify the maximum number of nodes by which you can exceed the size of the group when updating it, as well as the maximum number of unavailable nodes during the update.
-
Under Computing resources:
-
Select a platform.
-
Enter the required number of GPUs and vCPUs, guaranteed vCPU performance, and the amount of RAM.
-
Optionally, make the VM instance preemptible by checking the relevant box.
-
Optionally, enable a software-accelerated network.
Warning
Before activating a software-accelerated network, make sure that you have sufficient cloud resources available to create an additional Managed Service for Kubernetes node.
Note
The set of parameters depends on the platform you select.
-
-
Optionally, under GPU settings, specify whether to create the Managed Service for Kubernetes node group without pre-installed NVIDIA® drivers and CUDA® libraries for GPU acceleration.
-
Optionally, under Placement, enter a name for the Managed Service for Kubernetes node placement group. You will not be able to edit this setting after creating the Managed Service for Kubernetes node group.
Note
The placement group determines the maximum available node group size:
- In an instance group with the spread placement strategy, the maximum number of instances depends on the limits.
- In an instance group with the partition placement strategy, the maximum number of instances in a partition depends on the quotas.
- Under Storage:
  - Specify the Managed Service for Kubernetes node Disk type:
    - HDD: Standard network drive; HDD network block storage.
    - SSD: Fast network drive; SSD network block storage.
    - Non-replicated SSD: Network drive with enhanced performance achieved by eliminating redundancy. You can only change the size of this disk type in 93 GB increments.

      Alert

      Non-replicated disks have no redundancy. If a disk fails, its data will be irretrievably lost. For more information, see Non-replicated disks and ultra high-speed network storages with three replicas (SSD).

    - SSD IO: Network drive with the same performance specifications as Non-replicated SSD, plus redundancy. You can only change the size of this disk type in 93 GB increments.
  - Specify the Managed Service for Kubernetes node disk size.
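The 93 GB increment rule for Non-replicated SSD and SSD IO disks is easy to get wrong when picking a size. A small sketch of a helper that checks a requested size against it (`valid_nr_disk_size` is a hypothetical function, not part of the yc CLI; the 4 TB cap for non-replicated disks is stated in the API section of this guide):

```shell
# Hypothetical helper: checks that a size in GB is a multiple of 93
# between 93 GB and 4 TB, and suggests the next valid size otherwise.
valid_nr_disk_size() {
  local size_gb="$1"
  if [ "$size_gb" -ge 93 ] && [ "$size_gb" -le 4096 ] && [ $((size_gb % 93)) -eq 0 ]; then
    echo "ok"
  else
    echo "invalid; next valid size: $(( (size_gb / 93 + 1) * 93 )) GB"
  fi
}

valid_nr_disk_size 93    # ok
valid_nr_disk_size 100   # invalid; next valid size: 186 GB
```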
-
Under Network settings:
- In the Public address field, select the IP address assignment method:
  - Auto: Assign a random IP address from the Yandex Cloud IP address pool.
  - No address: Do not assign a public IP address.
-
Select security groups.
Warning
The configuration of security groups determines cluster performance, availability, and services running in the cluster.
-
-
Under Location:
- Select the availability zone and subnet to place the group nodes in.
- Optionally, you can place nodes of a group with the fixed scaling type across multiple availability zones. To do this, click Add location and specify an additional availability zone and subnet.
Warning
You can place autoscaling group nodes only in one availability zone.
-
Under Access, configure one of the methods of connecting to nodes in a Managed Service for Kubernetes node group:
-
To connect to nodes via OS Login, select Access by OS Login.
In this case, you will not be able to specify SSH keys because these connection methods are mutually exclusive.
For more information on how to configure and use OS Login, see Connecting to a node via OS Login.
-
To connect to nodes using SSH keys, specify the required credentials:
-
In the Login field, enter the username.
-
In the SSH key field, paste the contents of the public key file.
For more information about preparing, configuring, and using SSH keys, see Connecting to a node over SSH.
-
You can change the metadata list after you create a cluster.
-
-
Under Maintenance window settings:
- In the Maintenance frequency / Disable field, select your preferred maintenance window:
  - Disable: Automatic updates disabled.
  - Anytime: Updates allowed at any time.
  - Daily: Updates will take place within the time interval specified in the Time (UTC) and duration field.
  - Custom: Updates will take place within the time interval specified in the Weekly schedule field.
-
Under Additional:
- To be able to edit unsafe kernel parameters on the Managed Service for Kubernetes group nodes, click Add variable. Enter the name of each kernel parameter in a separate field.
- To set up taints for Managed Service for Kubernetes nodes, click Add policy. Enter the key, value, and effect for each taint in a separate set of fields.
- To set up Kubernetes labels for group nodes, click Add label. Enter the key and value for each Kubernetes label in a separate set of fields.
-
Optionally, expand the Metadata section and add metadata for the nodes.
Warning
Metadata settings can affect the behavior and health of the group's nodes. Change these settings only if you know exactly what you want to do.
Providing user data in the metadata with the user-data key is not supported.

To add metadata, click Add field. Specify the key and value for each metadata element in a separate set of fields.
You can change the metadata list after you create a cluster.
-
Click Create.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
-
View the description of the CLI command to create a Managed Service for Kubernetes node group:
yc managed-kubernetes node-group create --help

- Specify Managed Service for Kubernetes node group parameters in the create command (our example does not include all available parameters):
yc managed-kubernetes node-group create \
  --allowed-unsafe-sysctls <names_of_unsafe_kernel_parameters> \
  --cluster-name <cluster_name> \
  --cores <number_of_vCPUs> \
  --core-fraction <guaranteed_vCPU_share> \
  --daily-maintenance-window <maintenance_window_settings> \
  --disk-size <storage_size_in_GB> \
  --disk-type <storage_type> \
  --fixed-size <fixed_number_of_nodes_in_group> \
  --max-expansion <expanding_group_size_when_updating> \
  --max-unavailable <number_of_unavailable_nodes_when_updating> \
  --location zone=[<availability_zone>],subnet-id=[<subnet_ID>] \
  --memory <amount_of_RAM_in_GB> \
  --name <node_group_name> \
  --network-acceleration-type <network_acceleration_type> \
  --network-interface security-group-ids=[<security_group_IDs>],ipv4-address=<IP_address_assignment_method> \
  --platform-id <platform_ID> \
  --container-runtime containerd \
  --preemptible \
  --public-ip \
  --template-labels <cloud_label_key=cloud_label_value> \
  --node-labels <k8s_label_key=k8s_label_value> \
  --version <Kubernetes_version_on_group_nodes> \
  --node-name <node_name_template> \
  --node-taints <taints> \
  --container-network-settings pod-mtu=<MTU_value_for_group_pods>

Where:
- --allowed-unsafe-sysctls: Permission for Managed Service for Kubernetes group nodes to use unsafe kernel parameters, comma-separated.
- --cluster-name: Name of the Managed Service for Kubernetes cluster to create the node group in.
- --cores: Number of vCPUs for Managed Service for Kubernetes nodes.
- --core-fraction: Guaranteed share of vCPUs for Managed Service for Kubernetes nodes.
- --daily-maintenance-window: Maintenance window settings.
- --disk-size: Disk size of the Managed Service for Kubernetes node.
- --disk-type: Disk type of the Managed Service for Kubernetes node: network-nvme or network-hdd.
- Type of scaling:
  - --fixed-size: Fixed number of nodes in a Managed Service for Kubernetes node group.
  - --auto-scale: Settings for Managed Service for Kubernetes cluster autoscaling:
    - min: Minimum number of nodes in the group.
    - max: Maximum number of nodes in the group.
    - initial: Initial number of nodes in the group.
You cannot change the scaling type after creating a node group.
-
-
- --max-expansion: Maximum number of nodes by which you can increase the size of the group when updating it.
- --max-unavailable: Maximum number of unavailable nodes in the group when updating it.
- --location: Availability zone and subnet to host Managed Service for Kubernetes nodes. You can specify more than one option but only a single subnet per zone. Use a separate --location parameter for each availability zone.

  Warning

  You can place autoscaling group nodes only in one availability zone.

  If you provide --location, --network-interface, and --public-ip in the same command, you will get an error. It is enough to specify the location of a Managed Service for Kubernetes node group either in --location or --network-interface.

  To grant internet access to Managed Service for Kubernetes cluster nodes, do one of the following:
  - Assign a public IP address to the cluster nodes by specifying --network-interface ipv4-address=nat or --network-interface ipv6-address=nat.
  - Enable access to Managed Service for Kubernetes nodes from the internet after creating a node group.
- --memory: Amount of memory allocated for Managed Service for Kubernetes nodes.
- --name: Managed Service for Kubernetes node group name.
- --network-acceleration-type: Network acceleration type:
  - standard: No acceleration.
  - software-accelerated: Software-accelerated network.

  Warning

  Before activating a software-accelerated network, make sure that you have sufficient cloud resources available to create an additional Managed Service for Kubernetes node.

- --network-interface: Network settings:
  - security-group-ids: IDs of security groups.
  - subnets: Names of subnets that will host the nodes.
  - ipv4-address: Method of assigning an IPv4 address.
  - ipv6-address: Method of assigning an IPv6 address.

  ipv4-address and ipv6-address determine the method of assigning an IP address:
  - auto: Only the internal IP address is assigned to the node.
  - nat: Public and internal IP addresses are assigned to the node.
- --platform-id: Managed Service for Kubernetes node platform.
- --container-runtime: containerd runtime environment.
- --preemptible: Flag you set for preemptible VMs.
- --public-ip: Flag you set if the Managed Service for Kubernetes node group needs a public IP address.
- --template-labels: Node group cloud labels. You can specify multiple labels separated by commas.
- --node-labels: Node group Kubernetes labels.
- --version: Kubernetes version on the Managed Service for Kubernetes group nodes.
- --node-name: Managed Service for Kubernetes node name template. The name is unique if the template contains at least one of the following variables:
  - {instance_group.id}: Instance group ID.
  - {instance.index}: Unique instance number in the instance group. Possible values: 1 to N, where N is the number of instances in the group.
  - {instance.index_in_zone}: Instance number in a zone. It is unique for a specific instance group within the zone.
  - {instance.short_id}: Instance ID that is unique within the group. Consists of four letters.
  - {instance.zone_id}: Zone ID.

  For example, prod-{instance.short_id}-{instance_group.id}. If not specified, the default value is used: {instance_group.id}-{instance.short_id}.

- --node-taints: Kubernetes taints. You can specify multiple values.
- --container-network-settings: MTU value for network connections to group pods. This setting is not applicable for clusters with Calico or Cilium network policy controllers.
Result:
done (1m17s)
id: catpl8c44kii********
cluster_id: catcsqidoos7********
...
start_time:
  hours: 22
duration: 36000s
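To preview what names the --node-name template will produce, you can mimic the substitution locally. A sketch with made-up sample IDs (the real values are generated for you; `render_node_name` is a hypothetical helper, not a yc command):

```shell
# Substitute two of the template variables with sample values.
# {instance.index}, {instance.index_in_zone}, and {instance.zone_id}
# would be handled the same way.
render_node_name() {
  local template="$1" group_id="$2" short_id="$3"
  echo "$template" \
    | sed -e "s/{instance_group.id}/${group_id}/" \
          -e "s/{instance.short_id}/${short_id}/"
}

render_node_name 'prod-{instance.short_id}-{instance_group.id}' catpl8c44kii abcd
# prod-abcd-catpl8c44kii
```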
-
To add metadata for nodes, use the --metadata or --metadata-from-file parameter.

Use metadata to configure the method of connecting to nodes in a node group. You can configure one method only because the methods are mutually exclusive.
To connect to nodes in a node group, specify metadata for the selected connection method:
- To connect to nodes via OS Login, add metadata with the enable-oslogin key set to true.

  For more on configuring and using OS Login, see Connecting to a node via OS Login.

- To connect to nodes using SSH keys, add metadata with the ssh-keys key and its value listing the connection details.

  For more on preparing, configuring, and using SSH keys, see Connecting to a node over SSH.
Warning
Metadata settings can affect the behavior and health of the group's nodes. Change these settings only if you know exactly what you want to do.
Providing user data in the metadata with the user-data key is not supported.

Add metadata using one of the following methods:

- Using --metadata, specify one or multiple key=value pairs separated by commas. The key value is provided explicitly.
- Using --metadata-from-file, specify one or multiple key=path_to_file_with_value pairs separated by commas. The key value will be read from a file. This may be of use if the value is too long to provide explicitly or contains line breaks or other special characters.

You can change the metadata list after you create a cluster.
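As a sketch of the file-based method: prepare a key file and pass it via --metadata-from-file. The "username:public_key" value format follows the SSH connection guide referenced above; the username, key, and file name below are placeholders:

```shell
# Write the ssh-keys metadata value to a file
# (the key string here is a truncated placeholder).
cat > ssh_keys.txt <<'EOF'
k8s-user:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAexample k8s-user
EOF

# Hypothetical invocation (requires an existing cluster; not run here):
# yc managed-kubernetes node-group create \
#   ...
#   --metadata-from-file ssh-keys=ssh_keys.txt
```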
-
-
To specify a placement group for Managed Service for Kubernetes nodes:
- Get a list of placement groups using the yc compute placement-group list command.
- Provide a placement group name or ID in the --placement-group parameter when creating a Managed Service for Kubernetes node group:

  yc managed-kubernetes node-group create \
    ...
    --placement-group <placement_group_name_or_ID>
Note
The placement group determines the maximum available node group size:
- In an instance group with the spread placement strategy, the maximum number of instances depends on the limits.
- In an instance group with the partition placement strategy, the maximum number of instances in a partition depends on the quotas.
-
To create a Managed Service for Kubernetes node group:
-
In the folder containing the cluster description file, create a configuration file with the new Managed Service for Kubernetes node group's parameters.
Here is an example of the configuration file structure:
resource "yandex_kubernetes_node_group" "<node_group_name>" {
  cluster_id = yandex_kubernetes_cluster.<cluster_name>.id
  name       = "<node_group_name>"
  ...
  instance_template {
    name        = "<node_name_template>"
    platform_id = "<platform_for_nodes>"
    placement_policy {
      placement_group_id = "<placement_group>"
    }
    network_acceleration_type = "<network_acceleration_type>"
    container_runtime {
      type = "containerd"
    }
    labels {
      "<cloud_label_name>" = "<cloud_label_value>"
    }
    node_labels {
      "<Kubernetes_label_name>" = "<Kubernetes_label_value>"
    }
    ...
  }
  ...
  scale_policy {
    <node_group_scaling_settings>
  }
  deploy_policy {
    max_expansion   = <expanding_group_size_when_updating>
    max_unavailable = <number_of_unavailable_nodes_when_updating>
  }
  ...
  allocation_policy {
    location {
      zone = "<availability_zone>"
    }
  }
}

Where:
- cluster_id: Managed Service for Kubernetes cluster ID.
- name: Managed Service for Kubernetes node group name.
- instance_template: Managed Service for Kubernetes node parameters:
  - name: Managed Service for Kubernetes node name template. The name is unique if the template contains at least one of the following variables:
    - {instance_group.id}: Instance group ID.
    - {instance.index}: Unique instance number in the instance group. Possible values: 1 to N, where N is the number of instances in the group.
    - {instance.index_in_zone}: Instance number in a zone. It is unique for a specific instance group within the zone.
    - {instance.short_id}: Instance ID that is unique within the group. Consists of four letters.
    - {instance.zone_id}: Zone ID.

    For example, prod-{instance.short_id}-{instance_group.id}. If not specified, the default value is used: {instance_group.id}-{instance.short_id}.
  - platform_id: Managed Service for Kubernetes node platform.
  - placement_group_id: Placement group for Managed Service for Kubernetes nodes.

    Note

    The placement group determines the maximum available node group size:
    - In an instance group with the spread placement strategy, the maximum number of instances depends on the limits.
    - In an instance group with the partition placement strategy, the maximum number of instances in a partition depends on the quotas.

  - network_acceleration_type: Network acceleration type:
    - standard: No acceleration.
    - software-accelerated: Software-accelerated network.
Warning
Before activating a software-accelerated network, make sure that you have sufficient cloud resources available to create an additional Managed Service for Kubernetes node.
  - container_runtime.type: containerd runtime environment.
  - labels: Node group cloud labels. You can specify multiple labels separated by commas.
  - node_labels: Node group Kubernetes labels.
- scale_policy: Scaling settings. You cannot change the scaling type after creating a node group.
- deploy_policy: Group deployment settings:
  - max_expansion: Maximum number of nodes by which you can increase the size of the group when updating it.
  - max_unavailable: Maximum number of unavailable nodes in the group when updating it.
- allocation_policy: Placement settings. These contain the location section with the zone parameter, i.e., the availability zone where you want to place the group nodes. You can place nodes of a group with the fixed scaling type in multiple availability zones. To do this, specify each availability zone you need in a separate location section.

  Warning

  You can place autoscaling group nodes only in one availability zone.
-
- To create a node group with a fixed number of nodes, add the fixed_scale section:

  resource "yandex_kubernetes_node_group" "<node_group_name>" {
    ...
    scale_policy {
      fixed_scale {
        size = <number_of_nodes_in_group>
      }
    }
  }

- To create an autoscaling Managed Service for Kubernetes node group, add the auto_scale section:

  resource "yandex_kubernetes_node_group" "<node_group_name>" {
    ...
    scale_policy {
      auto_scale {
        min     = <minimum_number_of_nodes_in_node_group>
        max     = <maximum_number_of_nodes_in_node_group>
        initial = <initial_number_of_nodes_in_node_group>
      }
    }
  }
To add metadata for nodes, provide it in the
instance_template.metadataparameter.Use metadata to configure the method of connecting to nodes in a node group. You can configure one method only because they are mutually exclusive.
To connect to nodes in a node group, specify metadata for the selected connection method:
- To connect to nodes via OS Login, add metadata with the enable-oslogin key set to true.

  For more on configuring and using OS Login, see Connecting to a node via OS Login.

- To connect to nodes using SSH keys, add metadata with the ssh-keys key and its value listing the connection details.

  For more on preparing, configuring, and using SSH keys, see Connecting to a node over SSH.
Warning
Metadata settings can affect the behavior and health of the group's nodes. Change these settings only if you know exactly what you want to do.
Providing user data in the metadata with the user-data key is not supported.

Add metadata using one of the following methods:

- Specify one or multiple key=value pairs. The key value is provided explicitly.
- Specify one or multiple key=file(path_to_file_with_value) pairs. The key value will be read from a file. This may be of use if the value is too long to provide explicitly or contains line breaks or other special characters.

resource "yandex_kubernetes_node_group" "<node_group_name>" {
  ...
  instance_template {
    metadata = {
      "key_1" = "value"
      "key_2" = file("<path_to_file_with_value>")
      ...
    }
    ...
  }
  ...
}

You can change the metadata list after you create a cluster.
-
-
To add DNS records:
- Add the instance_template.network_interface.ipv4_dns_records section:

  resource "yandex_kubernetes_node_group" "<node_group_name>" {
    ...
    instance_template {
      network_interface {
        ipv4_dns_records {
          fqdn        = "<DNS_record_FQDN>"
          dns_zone_id = "<DNS_zone_ID>"
          ttl         = "<DNS_record_TTL_in_seconds>"
          ptr         = "<PTR_record_creation>"
        }
      }
    }
  }

  Where ptr denotes a PTR record creation: true or false.

  In a DNS record's FQDN, you can use a template with variables:
  - {instance_group.id}: Instance group ID.
  - {instance.index}: Unique instance number in the instance group. Possible values: 1 to N, where N is the number of instances in the group.
  - {instance.index_in_zone}: Instance number in a zone. It is unique for a specific instance group within a zone.
  - {instance.short_id}: Instance ID that is unique within the group. It consists of four alphabetic characters.
  - {instance.zone_id}: Zone ID.

  For more information, see this Terraform provider guide.
-
-
Make sure the configuration files are correct.
-
In the command line, go to the folder where you created the configuration file.
-
Run a check using this command:
terraform plan
If the configuration is described correctly, the terminal will display a list of created resources and their parameters. If the configuration contains any errors, Terraform will point them out. This is a test step; no resources will be created.
-
-
Create a Managed Service for Kubernetes node group.
-
If the configuration does not contain any errors, run this command:
terraform apply -
Confirm that you want to create the resources.
After this, all required resources will be created in the specified folder and the IP addresses of the VMs will be displayed in the terminal. You can check the new resources and their configuration using the management console.

Timeouts
The Terraform provider sets time limits for operations with Managed Service for Kubernetes cluster node groups:
- Creating and editing: 60 minutes.
- Deleting: 20 minutes.
Operations in excess of this time will be interrupted.
How do I modify these limits?
Add the timeouts section to the cluster node group description, e.g.:

resource "yandex_kubernetes_node_group" "<node_group_name>" {
  ...
  timeouts {
    create = "1h30m"
    update = "1h30m"
    delete = "60m"
  }
}
Use the create API method and provide the following in the request:
- Managed Service for Kubernetes cluster ID in the clusterId parameter. You can get it with the list of Managed Service for Kubernetes clusters in the folder.
- Managed Service for Kubernetes node group configuration in the nodeTemplate parameter.
- Network acceleration type in the nodeTemplate.networkSettings.type parameter.

  Warning

  Before activating a software-accelerated network, make sure that you have sufficient cloud resources available to create an additional Managed Service for Kubernetes node.
- containerd runtime environment in the nodeTemplate.containerRuntimeSettings.type parameter.
- Node group cloud labels in the nodeTemplate.labels parameter.
- Node group Kubernetes labels in the nodeLabels parameter.
- Scaling settings in the scalePolicy parameter. You cannot change the scaling type after creating a node group.
- Node group deployment settings in the deployPolicy parameter:
  - maxExpansion: Maximum number of nodes by which you can increase the size of the group when updating it.
  - maxUnavailable: Maximum number of unavailable nodes in the group when updating it.
- Managed Service for Kubernetes node group placement settings in the allocationPolicy parameters.

  Warning

  You can place autoscaling group nodes only in one availability zone.
- Maintenance window settings in the maintenancePolicy parameters.
- List of settings to update in the updateMask parameter.

  Warning

  The API method will assign default values to all the parameters of the object you are modifying unless you explicitly provide them in your request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.

- For nodes to use non-replicated disks, provide network-ssd-nonreplicated for the nodeTemplate.bootDiskSpec.diskTypeId parameter. You can only change the size of non-replicated disks in 93 GB increments. The maximum size of this type of disk is 4 TB.
Alert
Non-replicated disks have no redundancy. If a disk fails, its data will be irretrievably lost. For more information, see Non-replicated disks and ultra high-speed network storages with three replicas (SSD).
- To enable Managed Service for Kubernetes group nodes to use unsafe kernel parameters, provide their names in the allowedUnsafeSysctls parameter.
- To set taints, provide their values in the nodeTaints parameter.
- To set a template for Managed Service for Kubernetes node names, provide it in the nodeTemplate.name parameter. The name is unique if the template contains at least one of the following variables:
  - {instance_group.id}: Instance group ID.
  - {instance.index}: Unique instance number in the instance group. Possible values: 1 to N, where N is the number of instances in the group.
  - {instance.index_in_zone}: Instance number in a zone. It is unique for a specific instance group within the zone.
  - {instance.short_id}: Instance ID that is unique within the group. Consists of four letters.
  - {instance.zone_id}: Zone ID.

  For example, prod-{instance.short_id}-{instance_group.id}. If not specified, the default value is used: {instance_group.id}-{instance.short_id}.
To specify a placement group for Managed Service for Kubernetes nodes, provide the placement group ID in the
nodeTemplate.placementPolicy.placementGroupIdparameter.Note
The placement group determines the maximum available node group size:
- In an instance group with the spread placement strategy, the maximum number of instances depends on the limits.
- In an instance group with the partition placement strategy, the maximum number of instances in a partition depends on the quotas.
- To add metadata for nodes, provide it in the nodeTemplate.metadata parameter.

  Use metadata to configure the method of connecting to nodes in a node group. You can configure one method only because the methods are mutually exclusive.
To connect to nodes in a node group, specify metadata for the selected connection method:
- To connect to nodes via OS Login, add metadata with the enable-oslogin key set to true.

  For more on configuring and using OS Login, see Connecting to a node via OS Login.

- To connect to nodes using SSH keys, add metadata with the ssh-keys key and its value listing the connection details.

  For more on preparing, configuring, and using SSH keys, see Connecting to a node over SSH.
Warning
Metadata settings can affect the behavior and health of the group's nodes. Change these settings only if you know exactly what you want to do.
Providing user data in the metadata with the user-data key is not supported.

Add metadata by specifying one or multiple key=value pairs separated by commas. The key value is provided explicitly.

You can change the metadata list after you create a cluster.
-
-
To add DNS records, provide their settings in the nodeTemplate.v4AddressSpec.dnsRecordSpecs parameter. In a DNS record's FQDN, you can use the nodeTemplate.name node name template with variables.
Creating a group of Managed Service for Kubernetes nodes may take a few minutes depending on the number of nodes.
Individual nodes in node groups are Yandex Compute Cloud virtual machines with automatically generated names. To configure nodes, follow the node group management guides.
Alert
Do not change node VM settings, including names, network interfaces, and SSH keys, using the Compute Cloud interfaces or SSH connections to the VM.
This can disrupt the operation of individual nodes, groups of nodes, and the whole Managed Service for Kubernetes cluster.
Examples
Create a node group for the Managed Service for Kubernetes cluster with the following test specifications:
- Name: k8s-demo-ng.
- Description: Test node group.
- Node name template: test-{instance.short_id}-{instance_group.id}.
- Kubernetes cluster: Specify the ID of an existing cluster, e.g., cat0adul1fj0********.
- Kubernetes version on group nodes: 1.29.
- Node platform: standard-v3.
- Number of vCPUs for nodes: Two.
- Guaranteed vCPU share: 50%.
- Disk size: 64 GB.
- Disk type: network-ssd.
- Number of nodes: One.
- Number of nodes Managed Service for Kubernetes can create in the group when updating it: Up to three.
- Number of nodes Managed Service for Kubernetes can delete from the group when updating it: Up to one.
- RAM: 2 GB.
- Update time: From 22:00 to 08:00 UTC.
- Network acceleration type: standard (no acceleration).
- Network settings:
  - Security group ID, e.g., enp6saqnq4ie244g67sb.
  - Subnet ID, e.g., e9bj3s90g9hm********.
  - Assigning public and internal IP addresses to nodes: Enabled.
- Kubernetes label: node-label1=node-value1.
- Kubernetes taint: taint1=taint-value1:NoSchedule.
- Cloud label: template-label1=template-value1.
- Permission to use unsafe kernel parameters: Enabled. We added the kernel.msg* and net.core.somaxconn parameters.
- VM being the only node of the group: Preemptible.
Run this command:
yc managed-kubernetes node-group create \
--name k8s-demo-ng \
--description 'Test node group' \
--node-name test-{instance.short_id}-{instance_group.id} \
--cluster-id cat0adul1fj0******** \
--version 1.29 \
--platform-id standard-v3 \
--cores 2 \
--core-fraction 50 \
--disk-size 64 \
--disk-type network-ssd \
--fixed-size 1 \
--max-expansion 3 \
--max-unavailable 1 \
--memory 2 \
--daily-maintenance-window 'start=22:00,duration=10h' \
--network-acceleration-type standard \
--network-interface security-group-ids=enp6saqnq4ie244g67sb,subnets=e9bj3s90g9hm********,ipv4-address=nat \
--node-labels node-label1=node-value1 \
--node-taints taint1=taint-value1:NoSchedule \
--template-labels template-label1=template-value1 \
--allowed-unsafe-sysctls='kernel.msg*,net.core.somaxconn' \
--preemptible
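Once the command completes, you can verify the result. A sketch (the cluster ID is the placeholder used above; both commands require configured yc and kubectl credentials, so they are shown commented out):

```shell
# List node groups in the cluster, then watch the new node register:
# yc managed-kubernetes node-group list --cluster-id cat0adul1fj0********
# kubectl get nodes
```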
-
Place the node group configuration file in the same folder as the cluster description file.
resource "yandex_kubernetes_node_group" "k8s-demo-ng" {
  name        = "k8s-demo-ng"
  description = "Test node group"
  cluster_id  = "cat0adul1fj0********"
  version     = "1.29"
  instance_template {
    name        = "test-{instance.short_id}-{instance_group.id}"
    platform_id = "standard-v3"
    resources {
      cores         = 2
      core_fraction = 50
      memory        = 2
    }
    boot_disk {
      size = 64
      type = "network-ssd"
    }
    network_acceleration_type = "standard"
    network_interface {
      security_group_ids = ["enp6saqnq4ie244g67sb"]
      subnet_ids         = ["e9bj3s90g9hm********"]
      nat                = true
    }
    scheduling_policy {
      preemptible = true
    }
  }
  scale_policy {
    fixed_scale {
      size = 1
    }
  }
  deploy_policy {
    max_expansion   = 3
    max_unavailable = 1
  }
  maintenance_policy {
    auto_upgrade = true
    auto_repair  = true
    maintenance_window {
      start_time = "22:00"
      duration   = "10h"
    }
  }
  node_labels = {
    node-label1 = "node-value1"
  }
  node_taints = ["taint1=taint-value1:NoSchedule"]
  labels = {
    "template-label1" = "template-value1"
  }
  allowed_unsafe_sysctls = ["kernel.msg*", "net.core.somaxconn"]
}
Make sure the configuration file is correct.
-
In the command line, go to the folder where you created the configuration file.
-
Run a check using this command:
terraform plan
If the configuration is described correctly, the terminal will display a list of created resources and their parameters. If the configuration contains any errors, Terraform will point them out. This is a test step; no resources will be created.
-
-
Create a Managed Service for Kubernetes node group.
-
If the configuration does not contain any errors, run this command:
terraform apply -
Confirm that you want to create the resources.
After this, all required resources will be created in the specified folder and the IP addresses of the VMs will be displayed in the terminal. You can check the new resources and their configuration using the management console.