Creating a Managed Service for Kubernetes cluster
Create a Managed Service for Kubernetes cluster and then create a node group.
To create a cluster with no internet access, see Creating and configuring a Kubernetes cluster with no internet access.
Getting started
- Go to the management console. If you are not signed up yet, follow the on-screen instructions.
- On the Yandex Cloud Billing page, make sure you have a linked billing account and its status is ACTIVE or TRIAL_ACTIVE. If you do not have a billing account yet, create one.
- If you do not have a folder yet, create one.
- Make sure that the account you are using to create the Managed Service for Kubernetes cluster has all the relevant roles.
- Make sure you have enough resources available in the cloud.
- If you do not have a network yet, create one.
- If you do not have any subnets yet, create them in the availability zones where your Managed Service for Kubernetes cluster and node group will be created.
- Create service accounts (see the CLI sketch after this list):
  - Service account with the k8s.clusters.agent and vpc.publicAdmin roles for the folder where the Managed Service for Kubernetes cluster is created. This service account will be used to create the resources required for the Managed Service for Kubernetes cluster.
  - Service account with the container-registry.images.puller role for the folder containing the Docker image registry. Nodes will pull the required Docker images from the registry on behalf of this account.
  You can use the same service account for both operations.
  Note
  To create a cluster with tunnel mode, the cluster service account also requires the k8s.tunnelClusters.agent role.
- Review the recommendations for using Managed Service for Kubernetes.
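Here is a minimal CLI sketch of the service account, network, and subnet prerequisites above. The names and folder IDs are example placeholders, you can use a single service account for both role sets, and you should check the flags against your CLI version:

# Create the service accounts.
yc iam service-account create --name k8s-res-sa
yc iam service-account create --name k8s-node-sa

# Assign the roles described above.
yc resource-manager folder add-access-binding <folder_ID> \
  --role k8s.clusters.agent \
  --subject serviceAccount:<k8s-res-sa_ID>
yc resource-manager folder add-access-binding <folder_ID> \
  --role vpc.publicAdmin \
  --subject serviceAccount:<k8s-res-sa_ID>
yc resource-manager folder add-access-binding <registry_folder_ID> \
  --role container-registry.images.puller \
  --subject serviceAccount:<k8s-node-sa_ID>

# Create a network and a subnet if you do not have them yet.
yc vpc network create --name mynet
yc vpc subnet create --name mysubnet \
  --zone ru-central1-a \
  --range 10.1.0.0/16 \
  --network-name mynet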
Create a Managed Service for Kubernetes cluster
- In the management console, select the folder where you want to create a Managed Service for Kubernetes cluster.
- Select Managed Service for Kubernetes.
- Click Create cluster.
- Enter a name and description for your Managed Service for Kubernetes cluster. The Managed Service for Kubernetes cluster name must be unique within Yandex Cloud.
- Specify a Service account for resources that will be used to create the cluster resources.
- Specify a Service account for nodes that the Managed Service for Kubernetes nodes will use to access the Yandex Container Registry Docker image registry.
- (Optional) Specify the Encryption key to be used for encrypting secrets.
  You will not be able to edit this setting once you create a cluster.
- Specify a release channel.
  You will not be able to edit this setting once you create a cluster.
- Add cloud labels in the Labels field.
- Under Master configuration:
  - (Optional) Expand the Compute resources section and select a resource configuration for the master.
    By default, the following resources are provided for the operation of one master host:
    - Platform: Intel Cascade Lake
    - Guaranteed vCPU share: 100%
    - vCPU: 2
    - RAM: 8 GB
    To allow further changes to the master's resource configuration, select Allow resource volume to increase in response to loads.
    Note
    The feature of selecting and updating the master configuration is in the Preview stage.
  - In the Kubernetes version field, select the Kubernetes version to be installed on the Managed Service for Kubernetes master.
  - In the Public address field, select an IP address assignment method:
    - Auto: Assign a random IP address from the Yandex Cloud IP pool.
    - No address: Do not assign a public IP address.
    Warning
    Do not place a cluster with a public IP address in subnets with internet access via a NAT instance. With this configuration, requests to the cluster's public IP address will get a response from the NAT instance's IP address, and the client will reject it. For more information, see Route priority in complex scenarios.
    You will not be able to edit this setting once you create a cluster.
  - In the Type of master field, select the Managed Service for Kubernetes master type:
    - Basic: Contains one master host in one availability zone. This type of master is cheaper but not fault-tolerant. Its former name is zonal.
      Warning
      A basic master is billed as a zonal one and displayed in Yandex Cloud Billing as Managed Kubernetes. Zonal Master - small.
    - Highly available: Contains three master hosts. Its former name is regional.
      Warning
      A highly available master is billed as a regional one and displayed in Yandex Cloud Billing as Managed Kubernetes. Regional Master - small.
  - In the Cloud network field, select the network to create a Managed Service for Kubernetes master in. If there are no networks available, create one.
    Note
    If you select a cloud network from another folder, assign the resource service account the required roles in that folder. To use a public IP address, also assign the vpc.publicAdmin role.
  - For a highly available master, select the master host placement in the Distribution of masters by availability zone field:
    - One zone: In one availability zone and one subnet. This is a good choice if you want to ensure high availability of the cluster and reduce network latency within it.
    - Different zones: In three different availability zones. This master ensures the greatest fault tolerance: if one zone becomes unavailable, the master remains operational.
  - Depending on the type of master you select:
    - For a basic master or a highly available master in a single zone, specify the availability zone and subnet.
    - For a highly available master in different zones, specify a subnet in each zone.
    If there are no subnets, create them.
    Warning
    You cannot change the master type and location after you create a cluster.
  - Select security groups for the Managed Service for Kubernetes cluster's network traffic.
    Warning
    The security group configuration affects the performance and availability of the cluster and the services running in it.
- Under Maintenance window settings:
  - In the Maintenance frequency / Disable field, configure the maintenance window:
    - Disable: Automatic updates are disabled.
    - Anytime: Updates are allowed at any time.
    - Daily: Updates take place within the time interval specified in the Time (UTC) and duration field.
    - Custom: Updates take place within the time interval specified in the Weekly schedule field.
- Under Cluster network settings:
  - (Optional) Select the network policy controller:
    - Enable network policy to use Calico.
    - Enable tunnel mode to use Cilium.
    You will not be able to edit this setting once you create a cluster.
    Warning
    You cannot enable the Calico network policy controller and the Cilium tunnel mode at the same time.
  - Specify the CIDR cluster, the range of IP addresses pod IPs are allocated from.
  - Specify the CIDR services, the range of IP addresses service IPs are allocated from.
  - Set the Managed Service for Kubernetes node subnet mask and the maximum number of pods per node.
- Click Create.
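The optional Encryption key above must refer to an existing Yandex Key Management Service symmetric key. If you do not have one yet, you can create it in advance, for example with the CLI. This is a minimal sketch with a placeholder key name and example algorithm and rotation values; check the flag names against your CLI version:

# Create a symmetric key to select as the cluster encryption key.
yc kms symmetric-key create \
  --name k8s-secrets-key \
  --default-algorithm aes-128 \
  --rotation-period 8760h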
If you do not have the Yandex Cloud CLI yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
To create a Managed Service for Kubernetes cluster:
- Specify the Managed Service for Kubernetes cluster parameters in the create command (not all parameters are given in the example):

  yc managed-kubernetes cluster create \
    --name test-k8s \
    --network-name default \
    --public-ip \
    --release-channel regular \
    --version 1.27 \
    --cluster-ipv4-range 10.1.0.0/16 \
    --service-ipv4-range 10.2.0.0/16 \
    --security-group-ids enpe5sdn7vs5********,enpj6c5ifh75******** \
    --service-account-name default-sa \
    --node-service-account-name default-sa \
    --master-location zone=ru-central1-a,subnet-name=mysubnet \
    --daily-maintenance-window start=22:00,duration=10h \
    --labels <cloud_label_name=cloud_label_value>
Where:
  - --name: Managed Service for Kubernetes cluster name.
  - --network-name: Network name.
    Note
    If you select a cloud network from another folder, assign the resource service account the required roles in that folder. To use a public IP address, also assign the vpc.publicAdmin role.
  - --public-ip: Flag indicating that the Managed Service for Kubernetes cluster needs a public IP address.
    Warning
    Do not place a cluster with a public IP address in subnets with internet access via a NAT instance. With this configuration, requests to the cluster's public IP address will get a response from the NAT instance's IP address, and the client will reject it. For more information, see Route priority in complex scenarios.
    You will not be able to edit this setting once you create a cluster.
  - --release-channel: Release channel.
    You will not be able to edit this setting once you create a cluster.
  - --version: Kubernetes version. Specify a version available for the selected release channel.
  - --cluster-ipv4-range: Range of IP addresses for allocating pod addresses.
  - --service-ipv4-range: Range of IP addresses for allocating service addresses.
  - --security-group-ids: List of Managed Service for Kubernetes cluster security group IDs.
    Warning
    The security group configuration affects the performance and availability of the cluster and the services running in it.
  - --service-account-name: Name of the service account for the resources. This service account will be used to create the resources required for the Managed Service for Kubernetes cluster.
  - --node-service-account-name: Name of the service account for the nodes. Nodes will pull the required Docker images from the registry on behalf of this account.
  - --master-location: Master configuration. In this parameter, specify the availability zone and subnet to host the master.
    The number of --master-location parameters depends on the type of master:
    - For a basic master, provide one --master-location parameter.
    - For a highly available master hosted across three availability zones, provide three --master-location parameters. In each one, specify a different availability zone and subnet.
    - For a highly available master hosted in a single availability zone, provide three --master-location parameters. In each one, specify the same availability zone and subnet.
  - --daily-maintenance-window: Maintenance window settings.
  - --labels: Cloud labels for the cluster.
  Result:

  done (5m47s)
  id: cathn0s6qobf********
  folder_id: b1g66jflru0e********
  ...
  service_account_id: aje3932acd0c********
  node_service_account_id: aje3932acd0c********
  release_channel: REGULAR
- Configure the cluster's Container Network Interface:
  You will not be able to edit this setting once you create a cluster.
  Warning
  You cannot enable the Calico network policy controller and the Cilium tunnel mode at the same time.
  - To enable the Calico network policy controller, set the --enable-network-policy flag in the Managed Service for Kubernetes cluster create command:

    yc managed-kubernetes cluster create \
      ...
      --enable-network-policy

  - To enable the Cilium tunnel mode, set the --cilium flag in the Managed Service for Kubernetes cluster create command:

    yc managed-kubernetes cluster create \
      ...
      --cilium
- To use a Yandex Key Management Service encryption key for protecting sensitive data, provide the key name or ID in the Managed Service for Kubernetes cluster create command:

  yc managed-kubernetes cluster create \
    ...
    --kms-key-name <encryption_key_name> \
    --kms-key-id <encryption_key_ID>

  You will not be able to edit this setting once you create a cluster.
- To enable sending logs to Yandex Cloud Logging, provide the logging settings in the --master-logging parameter of the Managed Service for Kubernetes cluster create command:

  yc managed-kubernetes cluster create \
    ...
    --master-logging enabled=<send_logs>,`
      `log-group-id=<log_group_ID>,`
      `folder-id=<folder_ID>,`
      `kube-apiserver-enabled=<send_kube-apiserver_logs>,`
      `cluster-autoscaler-enabled=<send_cluster-autoscaler_logs>,`
      `events-enabled=<send_Kubernetes_events>,`
      `audit-enabled=<send_audit_events>

  Where:
  - enabled: Flag that enables log sending, true or false.
  - log-group-id: ID of the log group to send the logs to.
  - folder-id: ID of the folder to send the logs to. The logs will be sent to the default log group of this folder.
  - kube-apiserver-enabled: Flag that enables kube-apiserver log sending, true or false.
  - cluster-autoscaler-enabled: Flag that enables cluster-autoscaler log sending, true or false.
  - events-enabled: Flag that enables Kubernetes event sending, true or false.
  - audit-enabled: Flag that enables audit event sending, true or false.

  If log sending is enabled but neither log-group-id nor folder-id is specified, the logs will be sent to the default log group of the folder with the Managed Service for Kubernetes cluster. You cannot set both log-group-id and folder-id at the same time.
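Once the create command completes, you can check the new cluster from the CLI. This is a minimal sketch using the test-k8s name from the example above; get-credentials is optional and only needed if you want to work with the cluster via kubectl right away:

# List the clusters in the current folder and inspect the new one.
yc managed-kubernetes cluster list
yc managed-kubernetes cluster get test-k8s

# Optionally, add the cluster credentials to your kubeconfig and check access.
yc managed-kubernetes cluster get-credentials test-k8s --external
kubectl cluster-info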
With Terraform
Terraform is distributed under the Business Source License.
For more information about the provider resources, see the Terraform provider documentation.
If you do not have Terraform yet, install it and configure the Yandex Cloud provider.
To create a Managed Service for Kubernetes cluster:
- In the configuration file, define the parameters of the resources you want to create:
  - Managed Service for Kubernetes cluster: Cluster description.
  - Network: Description of the cloud network to host the Managed Service for Kubernetes cluster. If you already have a suitable network, you do not need to describe it again.
    Note
    If you select a cloud network from another folder, assign the resource service account the required roles in that folder. To use a public IP address, also assign the vpc.publicAdmin role.
  - Subnets: Description of the subnets to connect the Managed Service for Kubernetes cluster hosts to. If you already have suitable subnets, you do not need to describe them again.
  - Service account for the Managed Service for Kubernetes cluster and nodes, along with its role settings. Create separate service accounts for the Managed Service for Kubernetes cluster and nodes, as required. If you already have a suitable service account, you do not need to describe it again.

  Here is a configuration file example:

  resource "yandex_kubernetes_cluster" "<Managed_Service_for_Kubernetes_cluster_name>" {
    network_id = yandex_vpc_network.<network_name>.id
    master {
      master_location {
        zone      = yandex_vpc_subnet.<subnet_name>.zone
        subnet_id = yandex_vpc_subnet.<subnet_name>.id
      }
    }
    service_account_id      = yandex_iam_service_account.<service_account_name>.id
    node_service_account_id = yandex_iam_service_account.<service_account_name>.id
    labels = {
      "<cloud_label_name>" = "<cloud_label_value>"
    }
    depends_on = [
      yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
      yandex_resourcemanager_folder_iam_member.vpc-public-admin,
      yandex_resourcemanager_folder_iam_member.images-puller
    ]
  }

  resource "yandex_vpc_network" "<network_name>" {
    name = "<network_name>"
  }

  resource "yandex_vpc_subnet" "<subnet_name>" {
    v4_cidr_blocks = ["<subnet_IP_address_range>"]
    zone           = "<availability_zone>"
    network_id     = yandex_vpc_network.<network_name>.id
  }

  resource "yandex_iam_service_account" "<service_account_name>" {
    name        = "<service_account_name>"
    description = "<service_account_description>"
  }

  resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
    # The service account gets the "k8s.clusters.agent" role.
    folder_id = "<folder_ID>"
    role      = "k8s.clusters.agent"
    member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
  }

  resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
    # The service account gets the "vpc.publicAdmin" role.
    folder_id = "<folder_ID>"
    role      = "vpc.publicAdmin"
    member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
  }

  resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
    # The service account gets the "container-registry.images.puller" role.
    folder_id = "<folder_ID>"
    role      = "container-registry.images.puller"
    member    = "serviceAccount:${yandex_iam_service_account.<service_account_name>.id}"
  }
Note
Cloud labels for a Kubernetes cluster are composed according to certain rules.
  To enable sending logs to Yandex Cloud Logging, add the master_logging section to the Managed Service for Kubernetes cluster description:

  resource "yandex_kubernetes_cluster" "<cluster_name>" {
    ...
    master {
      ...
      master_logging {
        enabled                    = <log_sending>
        log_group_id               = "<log_group_ID>"
        folder_id                  = "<folder_ID>"
        kube_apiserver_enabled     = <kube-apiserver_log_sending>
        cluster_autoscaler_enabled = <cluster-autoscaler_log_sending>
        events_enabled             = <Kubernetes_event_sending>
        audit_enabled              = <audit_event_sending>
      }
    }
  }

  Where:
  - enabled: Flag that enables log sending, true or false.
  - log_group_id: ID of the log group to send the logs to.
  - folder_id: ID of the folder to send the logs to. The logs will be sent to the default log group of this folder.
  - kube_apiserver_enabled: Flag that enables kube-apiserver log sending, true or false.
  - cluster_autoscaler_enabled: Flag that enables cluster-autoscaler log sending, true or false.
  - events_enabled: Flag that enables Kubernetes event sending, true or false.
  - audit_enabled: Flag that enables audit event sending, true or false.

  If log sending is enabled but neither log_group_id nor folder_id is specified, the logs will be sent to the default log group of the folder with the Managed Service for Kubernetes cluster. You cannot set both log_group_id and folder_id at the same time.

  For more information, see the Terraform provider documentation.
- Make sure the configuration files are correct.
  - In the command line, go to the folder where you created the configuration file.
  - Run a check using this command:

    terraform plan

    If the configuration is described correctly, the terminal will display a list of the resources to be created and their parameters. If the configuration contains any errors, Terraform will point them out. This is a test step; no resources will be created.
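    You can also run Terraform's built-in syntax check from the same directory; a minimal sketch:

    # Check the configuration for syntax and internal consistency errors
    # without contacting Yandex Cloud.
    terraform validate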
- Create a Managed Service for Kubernetes cluster.
  - If the configuration does not contain any errors, run this command:

    terraform apply

  - Confirm that you want to create the resources.
    After this, all the required resources will be created in the specified folder. You can check the new resources and their configuration using the management console.
To create a Managed Service for Kubernetes cluster, use the create method for the Cluster resource.
The request body depends on the master type:
- For a basic master, provide one masterSpec.locations parameter in the request.
- For a highly available master hosted across three availability zones, provide three masterSpec.locations parameters in the request. In each one, specify a different availability zone and subnet.
- For a highly available master hosted in a single availability zone, provide three masterSpec.locations parameters in the request. In each one, specify the same availability zone and subnet.

When providing the masterSpec.locations parameter, you do not need to specify masterSpec.zonalMasterSpec or masterSpec.regionalMasterSpec.

Note
If you select a cloud network from another folder, assign the resource service account the required roles in that folder. To use a public IP address, also assign the vpc.publicAdmin role.

To use a Yandex Key Management Service encryption key to protect secrets, provide its ID in the kmsProvider.keyId parameter.

To enable sending logs to Yandex Cloud Logging, provide the logging settings in the masterSpec.masterLogging parameter.

To add a cloud label, provide its name and value in the labels parameter.
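For illustration only, a request for a highly available master in three availability zones might look like the sketch below. It assumes the mks.api.cloud.yandex.net REST endpoint and the field names inside masterSpec.locations (zoneId, subnetId), so verify both against the Cluster.create API reference before use; all IDs are placeholders and IAM_TOKEN must hold a valid IAM token:

# Hypothetical Cluster.create request; check the endpoint and field names
# against the API reference before running it.
curl \
  --request POST \
  --header "Authorization: Bearer ${IAM_TOKEN}" \
  --header "Content-Type: application/json" \
  --data '{
    "folderId": "<folder_ID>",
    "name": "k8s-ha",
    "networkId": "<network_ID>",
    "serviceAccountId": "<service_account_ID>",
    "nodeServiceAccountId": "<service_account_ID>",
    "masterSpec": {
      "locations": [
        { "zoneId": "ru-central1-a", "subnetId": "<subnet_ID_1>" },
        { "zoneId": "ru-central1-b", "subnetId": "<subnet_ID_2>" },
        { "zoneId": "ru-central1-d", "subnetId": "<subnet_ID_3>" }
      ]
    }
  }' \
  https://mks.api.cloud.yandex.net/managed-kubernetes/v1/clusters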
Examples
Creating a Managed Service for Kubernetes cluster with a basic master
Create a Managed Service for Kubernetes cluster with the following test specifications:
- Name: k8s-single.
- Network: mynet.
- Availability zone: ru-central1-a.
- Subnet: mysubnet.
- Service account: myaccount.
- Security group ID: enp6saqnq4ie244g67sb.
To create a Managed Service for Kubernetes cluster with a basic master, run this command:
yc managed-kubernetes cluster create \
--name k8s-single \
--network-name mynet \
--master-location zone=ru-central1-a,subnet-name=mysubnet \
--service-account-name myaccount \
--node-service-account-name myaccount \
--security-group-ids enp6saqnq4ie244g67sb
Create a Managed Service for Kubernetes cluster and a network for it with the following test specifications:
- Name: k8s-single.
- Folder ID: b1gia87mbaomkfvsleds.
- Network: mynet.
- Subnet: mysubnet. Its network settings are as follows:
  - Availability zone: ru-central1-a.
  - Range: 10.1.0.0/16.
- Service account: myaccount.
- Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.
- Yandex Key Management Service encryption key: kms-key.
- Security group: k8s-public-services. It contains rules for connecting to services from the internet.
Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:
locals {
folder_id = "b1gia87mbaomkfvsleds"
}
resource "yandex_kubernetes_cluster" "k8s-single" {
name = "k8s-single"
network_id = yandex_vpc_network.mynet.id
master {
master_location {
zone = yandex_vpc_subnet.mysubnet.zone
subnet_id = yandex_vpc_subnet.mysubnet.id
}
security_group_ids = [yandex_vpc_security_group.k8s-public-services.id]
}
service_account_id = yandex_iam_service_account.myaccount.id
node_service_account_id = yandex_iam_service_account.myaccount.id
depends_on = [
yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
yandex_resourcemanager_folder_iam_member.vpc-public-admin,
yandex_resourcemanager_folder_iam_member.images-puller,
yandex_resourcemanager_folder_iam_member.encrypterDecrypter
]
kms_provider {
key_id = yandex_kms_symmetric_key.kms-key.id
}
}
resource "yandex_vpc_network" "mynet" {
name = "mynet"
}
resource "yandex_vpc_subnet" "mysubnet" {
name = "mysubnet"
v4_cidr_blocks = ["10.1.0.0/16"]
zone = "ru-central1-a"
network_id = yandex_vpc_network.mynet.id
}
resource "yandex_iam_service_account" "myaccount" {
name = "myaccount"
description = "Service account for the single Kubernetes cluster"
}
resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
# The service account gets the "k8s.clusters.agent" role.
folder_id = local.folder_id
role = "k8s.clusters.agent"
member = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
# The service account gets the "vpc.publicAdmin" role.
folder_id = local.folder_id
role = "vpc.publicAdmin"
member = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
# The service account gets the "container-registry.images.puller" role.
folder_id = local.folder_id
role = "container-registry.images.puller"
member = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
# The service account gets the "kms.keys.encrypterDecrypter" role.
folder_id = local.folder_id
role = "kms.keys.encrypterDecrypter"
member = "serviceAccount:${yandex_iam_service_account.myaccount.id}"
}
resource "yandex_kms_symmetric_key" "kms-key" {
# A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
name = "kms-key"
default_algorithm = "AES_128"
rotation_period = "8760h" # 1 year.
}
resource "yandex_vpc_security_group" "k8s-public-services" {
name = "k8s-public-services"
description = "Group rules allow connections to services from the internet. Apply the rules for node groups only."
network_id = yandex_vpc_network.mynet.id
ingress {
protocol = "TCP"
description = "The rule allows availability checks from the load balancer's range of addresses. It is required for the operation of a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
predefined_target = "loadbalancer_healthchecks"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows master-to-node and node-to-node communication inside a security group."
predefined_target = "self_security_group"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows sub-sub and service-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
v4_cidr_blocks = concat(yandex_vpc_subnet.mysubnet.v4_cidr_blocks)
from_port = 0
to_port = 65535
}
ingress {
protocol = "ICMP"
description = "The rule allows debug ICMP packets from internal subnets."
v4_cidr_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
ingress {
protocol = "TCP"
description = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or change existing ones to the required ports."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 30000
to_port = 32767
}
egress {
protocol = "ANY"
description = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 65535
}
}
Creating a Managed Service for Kubernetes cluster with a highly available master in three availability zones
Create a Managed Service for Kubernetes cluster with the following test specifications:
- Name: k8s-ha-three-zones.
- Network: my-ha-net.
- Subnet for the ru-central1-a availability zone: mysubnet-a.
- Subnet for the ru-central1-b availability zone: mysubnet-b.
- Subnet for the ru-central1-d availability zone: mysubnet-d.
- Service account: ha-k8s-account.
- Security group ID: enp6saqnq4ie244g67sb.
To create a Managed Service for Kubernetes cluster with a highly available master in three availability zones, run this command:
yc managed-kubernetes cluster create \
--name k8s-ha-three-zones \
--network-name my-ha-net \
--master-location zone=ru-central1-a,subnet-name=mysubnet-a \
--master-location zone=ru-central1-b,subnet-name=mysubnet-b \
--master-location zone=ru-central1-d,subnet-name=mysubnet-d \
--service-account-name ha-k8s-account \
--node-service-account-name ha-k8s-account \
--security-group-ids enp6saqnq4ie244g67sb
Create a Managed Service for Kubernetes cluster and a network for it with the following test specifications:
- Name: k8s-ha-three-zones.
- Folder ID: b1gia87mbaomkfvsleds.
- Network: my-ha-net.
- Subnet: mysubnet-a. Its network settings are as follows:
  - Availability zone: ru-central1-a.
  - Range: 10.5.0.0/16.
- Subnet: mysubnet-b. Its network settings are as follows:
  - Availability zone: ru-central1-b.
  - Range: 10.6.0.0/16.
- Subnet: mysubnet-d. Its network settings are as follows:
  - Availability zone: ru-central1-d.
  - Range: 10.7.0.0/16.
- Service account: ha-k8s-account.
- Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.
- Yandex Key Management Service encryption key: kms-key.
- Security group: ha-k8s-sg. It contains rules for service traffic.
Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:
locals {
folder_id = "b1gia87mbaomkfvsleds"
}
resource "yandex_kubernetes_cluster" "k8s-ha-three-zones" {
name = "k8s-ha-three-zones"
network_id = yandex_vpc_network.my-ha-net.id
master {
master_location {
zone = yandex_vpc_subnet.mysubnet-a.zone
subnet_id = yandex_vpc_subnet.mysubnet-a.id
}
master_location {
zone = yandex_vpc_subnet.mysubnet-b.zone
subnet_id = yandex_vpc_subnet.mysubnet-b.id
}
master_location {
zone = yandex_vpc_subnet.mysubnet-d.zone
subnet_id = yandex_vpc_subnet.mysubnet-d.id
}
security_group_ids = [yandex_vpc_security_group.ha-k8s-sg.id]
}
service_account_id = yandex_iam_service_account.ha-k8s-account.id
node_service_account_id = yandex_iam_service_account.ha-k8s-account.id
depends_on = [
yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
yandex_resourcemanager_folder_iam_member.vpc-public-admin,
yandex_resourcemanager_folder_iam_member.images-puller,
yandex_resourcemanager_folder_iam_member.encrypterDecrypter
]
kms_provider {
key_id = yandex_kms_symmetric_key.kms-key.id
}
}
resource "yandex_vpc_network" "my-ha-net" {
name = "my-ha-net"
}
resource "yandex_vpc_subnet" "mysubnet-a" {
name = "mysubnet-a"
v4_cidr_blocks = ["10.5.0.0/16"]
zone = "ru-central1-a"
network_id = yandex_vpc_network.my-ha-net.id
}
resource "yandex_vpc_subnet" "mysubnet-b" {
name = "mysubnet-b"
v4_cidr_blocks = ["10.6.0.0/16"]
zone = "ru-central1-b"
network_id = yandex_vpc_network.my-ha-net.id
}
resource "yandex_vpc_subnet" "mysubnet-d" {
name = "mysubnet-d"
v4_cidr_blocks = ["10.7.0.0/16"]
zone = "ru-central1-d"
network_id = yandex_vpc_network.my-ha-net.id
}
resource "yandex_iam_service_account" "ha-k8s-account" {
name = "ha-k8s-account"
description = "Service account for the highly available Kubernetes cluster"
}
resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
# The service account gets the "k8s.clusters.agent" role.
folder_id = local.folder_id
role = "k8s.clusters.agent"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
# The service account gets the "vpc.publicAdmin" role.
folder_id = local.folder_id
role = "vpc.publicAdmin"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
# The service account gets the "container-registry.images.puller" role.
folder_id = local.folder_id
role = "container-registry.images.puller"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
# The service account gets the "kms.keys.encrypterDecrypter" role.
folder_id = local.folder_id
role = "kms.keys.encrypterDecrypter"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_kms_symmetric_key" "kms-key" {
# A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
name = "kms-key"
default_algorithm = "AES_128"
rotation_period = "8760h" # 1 year.
}
resource "yandex_vpc_security_group" "ha-k8s-sg" {
name = "ha-k8s-sg"
description = "Group rules ensure the basic performance of the Managed Service for Kubernetes cluster. Apply it to the cluster and node groups."
network_id = yandex_vpc_network.my-ha-net.id
ingress {
protocol = "TCP"
description = "The rule allows availability checks from the load balancer's range of addresses. It is required for the operation of a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
predefined_target = "loadbalancer_healthchecks"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows master-to-node and node-to-node communication inside a security group."
predefined_target = "self_security_group"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows sub-sub and service-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
v4_cidr_blocks = concat(yandex_vpc_subnet.mysubnet-a.v4_cidr_blocks, yandex_vpc_subnet.mysubnet-b.v4_cidr_blocks, yandex_vpc_subnet.mysubnet-d.v4_cidr_blocks)
from_port = 0
to_port = 65535
}
ingress {
protocol = "ICMP"
description = "The rule allows debug ICMP packets from internal subnets."
v4_cidr_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
ingress {
protocol = "TCP"
description = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or change existing ones to the required ports."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 30000
to_port = 32767
}
egress {
protocol = "ANY"
description = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 65535
}
}
Creating a Managed Service for Kubernetes cluster with a highly available master in a single availability zone
Create a Managed Service for Kubernetes cluster with the following test specifications:
- Name: k8s-ha-one-zone.
- Network: my-ha-net.
- Subnet for the ru-central1-a availability zone: my-ha-subnet.
- Number of identical --master-location parameters: three. This creates three instances of the master in one availability zone.
- Availability zone: ru-central1-a.
- Service account: ha-k8s-account.
- Security group ID: enp6saqnq4ie244g67sb.
To create a Managed Service for Kubernetes cluster with a highly available master in a single availability zone, run this command:
yc managed-kubernetes cluster create \
--name k8s-ha-one-zone \
--network-name my-ha-net \
--master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
--master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
--master-location zone=ru-central1-a,subnet-name=my-ha-subnet \
--service-account-name ha-k8s-account \
--node-service-account-name ha-k8s-account \
--security-group-ids enp6saqnq4ie244g67sb
Create a Managed Service for Kubernetes cluster and a network for it with the following test specifications:
- Name: k8s-ha-one-zone.
- Folder ID: b1gia87mbaomkfvsleds.
- Network: my-ha-net.
- Subnet: my-ha-subnet. Its network settings are as follows:
  - Availability zone: ru-central1-a.
  - Range: 10.5.0.0/16.
- Service account: ha-k8s-account.
- Service account roles: k8s.clusters.agent, vpc.publicAdmin, container-registry.images.puller, and kms.keys.encrypterDecrypter.
- Yandex Key Management Service encryption key: kms-key.
- Security group: ha-k8s-sg. It contains rules for service traffic.
Install Terraform (unless you already have it), configure the provider according to this guide, and apply the configuration file:
locals {
folder_id = "b1gia87mbaomkfvsleds"
}
resource "yandex_kubernetes_cluster" "k8s-ha-one-zone" {
name = "k8s-ha-one-zone"
network_id = yandex_vpc_network.my-ha-net.id
master {
master_location {
zone = yandex_vpc_subnet.my-ha-subnet.zone
subnet_id = yandex_vpc_subnet.my-ha-subnet.id
}
master_location {
zone = yandex_vpc_subnet.my-ha-subnet.zone
subnet_id = yandex_vpc_subnet.my-ha-subnet.id
}
master_location {
zone = yandex_vpc_subnet.my-ha-subnet.zone
subnet_id = yandex_vpc_subnet.my-ha-subnet.id
}
security_group_ids = [yandex_vpc_security_group.ha-k8s-sg.id]
}
service_account_id = yandex_iam_service_account.ha-k8s-account.id
node_service_account_id = yandex_iam_service_account.ha-k8s-account.id
depends_on = [
yandex_resourcemanager_folder_iam_member.k8s-clusters-agent,
yandex_resourcemanager_folder_iam_member.vpc-public-admin,
yandex_resourcemanager_folder_iam_member.images-puller,
yandex_resourcemanager_folder_iam_member.encrypterDecrypter
]
kms_provider {
key_id = yandex_kms_symmetric_key.kms-key.id
}
}
resource "yandex_vpc_network" "my-ha-net" {
name = "my-ha-net"
}
resource "yandex_vpc_subnet" "my-ha-subnet" {
name = "my-ha-subnet"
v4_cidr_blocks = ["10.5.0.0/16"]
zone = "ru-central1-a"
network_id = yandex_vpc_network.my-ha-net.id
}
resource "yandex_iam_service_account" "ha-k8s-account" {
name = "ha-k8s-account"
description = "Service account for the highly available Kubernetes cluster"
}
resource "yandex_resourcemanager_folder_iam_member" "k8s-clusters-agent" {
# The service account gets the "k8s.clusters.agent" role.
folder_id = local.folder_id
role = "k8s.clusters.agent"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "vpc-public-admin" {
# The service account gets the "vpc.publicAdmin" role.
folder_id = local.folder_id
role = "vpc.publicAdmin"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "images-puller" {
# The service account gets the "container-registry.images.puller" role.
folder_id = local.folder_id
role = "container-registry.images.puller"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_resourcemanager_folder_iam_member" "encrypterDecrypter" {
# The service account gets the "kms.keys.encrypterDecrypter" role.
folder_id = local.folder_id
role = "kms.keys.encrypterDecrypter"
member = "serviceAccount:${yandex_iam_service_account.ha-k8s-account.id}"
}
resource "yandex_kms_symmetric_key" "kms-key" {
# A Yandex Key Management Service key for encrypting critical information, including passwords, OAuth tokens, and SSH keys.
name = "kms-key"
default_algorithm = "AES_128"
rotation_period = "8760h" # 1 year.
}
resource "yandex_vpc_security_group" "ha-k8s-sg" {
name = "ha-k8s-sg"
description = "Group rules ensure the basic performance of the Managed Service for Kubernetes cluster. Apply it to the cluster and node groups."
network_id = yandex_vpc_network.my-ha-net.id
ingress {
protocol = "TCP"
description = "The rule allows availability checks from the load balancer's range of addresses. It is required for the operation of a fault-tolerant Managed Service for Kubernetes cluster and load balancer services."
predefined_target = "loadbalancer_healthchecks"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows master-to-node and node-to-node communication inside a security group."
predefined_target = "self_security_group"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows sub-sub and service-service interactions. Specify the subnets of your Managed Service for Kubernetes cluster and services."
v4_cidr_blocks = concat(yandex_vpc_subnet.my-ha-subnet.v4_cidr_blocks, yandex_vpc_subnet.my-ha-subnet.v4_cidr_blocks, yandex_vpc_subnet.my-ha-subnet.v4_cidr_blocks)
from_port = 0
to_port = 65535
}
ingress {
protocol = "ICMP"
description = "The rule allows debug ICMP packets from internal subnets."
v4_cidr_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
ingress {
protocol = "TCP"
description = "The rule allows incoming traffic from the internet to a range of NodePorts. Add ports or change existing ones to the required ports."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 30000
to_port = 32767
}
egress {
protocol = "ANY"
description = "The rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, etc."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 65535
}
}