yandex_kubernetes_cluster (Resource)
- Example usage
- Schema
- Required
- Optional
- Read-Only
- Nested Schema for master
- Nested Schema for master.maintenance_policy
- Nested Schema for master.maintenance_policy.maintenance_window
- Nested Schema for master.master_location
- Nested Schema for master.master_logging
- Nested Schema for master.regional
- Nested Schema for master.regional.location
- Nested Schema for master.scale_policy
- Nested Schema for master.scale_policy.auto_scale
- Nested Schema for master.zonal
- Nested Schema for master.version_info
- Nested Schema for kms_provider
- Nested Schema for network_implementation
- Nested Schema for network_implementation.cilium
- Nested Schema for timeouts
- Nested Schema for workload_identity_federation
- Import
Creates a Yandex Cloud Managed Kubernetes Cluster. For more information, see the official documentation.
Warning
When access rights for service_account_id or node_service_account_id are granted using Terraform resources, you must add an explicit dependency on those access resources to the cluster configuration - see Example #3.
Without it, on destroy, Terraform will delete the cluster and revoke the service account(s) access rights simultaneously, which will break deletion of the cluster and its node groups.
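The pattern can be sketched as follows (resource and variable names here are illustrative, not part of the examples below):

```terraform
# Hypothetical IAM binding granting the cluster service account
# the "editor" role on the folder.
resource "yandex_resourcemanager_folder_iam_member" "sa_editor" {
  folder_id = var.folder_id
  role      = "editor"
  member    = "serviceAccount:${yandex_iam_service_account.sa.id}"
}

resource "yandex_kubernetes_cluster" "cluster" {
  # ... cluster configuration ...

  service_account_id      = yandex_iam_service_account.sa.id
  node_service_account_id = yandex_iam_service_account.sa.id

  # Ensure the access binding is destroyed only after the cluster,
  # so deletion can still use the service account's permissions.
  depends_on = [yandex_resourcemanager_folder_iam_member.sa_editor]
}
```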
Example usage
//
// Create a new Managed Kubernetes zonal Cluster.
//
resource "yandex_kubernetes_cluster" "zonal_cluster" {
  name        = "name"
  description = "description"

  network_id = yandex_vpc_network.network_resource_name.id

  master {
    version = "1.30"
    zonal {
      zone      = yandex_vpc_subnet.subnet_resource_name.zone
      subnet_id = yandex_vpc_subnet.subnet_resource_name.id
    }

    public_ip = true

    security_group_ids = [yandex_vpc_security_group.security_group_name.id]

    maintenance_policy {
      auto_upgrade = true

      maintenance_window {
        start_time = "15:00"
        duration   = "3h"
      }
    }

    master_logging {
      enabled                    = true
      log_group_id               = yandex_logging_group.log_group_resource_name.id
      kube_apiserver_enabled     = true
      cluster_autoscaler_enabled = true
      events_enabled             = true
      audit_enabled              = true
    }

    scale_policy {
      auto_scale {
        min_resource_preset_id = "s-c4-m16"
      }
    }
  }

  service_account_id      = yandex_iam_service_account.service_account_resource_name.id
  node_service_account_id = yandex_iam_service_account.node_service_account_resource_name.id

  labels = {
    my_key       = "my_value"
    my_other_key = "my_other_value"
  }

  release_channel         = "RAPID"
  network_policy_provider = "CALICO"

  kms_provider {
    key_id = yandex_kms_symmetric_key.kms_key_resource_name.id
  }

  workload_identity_federation {
    enabled = true
  }
}
//
// Create a new Managed Kubernetes regional Cluster.
//
resource "yandex_kubernetes_cluster" "regional_cluster" {
  name        = "name"
  description = "description"

  network_id = yandex_vpc_network.network_resource_name.id

  master {
    regional {
      region = "ru-central1"

      location {
        zone      = yandex_vpc_subnet.subnet_a_resource_name.zone
        subnet_id = yandex_vpc_subnet.subnet_a_resource_name.id
      }
      location {
        zone      = yandex_vpc_subnet.subnet_b_resource_name.zone
        subnet_id = yandex_vpc_subnet.subnet_b_resource_name.id
      }
      location {
        zone      = yandex_vpc_subnet.subnet_d_resource_name.zone
        subnet_id = yandex_vpc_subnet.subnet_d_resource_name.id
      }
    }

    version   = "1.30"
    public_ip = true

    maintenance_policy {
      auto_upgrade = true

      maintenance_window {
        day        = "monday"
        start_time = "15:00"
        duration   = "3h"
      }
      maintenance_window {
        day        = "friday"
        start_time = "10:00"
        duration   = "4h30m"
      }
    }

    master_logging {
      enabled                    = true
      folder_id                  = data.yandex_resourcemanager_folder.folder_resource_name.id
      kube_apiserver_enabled     = true
      cluster_autoscaler_enabled = true
      events_enabled             = true
      audit_enabled              = true
    }

    scale_policy {
      auto_scale {
        min_resource_preset_id = "s-c4-m16"
      }
    }
  }

  service_account_id      = yandex_iam_service_account.service_account_resource_name.id
  node_service_account_id = yandex_iam_service_account.node_service_account_resource_name.id

  labels = {
    my_key       = "my_value"
    my_other_key = "my_other_value"
  }

  release_channel = "STABLE"

  workload_identity_federation {
    enabled = true
  }

  depends_on = [
    yandex_resourcemanager_folder_iam_member.ServiceAccountResourceName,
    yandex_resourcemanager_folder_iam_member.NodeServiceAccountResourceName
  ]
}
Schema
Required
- master (Block List, Min: 1, Max: 1) Kubernetes master configuration options. (see below for nested schema)
- network_id (String) The ID of the cluster network.
- node_service_account_id (String) Service account to be used by the worker nodes of the Kubernetes cluster to access Container Registry or to push node logs and metrics.
- service_account_id (String) Service account to be used for provisioning Compute Cloud and VPC resources for the Kubernetes cluster. The selected service account should have the edit role on the folder where the Kubernetes cluster will be located and on the folder where the selected network resides.
Optional
- cluster_ipv4_range (String) CIDR block. IP range for allocating pod addresses. It should not overlap with any subnet in the network the Kubernetes cluster is located in. Static routes will be set up for this CIDR block in node subnets.
- cluster_ipv6_range (String) Identical to cluster_ipv4_range but for the IPv6 protocol.
- description (String) The resource description.
- folder_id (String) The folder identifier that the resource belongs to. If it is not provided, the default provider folder-id is used.
- kms_provider (Block List, Max: 1) Cluster KMS provider parameters. (see below for nested schema)
- labels (Map of String) A set of key/value label pairs assigned to the resource.
- name (String) The resource name.
- network_implementation (Block List, Max: 1) Network implementation options. (see below for nested schema)
- network_policy_provider (String) Network policy provider for the cluster. Possible values: CALICO.
- node_ipv4_cidr_mask_size (Number) Size of the masks that are assigned to each node in the cluster. Effectively limits the maximum number of pods for each node.
- release_channel (String) Cluster release channel.
- service_ipv4_range (String) CIDR block. IP range from which Kubernetes service cluster IP addresses will be allocated. It should not overlap with any subnet in the network the Kubernetes cluster is located in.
- service_ipv6_range (String) Identical to service_ipv4_range but for the IPv6 protocol.
- timeouts (Block, Optional) (see below for nested schema)
- workload_identity_federation (Block List, Max: 1) Workload Identity Federation configuration. (see below for nested schema)
Read-Only
- created_at (String) The creation timestamp of the resource.
- health (String) Health of the Kubernetes cluster.
- id (String) The ID of this resource.
- log_group_id (String) Log group where the cluster stores system logs, such as audit, events, or control plane logs.
- status (String) Status of the Kubernetes cluster.
Nested Schema for master
Optional:
- etcd_cluster_size (Number) Number of etcd clusters that will be used for the Kubernetes master.
- external_v6_address (String) An IPv6 external network address that is assigned to the master.
- maintenance_policy (Block List, Max: 1) Maintenance policy for the Kubernetes master. If the policy is omitted, automatic revision upgrades of the Kubernetes master are enabled and may happen at any time. Revision upgrades are performed only within the same minor version, e.g. 1.29. Minor version upgrades (e.g. 1.29 -> 1.30) should be performed manually. (see below for nested schema)
- master_location (Block List) Array of cluster master instance locations (zone and subnet). Cannot be used together with zonal or regional. Currently supports either one instance of master_location, for a zonal master, or three instances, for a regional master. Can be updated in place. When creating a regional cluster (three master instances), its region will be evaluated automatically by the backend. (see below for nested schema)
- master_logging (Block List, Max: 1) Master logging options. (see below for nested schema)
- public_ip (Boolean) When true, the Kubernetes master will have a visible IPv4 address.
- regional (Block List, Max: 1) Initialization parameters for a regional master (highly available master). (see below for nested schema)
- scale_policy (Block List, Max: 1) Scale policy of the master. (see below for nested schema)
- security_group_ids (Set of String) The list of security groups applied to the resource or its components.
- version (String) Version of Kubernetes that will be used for the master.
- zonal (Block List, Max: 1) Initialization parameters for a zonal master (single node master). (see below for nested schema)
Read-Only:
- cluster_ca_certificate (String) PEM-encoded public certificate that is the root of trust for the Kubernetes cluster.
- external_v4_address (String) An IPv4 external network address that is assigned to the master.
- external_v4_endpoint (String) External endpoint that can be used to access the Kubernetes cluster API from the internet (outside of the cloud).
- external_v6_endpoint (String) External IPv6 endpoint that can be used to access the Kubernetes cluster API from the internet (outside of the cloud).
- internal_v4_address (String) An IPv4 internal network address that is assigned to the master.
- internal_v4_endpoint (String) Internal endpoint that can be used to connect to the master from cloud networks.
- version_info (List of Object) Information about the cluster version. (see below for nested schema)
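As an alternative to the zonal and regional blocks, the master location can be declared directly with master_location blocks (one for a zonal master, three for a regional one). A sketch of the three-instance form, reusing the placeholder subnet names from the examples above:

```terraform
resource "yandex_kubernetes_cluster" "cluster_with_master_location" {
  name       = "name"
  network_id = yandex_vpc_network.network_resource_name.id

  master {
    version = "1.30"

    # Three master_location blocks create a regional (HA) master;
    # the region is evaluated automatically by the backend.
    master_location {
      zone      = yandex_vpc_subnet.subnet_a_resource_name.zone
      subnet_id = yandex_vpc_subnet.subnet_a_resource_name.id
    }
    master_location {
      zone      = yandex_vpc_subnet.subnet_b_resource_name.zone
      subnet_id = yandex_vpc_subnet.subnet_b_resource_name.id
    }
    master_location {
      zone      = yandex_vpc_subnet.subnet_d_resource_name.zone
      subnet_id = yandex_vpc_subnet.subnet_d_resource_name.id
    }
  }

  service_account_id      = yandex_iam_service_account.service_account_resource_name.id
  node_service_account_id = yandex_iam_service_account.node_service_account_resource_name.id
}
```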
Nested Schema for master.maintenance_policy
Required:
- auto_upgrade (Boolean) Boolean flag that specifies whether the master can be upgraded automatically. When omitted, the default value is true.
Optional:
- maintenance_window (Block Set) This structure specifies the maintenance window, during which updates of the master are allowed. When omitted, it defaults to any time. To specify a time-of-day interval for all days, provide one element with two fields set, start_time and duration; see the zonal_cluster config example above. To allow maintenance only on specific days of the week, provide a list of elements with all fields set. Only one time interval (duration) is allowed for each day of the week; see the regional_cluster config example above. (see below for nested schema)
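The two maintenance_window forms can be sketched as follows (fragments of a master block; times and days are illustrative):

```terraform
# Daily window: one element with only start_time and duration.
maintenance_policy {
  auto_upgrade = true

  maintenance_window {
    start_time = "03:00"
    duration   = "2h"
  }
}

# Per-day windows: one element per day, with all three fields set.
maintenance_policy {
  auto_upgrade = true

  maintenance_window {
    day        = "monday"
    start_time = "03:00"
    duration   = "2h"
  }
  maintenance_window {
    day        = "saturday"
    start_time = "01:00"
    duration   = "4h"
  }
}
```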
Nested Schema for master.maintenance_policy.maintenance_window
Required:
- duration (String) The duration of the maintenance window on the selected day(s).
- start_time (String) The start time of the maintenance window on the selected day(s).
Optional:
- day (String) The day of the week on which updates are allowed.
Nested Schema for master.master_location
Optional:
- subnet_id (String) ID of the subnet.
- zone (String) ID of the availability zone.
Nested Schema for master.master_logging
Optional:
- audit_enabled (Boolean) Boolean flag that specifies whether kube-apiserver audit logs should be sent to Yandex Cloud Logging.
- cluster_autoscaler_enabled (Boolean) Boolean flag that specifies whether cluster-autoscaler logs should be sent to Yandex Cloud Logging.
- enabled (Boolean) Boolean flag that specifies whether master component logs should be sent to Yandex Cloud Logging. The exact components that will send their logs must be configured via the options described below.
Warning
Only one of log_group_id or folder_id (or neither) may be specified. If log_group_id is specified, logs will be sent to that specific Log group. If folder_id is specified, logs will be sent to the default Log group of that folder. If neither is specified, logs will be sent to the default Log group of the same folder as the Kubernetes cluster.
- events_enabled (Boolean) Boolean flag that specifies whether Kubernetes cluster events should be sent to Yandex Cloud Logging.
- folder_id (String) ID of the folder whose default Log group should be used to collect logs.
- kube_apiserver_enabled (Boolean) Boolean flag that specifies whether kube-apiserver logs should be sent to Yandex Cloud Logging.
- log_group_id (String) ID of the Yandex Cloud Logging Log group.
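The three destination choices look like this (a master_logging fragment; only one variant may be active at a time, and the IDs are placeholders):

```terraform
master_logging {
  enabled = true

  # Variant 1: send logs to an explicit Log group.
  log_group_id = yandex_logging_group.log_group_resource_name.id

  # Variant 2: send logs to the default Log group of a specific folder
  # (mutually exclusive with log_group_id).
  # folder_id = data.yandex_resourcemanager_folder.folder_resource_name.id

  # Variant 3: omit both to use the default Log group
  # of the cluster's own folder.
}
```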
Nested Schema for master.regional
Required:
- region (String) Name of the availability region (e.g. ru-central1) where master instances will be allocated.
Optional:
- location (Block List) Array of locations where master instances will be allocated. (see below for nested schema)
Nested Schema for master.regional.location
Optional:
- subnet_id (String) ID of the subnet.
- zone (String) ID of the availability zone.
Nested Schema for master.scale_policy
Optional:
- auto_scale (Block List, Max: 1) Autoscaled master instance resources. (see below for nested schema)
Nested Schema for master.scale_policy.auto_scale
Required:
- min_resource_preset_id (String) Minimal resource preset ID.
Nested Schema for master.zonal
Optional:
- subnet_id (String) ID of the subnet. If no ID is specified and there is only one subnet in the specified zone, an address in this subnet will be allocated.
- zone (String) ID of the availability zone.
Nested Schema for master.version_info
Read-Only:
- current_version (String)
- new_revision_available (Boolean)
- new_revision_summary (String)
- version_deprecated (Boolean)
Nested Schema for kms_provider
Optional:
- key_id (String) KMS key ID.
Nested Schema for network_implementation
Optional:
- cilium (Block List, Max: 1) Cilium network implementation configuration. No options exist. (see below for nested schema)
Nested Schema for network_implementation.cilium
Nested Schema for timeouts
Optional:
- create (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
- delete (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
- read (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Read operations occur during any refresh or planning operation when refresh is enabled.
- update (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
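A timeouts block combining these settings might look like this (the durations are illustrative, not recommended values):

```terraform
resource "yandex_kubernetes_cluster" "cluster" {
  # ... cluster configuration ...

  timeouts {
    create = "60m"
    update = "60m"
    delete = "30m"
    read   = "5m"
  }
}
```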
Nested Schema for workload_identity_federation
Required:
- enabled (Boolean) Identifies whether Workload Identity Federation is enabled.
Read-Only:
- issuer (String) Issuer URI for Kubernetes service account tokens.
- jwks_uri (String) JSON Web Key Set URI used to verify token signatures.
Import
The resource can be imported by using its resource ID. To get the resource ID, you can use the Yandex Cloud Web Console.
# terraform import yandex_kubernetes_cluster.<resource Name> <resource Id>
terraform import yandex_kubernetes_cluster.regional_cluster ...