Managing Kubernetes resources in a Yandex Managed Service for Kubernetes cluster via the Terraform provider
You can use Terraform manifests to create Kubernetes resources. To do this, set up the `kubernetes` Terraform provider. It supports Terraform resources mapped to YAML configuration files for various Kubernetes resources.
Using Terraform to create Kubernetes resources is convenient if you are already managing your Yandex Managed Service for Kubernetes cluster infrastructure through Terraform. This allows you to describe all resources using the same markup language.
In addition, Terraform tracks dependencies between resources and prevents a resource from being created, modified, or deleted until its dependencies are in place. For example, suppose you are creating a PersistentVolumeClaim resource that requires a certain amount of storage from a PersistentVolume, but the PersistentVolume lacks the required free space. Terraform will detect the shortage and prevent the PersistentVolumeClaim resource from being created.
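As a minimal sketch of how such a dependency is expressed (the resource names and values below are illustrative, not part of this tutorial's configuration), referencing one resource's attribute from another is enough for Terraform to order the operations correctly:

```hcl
# Illustrative only: the claim references the volume by name, so
# Terraform creates the PersistentVolume first and will not create
# the claim if the volume cannot be provisioned.
resource "kubernetes_persistent_volume" "example_pv" {
  metadata {
    name = "example-pv"
  }
  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/mnt/data"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "example_pvc" {
  metadata {
    name = "example-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
    # Implicit dependency: Terraform resolves this reference before
    # creating the claim.
    volume_name = kubernetes_persistent_volume.example_pv.metadata[0].name
  }
}
```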
The example below illustrates how to create standard Kubernetes resources using Terraform.
To create Kubernetes resources with Terraform:
- Set up your infrastructure.
- Set up the `kubernetes` provider.
- Create Kubernetes resources.
- Make sure the cluster application is available from the internet.
If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
- Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
- Fee for an NLB (see Network Load Balancer pricing).
- Fee for public IP addresses for the VM and NLB (see Virtual Private Cloud pricing).
Set up the infrastructure for Managed Service for Kubernetes
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it. At this step, the file should not contain the `kubernetes` provider settings; you will add them later.

- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-cluster.tf configuration file to the same working directory. This file describes:

  - Network.
  - Subnet.
  - Two security groups: one for the cluster and one for the node group.
  - Cloud service account with the `k8s.clusters.agent`, `k8s.tunnelClusters.agent`, `vpc.publicAdmin`, `load-balancer.admin`, and `container-registry.images.puller` roles.
  - Managed Service for Kubernetes cluster.
  - Kubernetes node group.
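  For orientation, below is a condensed, hypothetical sketch of the cluster and node group resources such a file typically defines. The argument values here are illustrative, and the referenced network, subnet, security group, and service account resources are declared elsewhere in the file; the downloaded `k8s-cluster.tf` remains the authoritative version:

  ```hcl
  # Condensed sketch only: the real file also defines the network,
  # subnet, security groups, service account, and role bindings.
  resource "yandex_kubernetes_cluster" "k8s-cluster" {
    name       = "k8s-cluster"
    network_id = yandex_vpc_network.mynet.id

    master {
      zonal {
        zone      = yandex_vpc_subnet.mysubnet.zone
        subnet_id = yandex_vpc_subnet.mysubnet.id
      }
      public_ip          = true
      security_group_ids = [yandex_vpc_security_group.k8s-main-sg.id]
    }

    service_account_id      = yandex_iam_service_account.myaccount.id
    node_service_account_id = yandex_iam_service_account.myaccount.id
  }

  resource "yandex_kubernetes_node_group" "k8s-node-group" {
    cluster_id = yandex_kubernetes_cluster.k8s-cluster.id
    name       = "k8s-node-group"

    scale_policy {
      fixed_scale {
        size = 1
      }
    }

    allocation_policy {
      location {
        zone = "ru-central1-a"
      }
    }

    instance_template {
      platform_id = "standard-v3"

      network_interface {
        nat        = true
        subnet_ids = [yandex_vpc_subnet.mysubnet.id]
      }

      resources {
        memory = 4
        cores  = 2
      }

      boot_disk {
        type = "network-hdd"
        size = 64
      }
    }
  }
  ```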
- Specify the variable values in the `k8s-cluster.tf` file.
- Make sure the Terraform configuration files are correct using this command:

  ```bash
  terraform validate
  ```

  Terraform will show any errors found in your configuration files.
- Create the infrastructure:

  - Run this command to view the planned changes:

    ```bash
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```bash
      terraform apply
      ```

    - Confirm updating the resources.

    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
- Install kubectl and configure it to work with the new cluster.
Set up the kubernetes provider
- In the working directory, open the file with the `yandex` provider settings. It must have the following structure:

  ```hcl
  terraform {
    required_providers {
      yandex = {
        source = "yandex-cloud/yandex"
      }
    }
    required_version = ">= 0.13"
  }

  provider "yandex" {
    token     = "<IAM_token>"
    cloud_id  = "<cloud_ID>"
    folder_id = "<folder_ID>"
    zone      = "<default_availability_zone>"
  }
  ```
- In the file, specify the parameters required for the `kubernetes` provider:

  - Under `required_providers`, add:

    ```hcl
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    ```

  - In the `terraform` block, change `required_version` to `">= 0.14.8"`.

  - Add a new section at the end of the file:

    ```hcl
    data "yandex_client_config" "client" {}

    provider "kubernetes" {
      host                   = yandex_kubernetes_cluster.k8s-cluster.master[0].external_v4_endpoint
      cluster_ca_certificate = yandex_kubernetes_cluster.k8s-cluster.master[0].cluster_ca_certificate
      token                  = data.yandex_client_config.client.iam_token
    }
    ```
- Make sure the file looks like this after completing the above steps:

  ```hcl
  terraform {
    required_providers {
      yandex = {
        source = "yandex-cloud/yandex"
      }
      kubernetes = {
        source = "hashicorp/kubernetes"
      }
    }
    required_version = ">= 0.14.8"
  }

  provider "yandex" {
    token     = "<IAM_token>"
    cloud_id  = "<cloud_ID>"
    folder_id = "<folder_ID>"
    zone      = "<default_availability_zone>"
  }

  data "yandex_client_config" "client" {}

  provider "kubernetes" {
    host                   = yandex_kubernetes_cluster.k8s-cluster.master[0].external_v4_endpoint
    cluster_ca_certificate = yandex_kubernetes_cluster.k8s-cluster.master[0].cluster_ca_certificate
    token                  = data.yandex_client_config.client.iam_token
  }
  ```
- Initialize the `kubernetes` provider:

  ```bash
  terraform init
  ```
Create Kubernetes resources
Create a test application and a LoadBalancer service:
- In the working directory, create a file named `deployment.tf` describing the `Deployment` resource:

  ```hcl
  resource "kubernetes_deployment" "demo-app-deployment" {
    metadata {
      name = "hello"
      labels = {
        app     = "hello"
        version = "v1"
      }
    }
    spec {
      replicas = 2
      selector {
        match_labels = {
          app = "hello"
        }
      }
      template {
        metadata {
          labels = {
            app     = "hello"
            version = "v1"
          }
        }
        spec {
          container {
            name  = "hello-app"
            image = "cr.yandex/crpjd37scfv653nl11i9/hello:1.1"
          }
        }
      }
    }
  }
  ```
- In the working directory, create a file named `service.tf` describing the `Service` resource:

  ```hcl
  resource "kubernetes_service" "demo-lb-service" {
    metadata {
      name = "hello"
    }
    spec {
      selector = {
        app = kubernetes_deployment.demo-app-deployment.spec.0.template.0.metadata[0].labels.app
      }
      type = "LoadBalancer"
      port {
        port        = 80
        target_port = 8080
      }
    }
  }
  ```
- Create the Kubernetes resources:

  - View the planned changes:

    ```bash
    terraform plan
    ```

  - If the changes are acceptable, apply them:

    ```bash
    terraform apply
    ```
After you run `terraform apply`, you may get this error:

```
Error: Waiting for rollout to finish: 2 replicas wanted; 0 replicas Ready
│
│   with kubernetes_deployment.demo-app-deployment,
│   on deployment.tf line 1, in resource "kubernetes_deployment" "demo-app-deployment":
│    1: resource "kubernetes_deployment" "demo-app-deployment" {
│
```

It means the `Deployment` resources are not ready yet. Check their readiness using the `kubectl get deployment` command, which will return this result:

```
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
hello   0/2     2            0           12m
```

When the `READY` column shows `2/2`, run the `terraform apply` command again.
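If you prefer `terraform apply` not to block on Deployment readiness at all, the `kubernetes_deployment` resource supports a `wait_for_rollout` argument. A minimal sketch: add the argument to the resource in `deployment.tf` (this trades the readiness guarantee for a faster apply, so use it only when eventual readiness is acceptable):

```hcl
resource "kubernetes_deployment" "demo-app-deployment" {
  # metadata and spec blocks unchanged from deployment.tf above

  # Return from `terraform apply` as soon as the Deployment is
  # created, without waiting for all replicas to become Ready.
  wait_for_rollout = false
}
```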
You can also create other standard Kubernetes resources using Terraform manifests. Use the YAML configuration of the resource you need as a base (see this example for a pod). Note that Terraform argument names may differ from their YAML counterparts: for example, the `containerPort` parameter from the YAML file corresponds to the `container_port` parameter in Terraform. For a full list of Terraform resources for Kubernetes, see this Kubernetes provider article.
For information about creating custom resources, see the Kubernetes provider documentation.
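As a hedged illustration (the CRD kind, group, and fields below are hypothetical), the provider's `kubernetes_manifest` resource can apply an arbitrary manifest, including a custom resource, directly from HCL:

```hcl
# Hypothetical custom resource: assumes a CRD defining kind "Widget"
# in group "example.com" is already installed in the cluster.
resource "kubernetes_manifest" "demo_widget" {
  manifest = {
    apiVersion = "example.com/v1"
    kind       = "Widget"
    metadata = {
      name      = "demo-widget"
      namespace = "default"
    }
    spec = {
      size = 3
    }
  }
}
```

Note that `kubernetes_manifest` validates the manifest against the cluster's API during planning, so the cluster must be reachable and the CRD already installed when you run `terraform plan`.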
Make sure the cluster application is available from the internet
- View information about the created load balancer:

  ```bash
  kubectl describe service hello
  ```

  Result:

  ```
  Name:                     hello
  Namespace:                default
  Labels:                   <none>
  Annotations:              <none>
  Selector:                 app=hello
  Type:                     LoadBalancer
  IP Family Policy:         SingleStack
  IP Families:              IPv4
  IP:                       10.96.228.81
  IPs:                      10.96.228.81
  LoadBalancer Ingress:     84.201.148.8
  Port:                     <unset>  80/TCP
  TargetPort:               8080/TCP
  NodePort:                 <unset>  32532/TCP
  Endpoints:                10.112.128.7:8080,10.112.128.8:8080
  Session Affinity:         None
  External Traffic Policy:  Cluster
  Internal Traffic Policy:  Cluster
  Events:
    Type    Reason                Age    From                Message
    ----    ------                ----   ----                -------
    Normal  EnsuringLoadBalancer  5m32s  service-controller  Ensuring load balancer
    Normal  EnsuredLoadBalancer   5m25s  service-controller  Ensured load balancer
  ```
- Copy the IP address from the `LoadBalancer Ingress` field.

- Open the app URL in your browser:

  ```
  http://<copied_IP_address>
  ```

  Result:

  ```
  Hello, world!
  Running in 'hello-5c46b*****-nc**'
  ```
Delete the resources you created
- In the terminal, navigate to the directory containing the infrastructure plan.

- Run this command:

  ```bash
  terraform destroy
  ```

  Terraform will delete all the resources you created in the current configuration.
Example of setting up a persistent volume with Terraform
Provide a persistent volume for the Managed Service for Kubernetes cluster. To do this, use a configuration file:
pv-pvc.tf:

```hcl
resource "yandex_compute_disk" "pv_disk" {
  name = "pv-disk"
  zone = "ru-central1-a"
  size = 10
  type = "network-ssd"
}

resource "kubernetes_storage_class" "pv_sc" {
  metadata {
    name = "pv-sc"
  }
  storage_provisioner = "disk-csi-driver.mks.ycloud.io"
  parameters = {
    "csi.storage.k8s.io/fstype" = "ext4"
  }
  reclaim_policy      = "Retain"
  volume_binding_mode = "WaitForFirstConsumer"
}

resource "kubernetes_persistent_volume" "my_pv" {
  metadata {
    name = "my-pv"
  }
  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "pv-sc"
    persistent_volume_source {
      csi {
        driver        = "disk-csi-driver.mks.ycloud.io"
        volume_handle = yandex_compute_disk.pv_disk.id
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "my_pvc" {
  metadata {
    name = "my-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
    storage_class_name = "pv-sc"
    volume_name        = "my-pv"
  }
}
```
The pv-pvc.tf file describes:

- Compute Cloud disk used as storage for the `PersistentVolume`:

  - Name: `pv-disk`.
  - Availability zone: `ru-central1-a`.
  - Disk size: 10 GB.
  - Disk type: `network-ssd`.

- Custom StorageClass:

  - Name: `pv-sc`.
  - Storage provisioner: `disk-csi-driver.mks.ycloud.io`.
  - File system type: `ext4`.
  - Reclaim policy: `Retain`. The `PersistentVolume` object will not be deleted after the deletion of its associated `PersistentVolumeClaim` object.
  - Volume binding mode: `WaitForFirstConsumer`. `PersistentVolume` and `PersistentVolumeClaim` will only be bound when a pod requests the volume (see the pod sketch at the end of this section).

  Learn more about storage class parameters here.

- `PersistentVolume` object:

  - Name: `my-pv`.
  - Size: 10 GB.
  - Access mode: `ReadWriteOnce`. Only pods located on the same node can read and write data to this `PersistentVolume` object. Pods on other nodes will not be able to access this object.
  - Storage class: `pv-sc`. If not specified, the default storage class will be used.
  - Data source: the `pv-disk` disk.

  Learn more about `PersistentVolume` parameters here.

- `PersistentVolumeClaim` object:

  - Name: `my-pvc`.
  - Access mode: `ReadWriteOnce`. Only pods located on the same node can read and write data to this `PersistentVolume` object. Pods on other nodes will not be able to access this object.
  - Requested storage size: 5 GB.
  - Storage class: `pv-sc`. If not specified, the default storage class will be used.
  - Volume name: the `PersistentVolume` object to bind with `PersistentVolumeClaim` (`my-pv`).

  Learn more about `PersistentVolumeClaim` parameters here.
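To see the `WaitForFirstConsumer` binding happen, you can schedule a pod that mounts the claim. The sketch below is a minimal, hypothetical example (the pod and container names, image, and mount path are illustrative and not part of the configuration above):

```hcl
# Hypothetical pod that mounts the claim; creating it triggers the
# binding of my-pvc to my-pv.
resource "kubernetes_pod" "pv_test_pod" {
  metadata {
    name = "pv-test-pod"
  }
  spec {
    container {
      name  = "app"
      image = "nginx:1.25"
      volume_mount {
        name       = "storage"
        mount_path = "/data"
      }
    }
    volume {
      name = "storage"
      persistent_volume_claim {
        claim_name = kubernetes_persistent_volume_claim.my_pvc.metadata[0].name
      }
    }
  }
}
```

Until such a consumer exists, `kubectl get pvc my-pvc` will show the claim in the `Pending` state; this is expected behavior with `WaitForFirstConsumer`.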