Creating a self-managed Kubernetes cluster using the Yandex Cloud provider for the Kubernetes Cluster API
- Get your cloud ready
- Set up your environment
- Prepare an OS image for cluster nodes
- Get a Docker image with the Yandex Cloud provider
- Install the Yandex Cloud provider and the Kubernetes Cluster API provider
- Generate cluster manifests
- Deploy a cluster
- Connect to the cluster
- Install a CCM to the new cluster
- Install a CNI to the new cluster
- Check the connection between the managing cluster and the new cluster
- Delete the resources you created
Cluster-api-provider-yandex
The cluster is deployed on Yandex Compute Cloud virtual machines and an L7 Yandex Application Load Balancer.
Advantages of using the Yandex Cloud provider for creating clusters:
- Integration with the Yandex Cloud API.
- Declarative approach to cluster creation and management.
- Ability to describe the cluster as a CustomResourceDefinition custom resource.
- Wide range of parameters for configuring cluster compute resources.
- Custom OS images for masters and nodes.
- Custom control plane.
- Alternative to Terraform in CI processes.
Provider compatibility with the Kubernetes Cluster API
| Provider version | Cluster API version |
|---|---|
| v1alpha1 | v1beta1 (v1.x) |
To deploy a Kubernetes cluster in Yandex Cloud using the Cluster API:
- Get your cloud ready.
- Set up your environment.
- Prepare an OS image for cluster nodes.
- Get a Docker image with the Yandex Cloud provider.
- Install the Yandex Cloud provider and the Kubernetes Cluster API provider.
- Generate cluster manifests.
- Deploy a cluster.
- Connect to the cluster.
- Install the CCM.
- Install the CNI.
- Check the connection between the managing cluster and the new cluster.
If you no longer need the resources you created, delete them.
Get your cloud ready
Sign up for Yandex Cloud and create a billing account:
- Navigate to the management console and log in to Yandex Cloud or create a new account.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page.
Learn more about clouds and folders here.
Required paid resources
The infrastructure support costs include:
- Fee for computing resources and disks of VMs used for Kubernetes cluster deployment, auxiliary VM, and Managed Service for Kubernetes managing cluster nodes (see Compute Cloud pricing).
- Fee for using an L7 load balancer’s computing resources (see Yandex Application Load Balancer pricing).
- Fee for using Managed Service for Kubernetes managing cluster master and outbound traffic (see Yandex Managed Service for Kubernetes pricing).
- Fee for public IP addresses for auxiliary VMs and Managed Service for Kubernetes managing cluster (see Yandex Virtual Private Cloud pricing).
- Fee for using the NAT gateway (see Yandex Virtual Private Cloud pricing).
Optional costs
- If you intend to use a custom image for the new Kubernetes cluster nodes:
- Fee for storing the image in the bucket and data operations (see Yandex Object Storage pricing).
- Fee for storing the image in Compute Cloud (see Yandex Compute Cloud pricing).
- If you intend to use a custom Docker image to deploy the Yandex Cloud provider in the managing cluster: fee for storing the Docker image in the registry and outgoing traffic (see Yandex Container Registry pricing).
Set up your infrastructure
- Prepare a Yandex Cloud service account (see a possible CLI sketch after this list):
  - Create a service account you will use to create resources for the cluster.
  - Assign the compute.editor and alb.editor roles for the folder to the service account.
  - Create an authorized key for the service account in JSON format.
- If your folder does not have a Virtual Private Cloud network yet, create one. Also create a subnet.
- The new cluster infrastructure will automatically be assigned the default security group, which is created together with the network. Add the following rules for incoming traffic to this group:

  | Protocol | Port range | Source type | Source | Description |
  |---|---|---|---|---|
  | TCP | 0-65535 | Security group | Balancer | Health checks by an L7 load balancer |
  | Any | 8443 | CIDR | 0.0.0.0/0 | Access to the Kubernetes API |
- The created cluster will be accessible within the cloud network via an internal IP address. To enable remote access to the cluster, create an auxiliary VM in the same cloud network as the new cluster and install kubectl on it.
- Create a Managed Service for Kubernetes managing cluster with a public IP address and a node group. You will need this cluster to deploy the new cluster using the Cluster API and to manage the cluster infrastructure.
  Tip

  You can also deploy the managing cluster locally, for example, using the kind utility.

- For the new cluster to have internet access and be able to pull Docker images, configure a NAT gateway for the subnet the new cluster will be located in.
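A minimal CLI sketch of the service account preparation steps above, assuming the yc CLI is installed and configured; the service account name capy-sa is an example, check the flags against the yc reference before running:

```
# Create a service account for cluster resources (name is an example)
yc iam service-account create --name capy-sa

# Assign the required roles for the folder
yc resource-manager folder add-access-binding <folder_name_or_ID> \
  --role compute.editor \
  --service-account-name capy-sa
yc resource-manager folder add-access-binding <folder_name_or_ID> \
  --role alb.editor \
  --service-account-name capy-sa

# Create an authorized key in JSON format
yc iam key create \
  --service-account-name capy-sa \
  --output key.json
```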
Set up your environment
The environment is configured on the local computer.
- Install the following tools:
  - Go 1.22.0 or higher.
  - Docker 17.03 or higher.
  - kubectl 1.11.3 or higher.
  - clusterctl 1.5.0 or higher.
- Configure kubectl access to the Managed Service for Kubernetes managing cluster (see a possible CLI sketch after this list). If the managing cluster is deployed locally with the help of kind, configure access to it as per this guide.
- Clone the cluster-api-provider-yandex repository and navigate to the project directory:

  ```
  git clone https://github.com/yandex-cloud/cluster-api-provider-yandex.git
  cd cluster-api-provider-yandex
  ```
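If the managing cluster runs in Managed Service for Kubernetes, one possible way to configure kubectl access is via the yc CLI; a sketch assuming the yc CLI is set up and you know the managing cluster name:

```
# Fetch kubectl credentials for the managing cluster via its public IP address
yc managed-kubernetes cluster get-credentials <managing_cluster_name> --external

# Verify access
kubectl cluster-info
```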
Prepare an OS image for cluster nodes
The OS image deployed on the nodes of the new cluster must be ready to work with the Kubernetes Cluster API and compatible with Compute Cloud.
You can use a ready-made test image or build a custom one:
To use an Ubuntu 24.04 test OS image ready for Kubernetes 1.31.4, specify the image ID fd8a3kknu25826s8hbq3 in the YANDEX_CONTROL_PLANE_MACHINE_IMAGE_ID variable when generating the cluster manifest.
Warning
This image is created for informational purposes only, do not use it in production.
To build a custom OS image:

- Build your OS image using the Image Builder utility. See also: Prepare a disk image for Compute Cloud.
- Upload the image to Compute Cloud and save its ID (see a possible CLI sketch after this list).
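A possible way to upload the built image, assuming it was first placed in an Object Storage bucket; the image, bucket, and object names below are hypothetical:

```
# Create a Compute Cloud image from an image file stored in Object Storage
yc compute image create \
  --name capy-node-image \
  --source-uri "https://storage.yandexcloud.net/<bucket_name>/<image_file_name>"

# The image ID appears in the command output; you can also look it up later:
yc compute image get --name capy-node-image
```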
Get a Docker image with the Yandex Cloud provider
You can use a ready-made Docker image with the Yandex Cloud provider from a public Yandex Container Registry or build your own image from the source code.
To use the ready-made Docker image:

- Authenticate in Container Registry using the Docker credential helper.
- Add to the IMG environment variable the path to the Docker image with the Yandex Cloud provider in the public registry:

  ```
  export IMG=cr.yandex/crpsjg1coh47p81vh2lc/capy/cluster-api-provider-yandex:latest
  ```
To build your own Docker image from the source code:

- Create a Container Registry and save its ID (see a possible CLI sketch after this list).
- Authenticate in your Container Registry using the Docker credential helper.
- Add to the IMG environment variable the path the new Docker image will be stored at in the registry:

  ```
  export IMG=cr.yandex/<registry_ID>/cluster-api-provider-yandex:<tag>
  ```
- If you are building your Docker image on a non-AMD64 computer, edit the docker-build section in the Makefile:

  ```
  docker build --platform linux/amd64 -t ${IMG} .
  ```
- Run the Docker daemon.
- Build a Docker image and push it to the registry:

  ```
  make docker-build docker-push
  ```
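A brief sketch of the registry-related steps from the list above, assuming the yc CLI is configured; the registry name capy-registry is an example:

```
# Create a registry and note its ID in the output
yc container registry create --name capy-registry

# Configure Docker to authenticate in Container Registry via the credential helper
yc container registry configure-docker

# Set the image path, substituting the registry ID and a tag of your choice
export IMG=cr.yandex/<registry_ID>/cluster-api-provider-yandex:<tag>
```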
Install the Yandex Cloud provider and the Kubernetes Cluster API provider
- Initialize the managing cluster:

  ```
  clusterctl init
  ```

  The managing cluster will have the core components of the Kubernetes Cluster API and cert-manager installed.
- Create custom resource definitions (CustomResourceDefinitions, CRD) for the new cluster:

  ```
  make install
  ```
- Retrieve the list of installed CRDs:

  ```
  kubectl get crd | grep cluster.x-k8s.io
  ```

  To get the manifest of a specific CRD, run the following command:

  ```
  kubectl get crd <CRD_name> \
    --output yaml
  ```
- Create a namespace for the Yandex Cloud provider:

  ```
  kubectl create namespace capy-system
  ```
- Create a secret with the Yandex Cloud service account's authorized key:

  ```
  kubectl create secret generic yc-sa-key \
    --from-file=key=<path_to_file_with_authorized_key> \
    --namespace capy-system
  ```
- Install the Yandex Cloud provider:

  ```
  make deploy
  ```
Generate cluster manifests
- Get the IDs of the Yandex Cloud resources you need to deploy a cluster (see a possible CLI sketch after this list):
  - OS image
  - Folder
  - Availability zone
  - Network
  - Subnet in the selected availability zone
- Provide the IDs in these environment variables:

  ```
  export YANDEX_CONTROL_PLANE_MACHINE_IMAGE_ID=<image_ID>
  export YANDEX_FOLDER_ID=<folder_ID>
  export YANDEX_NETWORK_ID=<network_ID>
  export YANDEX_SUBNET_ID=<subnet_ID>
  export YANDEX_ZONE_ID=<availability_zone_ID>
  ```

  If you did not build a custom OS image, set the YANDEX_CONTROL_PLANE_MACHINE_IMAGE_ID variable to fd8a3kknu25826s8hbq3. This is the ID of a test Ubuntu 24.04 image compatible with Kubernetes 1.31.4.
- Generate cluster manifests:

  ```
  clusterctl generate cluster <name_of_new_cluster> \
    --from templates/cluster-template.yaml > /tmp/capy-cluster.yaml
  ```

  The capy-cluster.yaml manifest will describe the following:
  - L7 Application Load Balancer with a dynamic internal IP address. You can give it a fixed IP address.

    Warning

    Once the cluster is created, you will not be able to assign a fixed IP address to the L7 load balancer.
  - Three control plane nodes for the cluster.
- Optionally, to deploy workload cluster nodes right away, add their description to the manifest:

  ```
  clusterctl generate cluster <name_of_new_cluster> \
    --worker-machine-count <number_of_workload_nodes> \
    --from templates/cluster-template.yaml > /tmp/capy-cluster.yaml
  ```
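A possible way to look up the required IDs with the yc CLI, assuming it is installed and configured:

```
yc config get folder-id   # folder ID from the current CLI profile
yc compute image list     # OS image ID
yc vpc network list       # network ID
yc vpc subnet list        # subnet ID and its availability zone
```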
Optionally, configure the API server endpoint
Specify the parameters for the L7 load balancer in the capy-cluster.yaml manifest:

```
loadBalancer:
  listener:
    address: <fixed_IP_address_from_subnet_range>
    subnet:
      id: <subnet_ID>
```
Deploy a cluster
Run this command:

```
kubectl apply -f /tmp/capy-cluster.yaml
```

You can monitor cluster creation progress from the Yandex Cloud management console or from the capy-controller-manager pod logs:

```
kubectl logs <capy-controller-manager_pod_name> \
  --namespace capy-system \
  --follow
```
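To find the capy-controller-manager pod name for the logs command, you can list the pods in the provider namespace, for example:

```
kubectl get pods --namespace capy-system
```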
Connect to the cluster
The details for connection to the new cluster will be stored in the <name_of_new_cluster>-kubeconfig secret in the managing cluster.
- Get the data from the secret:

  ```
  kubectl get secret <name_of_new_cluster>-kubeconfig \
    --output yaml | yq -r '.data.value' | base64 \
    --decode > capy-cluster-config
  ```
- Provide the kubectl configuration file to the auxiliary VM:

  ```
  scp <path_to_capy-cluster-config_file_on_local_computer> \
    <username>@<VM_public_IP_address>:/home/<username>/.kube/config
  ```
- Connect to the auxiliary VM over SSH.
- Make sure the new cluster is accessible:

  ```
  kubectl cluster-info
  ```
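Instead of overwriting /home/<username>/.kube/config on the auxiliary VM, you can keep the copied file under a different name and point kubectl at it explicitly; a sketch assuming the file is saved as capy-cluster-config in the current directory:

```
kubectl --kubeconfig capy-cluster-config cluster-info
kubectl --kubeconfig capy-cluster-config get nodes
```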
Install a CCM to the new cluster
For connection between the cluster resources and Yandex Cloud resources, install a cloud controller manager (CCM) to the new cluster.

Note

If you want to use the Kubernetes Cloud Controller Manager for Yandex Cloud, add the current version of the Docker image and the YANDEX_CLUSTER_NAME environment variable with the new cluster's name to the yandex-cloud-controller-manager.yaml DaemonSet manifest.
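After installing the CCM, one way to confirm it is running in the new cluster (the pod name prefix below matches the output example later in this tutorial):

```
kubectl get pods --namespace kube-system | grep cloud-controller-manager
```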
Install a CNI to the new cluster
To provide network functionality for pods in the new cluster, install a container network interface (CNI) plugin to it.
For more information, see the documentation for the CNI plugin you choose.
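For example, Calico (which also appears in the output example below) can be installed with a single manifest; the version placeholder is illustrative, check the Calico documentation for the current release:

```
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/<calico_version>/manifests/calico.yaml
```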
Check the connection between the managing cluster and the new cluster
- Connect to the auxiliary VM and make sure that all the pods with the necessary system components have been deployed in the cluster:

  ```
  kubectl get pods --all-namespaces
  ```

  Output example:

  ```
  NAMESPACE     NAME                                                        READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-695bcfd99c-rcc42                    1/1     Running   0          3h55m
  kube-system   calico-node-9qhxj                                           1/1     Running   0          3h55m
  kube-system   coredns-7c65d6cfc9-52tvn                                    1/1     Running   0          4h50m
  kube-system   coredns-7c65d6cfc9-dpgvg                                    1/1     Running   0          4h50m
  kube-system   etcd-capy-cluster-control-plane-p646q                       1/1     Running   0          4h50m
  kube-system   kube-apiserver-capy-cluster-control-plane-p646q             1/1     Running   0          4h50m
  kube-system   kube-controller-manager-capy-cluster-control-plane-p646q    1/1     Running   0          4h50m
  kube-system   kube-proxy-wb7jr                                            1/1     Running   0          4h50m
  kube-system   kube-scheduler-capy-cluster-control-plane-p646q             1/1     Running   0          4h50m
  kube-system   yandex-cloud-controller-manager-nwhwv                       1/1     Running   0          26s
  ```
- Use your local computer to check the connection between the managing cluster and the new cluster:

  ```
  clusterctl describe cluster <name_of_new_cluster>
  ```

  Result:

  ```
  NAME                                                              READY  SEVERITY  REASON  SINCE  MESSAGE
  Cluster/capy-cluster                                              True                     10s
  ├─ClusterInfrastructure - YandexCluster/capy-cluster
  └─ControlPlane - KubeadmControlPlane/capy-cluster-control-plane   True                     10s
    └─3 Machines...                                                 True                     3m9s   See capy-cluster-control-plane-cf72l, capy-cluster-control-plane-g9jw7, ...
  ```
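You can also inspect the Cluster API objects directly in the managing cluster as an additional check, for example:

```
kubectl get clusters.cluster.x-k8s.io,machines.cluster.x-k8s.io --all-namespaces
```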
Delete the resources you created
Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:
- Delete the Kubernetes cluster created using the Cluster API:

  ```
  kubectl delete -f /tmp/capy-cluster.yaml
  ```
- Delete the CRDs from the Managed Service for Kubernetes managing cluster:

  ```
  make uninstall
  ```
- Delete the Yandex Cloud provider controller from the managing cluster:

  ```
  make undeploy
  ```
- Delete the auxiliary Yandex Cloud resources if you created them (see a possible CLI sketch after this list):
- Node group of the Managed Service for Kubernetes managing cluster
- Managed Service for Kubernetes managing cluster
- Auxiliary VM
- NAT gateway
- OS image in Compute Cloud
- OS image in Object Storage
- Bucket
- Docker image
- Registry
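A hedged sketch of deleting some of the auxiliary resources with the yc CLI; the names are placeholders or examples from the sketches above, the list is not exhaustive, and you should confirm each resource is no longer needed before deleting it:

```
# Delete resources in roughly the order of the list above
yc managed-kubernetes node-group delete <node_group_name>
yc managed-kubernetes cluster delete <managing_cluster_name>
yc compute instance delete <auxiliary_VM_name>
yc vpc gateway delete <NAT_gateway_name>
yc compute image delete --name capy-node-image
yc container registry delete --name capy-registry
```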