Integration with Crossplane
Crossplane is an open-source Kubernetes add-on that lets you create and manage cloud infrastructure resources through the Kubernetes API.
To create a Yandex Compute Cloud VM using Crossplane installed in a Kubernetes cluster:
- Get your cloud ready.
- Create Managed Service for Kubernetes resources.
- Create Yandex Cloud resources using Crossplane.
If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
- Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
- Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).
- Fee for a NAT gateway (see Virtual Private Cloud pricing).
Get your cloud ready
- If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

  By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also set a different folder for any specific command using the `--folder-name` or `--folder-id` parameter.
- Install jq.
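  If jq is not installed yet, it is usually available from your OS package manager. A minimal sketch, assuming a Debian- or Ubuntu-based system:

  ```bash
  # Install jq from the distribution packages (use your OS package manager on other systems)
  sudo apt-get update && sudo apt-get install -y jq

  # Verify the installation
  jq --version
  ```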
Create Managed Service for Kubernetes resources
- Create a Kubernetes cluster and node group.

  Manually
  - If you do not have a network yet, create one.
  - If you do not have any subnets yet, create them in the availability zones where the new Kubernetes cluster and node group will reside.
  - Create these service accounts:
    - Service account with the `k8s.clusters.agent` and `vpc.publicAdmin` roles for the folder where you want to create the Kubernetes cluster. This service account will be used to create the resources for your Kubernetes cluster.
    - Service account with the `container-registry.images.puller` role. The nodes will use this account to pull the required Docker images from the registry.

    Tip
    You can use the same service account to manage your Kubernetes cluster and its node groups.
  - Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning
    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
  - Create a Kubernetes cluster and node group with any suitable configuration. When creating them, specify the preconfigured security groups.
  Terraform
  - If you do not have Terraform yet, install it.
  - Get the authentication credentials. You can add them to environment variables (see the environment variable example after this list) or specify them later in the provider configuration file.
  - Configure and initialize the provider. There is no need to create a provider configuration file manually: you can download it.
  - Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
  - Download the `k8s-cluster.tf` cluster configuration file to the same working directory. This file describes:
    - The Kubernetes cluster.
    - The service account for the Managed Service for Kubernetes cluster and node group.
    - The security groups with the rules required for the Managed Service for Kubernetes cluster and its node groups.

    Warning
    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
  - Specify the following in the configuration file:
    - Folder ID.
    - Kubernetes version for the Kubernetes cluster and node groups.
    - Kubernetes cluster CIDR.
    - Name of the Managed Service for Kubernetes cluster service account.
  - Make sure the Terraform configuration files are correct using this command:

    ```bash
    terraform validate
    ```

    Terraform will show any errors found in your configuration files.
  - Create the required infrastructure:
    - Run this command to view the planned changes:

      ```bash
      terraform plan
      ```

      If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step that does not apply changes to your resources.
    - If everything looks correct, apply the changes:
      - Run this command:

        ```bash
        terraform apply
        ```

      - Confirm updating the resources.
      - Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
- Install kubectl and configure it to work with the new cluster (see the kubeconfig example after this list).
- Set up a NAT gateway for the Kubernetes cluster node subnet.
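For the authentication credentials step above, one common option is to export them as environment variables that the Yandex Cloud Terraform provider reads. A minimal sketch, assuming you authenticate with an IAM token obtained through the yc CLI:

```bash
# Credentials for the Yandex Cloud Terraform provider, taken from the current yc CLI profile
export YC_TOKEN=$(yc iam create-token)          # short-lived IAM token
export YC_CLOUD_ID=$(yc config get cloud-id)    # cloud to deploy into
export YC_FOLDER_ID=$(yc config get folder-id)  # folder to deploy into
```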
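For the kubectl configuration step above, the kubeconfig for a Managed Service for Kubernetes cluster can be generated with the yc CLI. A minimal sketch, assuming the cluster master has a public IP address and `<cluster_name>` is the name you gave the cluster:

```bash
# Add the cluster credentials to the local kubeconfig (connect via the public endpoint)
yc managed-kubernetes cluster get-credentials <cluster_name> --external

# Check that kubectl now talks to the new cluster
kubectl cluster-info
```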
Create Yandex Cloud resources using Crossplane
- Define the resources you want to create with Crossplane. To get the list of available resources, run the following command:

  ```bash
  kubectl get crd | grep yandex-cloud.jet.crossplane.io
  ```

- Define the parameters for your resources. To view the available parameters of a specific resource, run this command (for a concrete example, see the sketch after this list):

  ```bash
  kubectl describe crd <resource_name>
  ```

- Create the `vm-instance-template.yml` manifest template that describes the network and subnet already existing in the folder, as well as the `crossplane-vm` VM you are going to create with Crossplane:

  ```yaml
  # Adding an existing network to the configuration
  apiVersion: vpc.yandex-cloud.jet.crossplane.io/v1alpha1
  kind: Network
  metadata:
    name: <name_of_existing_network>
    annotations:
      # Point the provider to the existing network
      crossplane.io/external-name: <ID_of_existing_network>
  spec:
    # Prohibit deletion of the existing network
    deletionPolicy: Orphan
    forProvider:
      name: <name_of_existing_network>
    providerConfigRef:
      name: default
  ---
  # Adding an existing subnet to the configuration
  apiVersion: vpc.yandex-cloud.jet.crossplane.io/v1alpha1
  kind: Subnet
  metadata:
    name: <name_of_existing_subnet>
    annotations:
      # Point the provider to the existing subnet
      crossplane.io/external-name: <ID_of_existing_subnet>
  spec:
    # Prohibit deletion of the existing subnet
    deletionPolicy: Orphan
    forProvider:
      name: <name_of_existing_subnet>
      networkIdRef:
        name: <name_of_existing_network>
      v4CidrBlocks:
        - <IPv4_CIDR_of_existing_subnet>
    providerConfigRef:
      name: default
  ---
  # Creating a VM
  apiVersion: compute.yandex-cloud.jet.crossplane.io/v1alpha1
  kind: Instance
  metadata:
    name: crossplane-vm
  spec:
    forProvider:
      name: crossplane-vm
      platformId: standard-v1
      zone: ru-central1-a
      resources:
        - cores: 2
          memory: 4
      bootDisk:
        - initializeParams:
            - imageId: fd80bm0rh4rkepi5ksdi
      networkInterface:
        - subnetIdRef:
            name: <name_of_existing_subnet>
          # Automatically assign a public IP address to the VM
          nat: true
      metadata:
        ssh-keys: "<public_SSH_key>"
    providerConfigRef:
      name: default
    # Write the VM access credentials into a secret
    writeConnectionSecretToRef:
      name: instance-conn
      namespace: default
  ```

  In the VM configuration section:
  - `zone: ru-central1-a`: Availability zone to host the new VM.
  - `name: crossplane-vm`: Name of the VM to create with Crossplane.
  - `imageId: fd80bm0rh4rkepi5ksdi`: ID of the VM boot image. You can get it from the list of images. This example uses the Ubuntu 22.04 LTS image.
  For examples of how to configure Yandex Cloud resources, see the provider's GitHub repository.

- Apply the `vm-instance-template.yml` manifest:

  ```bash
  kubectl apply -f vm-instance-template.yml
  ```

- Check the state of the new resources:

  ```bash
  kubectl get network
  kubectl get subnet
  kubectl get instance
  ```

- Make sure `crossplane-vm` appeared in the folder:

  ```bash
  yc compute instance list
  ```

- To get the VM access credentials from the secret, run this command:

  ```bash
  kubectl get secret instance-conn -o json | jq -r '.data | map_values(@base64d)'
  ```

  Expected result:

  ```json
  {
    "external_ip": "<public_IP_address>",
    "fqdn": "<full_domain_name>",
    "internal_ip": "<internal_IP_address>"
  }
  ```
Delete the resources you created
Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:
- Delete `crossplane-vm`:

  ```bash
  kubectl delete instance crossplane-vm
  ```

- Delete the other resources using the same method you used to create them. If you created them with Terraform:
  - In the terminal window, go to the directory containing the infrastructure plan.

    Warning
    Make sure the directory has no Terraform manifests with resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
  - Delete the resources:
    - Run this command:

      ```bash
      terraform destroy
      ```

    - Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.