© 2025 Direct Cursus Technology L.L.C.

Health checking applications in a Yandex Managed Service for Kubernetes cluster via a Yandex Application Load Balancer

Written by
Yandex Cloud
Updated at November 11, 2025
  • Required paid resources
  • Get your cloud ready
    • Set up the infrastructure
    • Install the Application Load Balancer ingress controller
    • Install additional dependencies
  • Create a Docker image
  • Deploy a test application
  • Set up an address for the L7 load balancer
  • Create the Ingress and HttpBackendGroup resources
  • Check the result
  • Delete the resources you created

You can use an Application Load Balancer ingress controller to automatically health check your applications deployed in a Managed Service for Kubernetes cluster.

Tip

We recommend using the new Yandex Cloud Gwin controller instead of the Application Load Balancer ingress controller.

An ingress controller installed in a cluster deploys an L7 load balancer with all the required Application Load Balancer resources based on the configuration of the Ingress and HttpBackendGroup resources you created.

The L7 load balancer automatically health checks the application in this cluster. Depending on the results, the L7 load balancer allows or denies external traffic to the backend (the Service resource). For more information, see Health checks.

By default, the Application Load Balancer ingress controller listens for application health check requests from the L7 load balancer on TCP port 10501 and checks the health of kube-proxy pods on each cluster node. As long as kube-proxy is healthy, Kubernetes handles failures itself: if an application in a particular pod stops responding, Kubernetes redirects traffic to a different pod or node.

In this tutorial, you will configure your own application health checks using the HttpBackendGroup resource parameters and open a dedicated port on cluster nodes for these checks in the NodePort type Service resource parameters.

You can view health check results in the management console.

Note

You can also configure application health checks using the ingress.alb.yc.io/health-checks annotation of the Service resource.

To deploy an application in a Managed Service for Kubernetes cluster, configure access to it, and set up its health checks via an Application Load Balancer:

  1. Get your cloud ready.
  2. Create a Docker image.
  3. Deploy a test application.
  4. Set up an address for the L7 load balancer.
  5. Create the Ingress and HttpBackendGroup resources.
  6. Check the result.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Fee for a DNS zone and DNS requests (see Cloud DNS pricing).
  • Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
  • Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
  • Fee for using an L7 load balancer’s computing resources (see Application Load Balancer pricing).
  • Fee for public IP addresses for cluster nodes and L7 load balancer (see Virtual Private Cloud pricing).
  • Fee for storing Docker images in Container Registry (see Container Registry pricing).

Get your cloud ready

Set up the infrastructure

Manually
Terraform
  1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Also configure the security groups required for Application Load Balancer.

    The application will be available on the Managed Service for Kubernetes cluster nodes on port 30080. Application health checks will be available on port 30081. Make sure these ports are open for the L7 load balancer in the node group's security group. You can also make these ports accessible from the internet.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  2. Create a Managed Service for Kubernetes cluster. When creating a cluster, specify the preconfigured security groups.

    If you only use your cluster from within the Yandex Cloud network, it does not need a public IP address. To enable internet access to your cluster, assign it a public IP address.

  3. Create a node group. To enable internet access for your node group (e.g., for Docker image pulls), assign it a public IP address. Specify the preconfigured security groups.

  4. Create a registry in Yandex Container Registry.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-custom-health-checks.tf configuration file to the same working directory.

    This file describes:

    • Network.
    • Subnet.
    • Security groups for the Managed Service for Kubernetes cluster, node group, and the Application Load Balancer.
    • Service account for the Kubernetes cluster.
    • Kubernetes cluster.
    • Kubernetes node group.
    • Yandex Container Registry.
  6. Specify the following in the k8s-custom-health-checks.tf file:

    • folder_id: Cloud folder ID, same as in the provider settings.
    • k8s_version: Kubernetes version. Available versions are listed in Release channels.
  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    Terraform will show any errors found in your configuration files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Install the Application Load Balancer ingress controller

Use this guide to install the ALB Ingress Controller application in a separate yc-alb namespace. Later on, all the required Kubernetes resources will be created in this namespace.

Install additional dependencies

  1. If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

    By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

  2. Install kubectl and configure it to work with the new cluster.

    If a cluster has no public IP address assigned and kubectl is configured via the cluster's private IP address, run kubectl commands on a Yandex Cloud VM that is in the same network as the cluster.

  3. Install Docker.

  4. Authenticate in Yandex Container Registry using a Docker credential helper.

Create a Docker image

The source files for a Docker image reside in the yc-mk8s-alb-ingress-health-checks repository.

The Docker image will be created from app/Dockerfile and will contain the test application code from the app/healthchecktest.go file. You will use this Docker image to deploy the application in your Managed Service for Kubernetes cluster.

The application will respond to HTTP requests as follows depending on the pod port:

  • 80: Return request path parameters in the response body, e.g., /test-path. This is the main application functionality that will be available via the L7 load balancer.
  • 8080: Return OK in the response body. This functionality will be used for application health checks.

Successful requests will return the 200 OK HTTP code.

To create a Docker image:

  1. Clone the yc-mk8s-alb-ingress-health-checks repository:

    git clone git@github.com:yandex-cloud-examples/yc-mk8s-alb-ingress-health-checks.git
    
  2. In the terminal, go to the root of the repository directory.

  3. Get the Container Registry ID. You can get it with the list of registries in the folder.

  4. Add the name of the Docker image to create to the environment variable:

    export TEST_IMG=cr.yandex/<registry_ID>/example-app1:latest
    
  5. Build the Docker image:

    docker build -t ${TEST_IMG} -f ./app/Dockerfile .
    

    The dot at the end of the command sets the repository root directory as the Docker build context.

  6. Push the Docker image to the registry:

    docker push ${TEST_IMG}
    

    If you fail to push the image, follow these steps:

    • Make sure you are authenticated in Container Registry using a Docker credential helper.
    • Configure registry access: grant the PUSH permission to your computer's IP address so it can push Docker images.

Deploy a test application

Build a test application from the created Docker image and the app/testapp.yaml configuration file.

The file contains the description of Kubernetes resources: Deployment and Service of the NodePort type.

The Service resource contains the description of ports used to access the application on your cluster nodes:

  • spec.ports.name: http: Port to access the main functionality of the application, 80 on the pod and 30080 on the node.
  • spec.ports.name: health: Port for application health checks, 8080 on the pod and 30081 on the node.
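Put together, the Service portion of app/testapp.yaml is shaped roughly like the sketch below. The resource name and selector here are illustrative; the repository file is authoritative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: alb-demo-1        # illustrative name
spec:
  type: NodePort
  selector:
    app: alb-demo-1       # illustrative selector
  ports:
    - name: http          # main application functionality
      port: 80
      targetPort: 80      # pod port
      nodePort: 30080     # node port
    - name: health        # application health checks
      port: 8080
      targetPort: 8080    # pod port
      nodePort: 30081     # node port
```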

To build a test application:

  1. Specify the value of the TEST_IMG environment variable in the spec.template.spec.containers.image field in the app/testapp.yaml file. To get this value, run:

    printenv TEST_IMG
    
  2. Create the application from app/testapp.yaml:

    kubectl apply -f ./app/testapp.yaml --namespace=yc-alb
    

    The file path in the command is relative to the repository root directory.

  3. Make sure the pods with the application are running:

    kubectl get po --namespace=yc-alb
    

    Result:

    NAME                               READY   STATUS    RESTARTS   AGE
    alb-demo-1-54b95979b4-***          1/1     Running   0          71s
    alb-demo-1-54b95979b4-***          1/1     Running   0          71s
    yc-alb-ingress-controller-***      1/1     Running   0          11m
    yc-alb-ingress-controller-hc-***   1/1     Running   0          11m
    
  4. Test the application by specifying the IP address of the Managed Service for Kubernetes cluster node in the request. You can find out the IP address of the node in the management console.

    • Main functionality:

      curl --include http://<node_IP_address>:30080/test-path
      

      Result:

      HTTP/1.1 200 OK
      Date: Thu, 18 Jul 2024 11:55:52 GMT
      Content-Length: 10
      Content-Type: text/plain; charset=utf-8
      
      /test-path%
      
    • Application health check:

      curl --include http://<node_IP_address>:30081
      

      Result:

      HTTP/1.1 200 OK
      Date: Thu, 18 Jul 2024 12:00:57 GMT
      Content-Length: 2
      Content-Type: text/plain; charset=utf-8
      
      OK%
      

Set up an address for the L7 load balancer

This address will be used to access the application from the internet.

To set up an address for the load balancer:

Manually
Terraform
  1. Reserve a static public IP address for your Application Load Balancer.

  2. Register a public domain zone and delegate your domain.

  3. To map the address to the domain, create an A record for the delegated domain. Specify the reserved IP address as the record value.

  4. Make sure the A record is added:

    host <domain>
    

    Result:

    <domain> has address <IP_address>
    
  1. Place the address-for-k8s-health-checks.tf configuration file in the same working directory as the k8s-custom-health-checks.tf file.

    address-for-k8s-health-checks.tf describes:

    • Static public IP address.
    • Public DNS zone.
    • An A record for this zone mapping the reserved IP address to the delegated domain.
  2. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    Terraform will show any errors found in your configuration files.

  3. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  4. Make sure the A record is added:

    host <domain>
    

    Result:

    <domain> has address <IP_address>
    

Create the Ingress and HttpBackendGroup resources

Based on the Ingress and HttpBackendGroup resources, the ingress controller will deploy an L7 load balancer with all the required Application Load Balancer resources.

The ingress.yaml and httpbackendgroup.yaml configuration files for the specified resources are located in the yc-mk8s-alb-ingress-health-checks repository.

You can specify the settings for custom application health checks in the HttpBackendGroup resource in the spec.backends.healthChecks parameter.
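For illustration, such a resource is shaped roughly like the sketch below. Only the spec.backends.healthChecks parameter is confirmed above; the apiVersion, resource names, and field layout are assumptions, and httpbackendgroup.yaml in the repository is authoritative:

```yaml
apiVersion: alb.yc.io/v1alpha1          # assumed API version
kind: HttpBackendGroup
metadata:
  name: alb-demo-backend-group          # illustrative name
  namespace: yc-alb
spec:
  backends:
    - name: alb-demo-backend            # illustrative name
      weight: 100
      service:
        name: alb-demo-1                # the NodePort Service from the previous section
        port:
          number: 80
      healthChecks:                     # custom application health check
        - http:
            path: /                     # the app returns OK on this path
          port: 30081                   # dedicated node port for health checks
```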

To create resources:

  1. In the ingress.yaml file, specify the following values for annotations:

    • ingress.alb.yc.io/subnets: One or more subnets to host the Application Load Balancer.
    • ingress.alb.yc.io/security-groups: One or more security groups for the load balancer. If you skip this parameter, the default security group will be used. At least one of the security groups must allow an outgoing TCP connection to port 10501 in the Managed Service for Kubernetes node group subnet or to its security group.
    • ingress.alb.yc.io/external-ipv4-address: Public access to the load balancer from the internet. Specify the previously reserved static public IP address.
  2. In the same ingress.yaml file, specify the delegated domain in the spec.rules.host parameter.

  3. To create the Ingress and HttpBackendGroup resources, run the following command from the root of the repository directory:

    kubectl apply -f ingress.yaml --namespace=yc-alb &&
    kubectl apply -f httpbackendgroup.yaml --namespace=yc-alb
    
  4. Wait until the resources are created and the load balancer is deployed and assigned a public IP address. This may take a few minutes.

    To follow the process and make sure it is error-free, open the logs of the pod it is run in:

    1. In the management console, go to the folder dashboard and select Managed Service for Kubernetes.

    2. Click the cluster name and select Workload in the left-hand panel.

    3. Select the yc-alb-ingress-controller-* pod (not yc-alb-ingress-controller-hc-*) that is running the resource creation.

    4. Go to the Logs tab on the pod page.

      The load balancer's creation logs are generated and displayed in real time. Any errors that occur will also be logged.
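Steps 1 and 2 above produce an ingress.yaml shaped roughly like the following sketch. The resource names and the backend reference are illustrative assumptions; the repository file is authoritative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-demo                        # illustrative name
  namespace: yc-alb
  annotations:
    ingress.alb.yc.io/subnets: <subnet_ID>
    ingress.alb.yc.io/security-groups: <security_group_ID>
    ingress.alb.yc.io/external-ipv4-address: <static_public_IP>
spec:
  rules:
    - host: <delegated_domain>          # spec.rules.host from step 2
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              resource:                 # assumed reference to the HttpBackendGroup
                apiGroup: alb.yc.io
                kind: HttpBackendGroup
                name: alb-demo-backend-group
```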

Check the result

  1. Make sure the load balancer was created. To do this, run the following command and verify that the ADDRESS field in the output contains a value:

    kubectl get ingress alb-demo --namespace=yc-alb
    

    Result:

    NAME       CLASS    HOSTS      ADDRESS        PORTS   AGE
    alb-demo   <none>   <domain>   <IP_address>   80      15h
    
  2. Check that the deployed application is available via the L7 load balancer:

    curl --include http://<domain>/test-path
    

    Result:

    HTTP/1.1 200 OK
    date: Thu, 18 Jul 2024 12:23:51 GMT
    content-length: 10
    content-type: text/plain; charset=utf-8
    server: ycalb
    
    /test-path%
    

    Note

    If the resource is unavailable at the specified URL, make sure that the security groups for the Managed Service for Kubernetes cluster and its node groups are configured correctly. If any rule is missing, add it.

  3. Make sure the app health checks are working correctly:

    Management console
    1. In the management console, go to the folder dashboard and select Application Load Balancer.
    2. Click the load balancer name and select Health checks in the left-hand panel.
    3. Check the target health. The HEALTHY status indicates the application is up and running.

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

  1. Application Load Balancer
  2. Application Load Balancer HTTP router
  3. Application Load Balancer backend group
  4. Application Load Balancer target group
  5. Managed Service for Kubernetes node group
  6. Managed Service for Kubernetes cluster
  7. Container Registry
  8. Cloud DNS public domain zone
  9. Virtual Private Cloud security groups
  10. Virtual Private Cloud static public IP address
