
How the Application Load Balancer Ingress controller works

Written by
Yandex Cloud
Updated at April 22, 2025
  • Mapping between Application Load Balancer and Kubernetes resources
  • IDs of load balancer resources in a Kubernetes cluster

An Application Load Balancer Ingress controller for Managed Service for Kubernetes has two pods:

  • The primary yc-alb-ingress-controller-* pod responsible for creating and updating Application Load Balancer resources. You can use its logs to follow the operations with the resources.

  • The yc-alb-ingress-controller-hc-* health check pod with a container that receives health check requests from the L7 load balancer on TCP port 10501 and checks the health of kube-proxy pods on each cluster node. As long as kube-proxy is healthy, Kubernetes can redirect traffic to a different pod with the application, or to a different node, even if the application in a particular pod does not respond.

    This is the default health check workflow for the Application Load Balancer Ingress controller. You can configure custom health checks to control app performance on all nodes.

Warning

Do not manually update Application Load Balancer resources created by the controller's primary pod. Any changes you make will be rolled back automatically. Use the standard Managed Service for Kubernetes cluster control methods instead.

The primary pod manages the Application Load Balancer resource architecture using the following principles:

  • Load balancers and HTTP routers, which accept incoming traffic and distribute it across backend groups, are created based on Ingress resources.

    If several Ingress resources have the same ingress.alb.yc.io/group-name annotation values, they are combined into a single load balancer.
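For example, the following two Ingress resources will be served by a single load balancer because they share the same group name (resource names, hosts, and services are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-a                                  # hypothetical name
  annotations:
    ingress.alb.yc.io/group-name: my-alb-group
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a                        # hypothetical NodePort service
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-b                                  # hypothetical name
  annotations:
    ingress.alb.yc.io/group-name: my-alb-group     # same group => same load balancer
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b                        # hypothetical NodePort service
                port:
                  number: 80
```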

    • For a load balancer to accept HTTPS traffic, the spec.tls field in the Ingress description must specify the domain names and the certificate IDs from Certificate Manager:

      spec:
        tls:
          - hosts:
              - <domain_name>
            secretName: yc-certmgr-cert-id-<certificate_ID>
      

      Where secretName is the reference to the certificate from Yandex Certificate Manager.

      This will create two types of listeners for the load balancer: some accept HTTPS traffic on port 443, while the others redirect HTTP requests (port 80) to HTTPS with the 301 Moved Permanently status code. If other Ingress resources without the spec.tls field explicitly specify traffic distribution rules for the same domain names, those rules take priority over the HTTP-to-HTTPS redirect.

      If the certificate is not available in Certificate Manager, provide it through a Kubernetes secret by specifying the secret's name in the secretName field. The Application Load Balancer Ingress controller will automatically add this certificate to Certificate Manager.
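In that case, the spec.tls entry references the Kubernetes TLS secret by its name instead of a Certificate Manager ID (the secret name and domain below are hypothetical):

```yaml
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls        # a kubernetes.io/tls secret in the cluster
```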

    • If there is no spec.tls field in the Ingress description, only listeners for incoming HTTP traffic on port 80 will be created for the load balancer.

    • If the Ingress description gives no rules for distribution of incoming traffic among the backends, it will be redirected to the default backend.
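A default backend is declared with the standard spec.defaultBackend field of the Ingress resource; the service name below is hypothetical:

```yaml
spec:
  defaultBackend:
    service:
      name: default-app        # hypothetical NodePort service
      port:
        number: 80
```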

  • You can create backend groups to process incoming traffic:

    • Based on Kubernetes services referenced in Ingress rules directly. This method is useful if you need to bind a simple backend group consisting of a single service to a route.

      In ALB Ingress Controller versions prior to 0.2.0, each backend group corresponds to a bundle of host, http.paths.path, and http.paths.pathType parameters. In versions 0.2.0 and later, the backend group corresponds to the backend.service parameter. This may cause collisions when updating the ALB Ingress Controller. To avoid them, find out whether upgrade restrictions apply to your infrastructure.
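A rule that binds a route directly to a Kubernetes service uses the standard backend.service field (host and service name are hypothetical):

```yaml
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-backend     # hypothetical NodePort service
                port:
                  number: 8080
```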

    • Based on HttpBackendGroup resources that support explicit backend group descriptions. These are custom resources from the alb.yc.io API group provided by an Ingress controller.

      As with services, refer to HttpBackendGroup in the Ingress rules (spec.rules[*].http.paths[*].backend.resource).

      Using HttpBackendGroup enables extended Application Load Balancer functionality. A backend group can route traffic to either Kubernetes services or Yandex Object Storage buckets. HttpBackendGroup allows you to distribute traffic across backends proportionally using relative weights.
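A sketch of an HttpBackendGroup that splits traffic by weight between a service and an Object Storage bucket; the apiVersion and field names follow the alb.yc.io custom resource conventions and may differ between controller versions, and all names are hypothetical:

```yaml
apiVersion: alb.yc.io/v1alpha1      # assumed API version of the alb.yc.io group
kind: HttpBackendGroup
metadata:
  name: weighted-backends           # hypothetical name
spec:
  backends:
    - name: app-v1
      weight: 80                    # ~80% of traffic
      service:
        name: app-v1                # hypothetical NodePort service
        port:
          number: 80
    - name: static-content
      weight: 20                    # ~20% of traffic
      storageBucket:
        name: my-static-bucket      # hypothetical Object Storage bucket
```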

    • Based on GrpcBackendGroup resources that support explicit backend group descriptions. These are custom resources from the alb.yc.io API group provided by an Ingress controller.

      As with services, refer to GrpcBackendGroup in the Ingress rules (spec.rules[*].http.paths[*].backend.resource).
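In the Ingress rules, a backend group is referenced through the standard backend.resource field, pointing at the custom resource in the alb.yc.io API group (host and resource name are hypothetical):

```yaml
spec:
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              resource:
                apiGroup: alb.yc.io
                kind: GrpcBackendGroup
                name: my-grpc-backends   # hypothetical name
```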

  • The backends run the services referenced in Ingress or in HttpBackendGroup/GrpcBackendGroup resources. These services are configured using Service resources.

    Warning

    Kubernetes backend services referenced in Ingress rules (directly or via HttpBackendGroup/GrpcBackendGroup) must be of type NodePort. For more information about this type, see the relevant Kubernetes article.
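A minimal NodePort service for such a backend (name, selector, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-backend               # hypothetical name
spec:
  type: NodePort                  # required by the Ingress controller
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
```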

Mapping between Application Load Balancer and Kubernetes resources

Each Application Load Balancer resource maps to a Kubernetes counterpart:

  • Load balancer: Ingress resources with identical ingress.alb.yc.io/group-name annotation values
  • HTTP router virtual hosts: Ingress.spec.rules
  • Virtual host routes: Ingress.spec.rules[*].http.paths
  • Backend group: HttpBackendGroup/GrpcBackendGroup resources or Kubernetes services
  • Target group: Cluster node group

IDs of load balancer resources in a Kubernetes cluster

The IDs of the Application Load Balancer resources deployed from the Ingress configuration are listed in the custom IngressGroupStatus resource of the Managed Service for Kubernetes cluster. To view them:

Management console

  1. In the management console, select the folder where the required Managed Service for Kubernetes cluster was created.
  2. From the list of services, select Managed Service for Kubernetes.
  3. Select the Managed Service for Kubernetes cluster whose Ingress configuration was used to create the load balancer.
  4. On the Managed Service for Kubernetes cluster page, go to the Custom resources tab.
  5. Select ingressgroupstatuses.alb.yc.io and go to the Resources tab.
  6. Select a resource with the Ingress resource group name specified in the ingress.alb.yc.io/group-name annotation and go to the YAML tab.

kubectl CLI

  1. Install kubectl and configure it to work with your cluster.

  2. Run this command:

    kubectl describe IngressGroupStatus
    

Yandex project
© 2025 Yandex.Cloud LLC