How the Application Load Balancer ingress controller works
Tip
We recommend using the new Yandex Cloud Gwin controller instead of the Application Load Balancer ingress controller.
The Application Load Balancer ingress controller for Managed Service for Kubernetes runs two pods:
- The leader pod, `yc-alb-ingress-controller-*`, manages Application Load Balancer resource creation and updates. You can use its logs to monitor resource operations.
- The health check pod, `yc-alb-ingress-controller-hc-*`, runs a container that listens on TCP port `10501` for L7 load balancer health check requests and performs health checks on `kube-proxy` pods across all cluster nodes. As long as `kube-proxy` is healthy, the workflow is as follows: if an application in a particular pod does not respond, Kubernetes redirects traffic to a different pod or node. This is the default health check workflow used by the Application Load Balancer ingress controller. You can configure custom health checks to monitor your applications across all pods.
Warning
Do not manually update Application Load Balancer resources created by the controller's leader pod. The system will automatically revert any manual modifications. Use the standard Managed Service for Kubernetes cluster management methods instead.
The leader pod manages the Application Load Balancer resource architecture according to these rules:
- Based on `Ingress` configurations, the system creates load balancers and HTTP routers that receive and distribute incoming traffic across backend groups. `Ingress` resources with the same `ingress.alb.yc.io/group-name` annotation are consolidated into one load balancer.

  - To enable HTTPS traffic on the load balancer, specify your service domain names and Certificate Manager certificate IDs in the `Ingress` `spec.tls` field:

    ```yaml
    spec:
      tls:
        - hosts:
            - <domain_name>
          secretName: yc-certmgr-cert-id-<certificate_ID>
    ```

    Here, `secretName` refers to the Yandex Certificate Manager certificate. When this field is configured, the system creates two types of load balancer listeners: HTTPS listeners serving encrypted traffic on port `443`, and HTTP listeners responding to requests on port `80` with a `301 Moved Permanently` status code that redirects clients to the HTTPS endpoint. If multiple `Ingress` rules apply to the same domain, rules without `spec.tls`, i.e., HTTP-only ones, take priority over HTTP-to-HTTPS redirects.

    If the certificate is not available in Certificate Manager, provide it through a Kubernetes secret by specifying its name in the `secretName` field. The Application Load Balancer ingress controller will automatically add this certificate to Certificate Manager.

  - If the `spec.tls` field is omitted in the `Ingress` description, the system only creates HTTP listeners processing unencrypted traffic on port `80`.

  - If no traffic distribution rules are specified in the `Ingress` description, incoming requests are routed to the default backend.
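The rules above can be illustrated with a minimal `Ingress` manifest. This is a sketch: `example-ingress`, `example-group`, `example.com`, and `example-service` are placeholder names, and `<certificate_ID>` must be replaced with your Certificate Manager certificate ID.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Ingress resources sharing this value are consolidated
    # into one load balancer.
    ingress.alb.yc.io/group-name: example-group
spec:
  tls:
    - hosts:
        - example.com
      # References a Certificate Manager certificate by its ID.
      secretName: yc-certmgr-cert-id-<certificate_ID>
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```

With `spec.tls` present, the controller creates an HTTPS listener on port `443` and an HTTP listener on port `80` that redirects to HTTPS.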
- To process incoming traffic, you can create backend groups using the following methods:

  - Specify the relevant Kubernetes services directly in `Ingress` rules. Use this method for routing traffic to backend groups containing only one service.

    Pre-0.2.0 ALB Ingress Controller versions map each backend group to a distinct combination of the `host`, `http.paths.path`, and `http.paths.pathType` values specified in an `Ingress` rule. ALB Ingress Controller versions 0.2.0 and later map backend groups directly to the `backend.service` configuration. This may cause collisions when upgrading the ALB Ingress Controller. To avoid them, check the upgrade restrictions for your infrastructure.

  - Describe your backend groups using `HttpBackendGroup` resources. These custom resources are defined in the `alb.yc.io` API group provided by the ingress controller.

    As with services, you must specify your `HttpBackendGroup` resources in the `Ingress` rules, i.e., in `spec.rules[*].http.paths[*].backend.resource`.

    Using `HttpBackendGroup` enables extended Application Load Balancer functionality. A backend group can route traffic to either Kubernetes services or Yandex Object Storage buckets. `HttpBackendGroup` also allows you to distribute traffic across backends proportionally using relative weights.

  - Describe your backend groups using `GrpcBackendGroup` resources. These custom resources are defined in the `alb.yc.io` API group provided by the ingress controller.

    As with services, you must specify your `GrpcBackendGroup` resources in the `Ingress` rules, i.e., in `spec.rules[*].http.paths[*].backend.resource`.
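As a sketch, a weighted `HttpBackendGroup` and the `Ingress` backend that references it might look as follows. All resource names here are placeholders, and the field layout assumes the `alb.yc.io/v1alpha1` schema; check it against your controller version.

```yaml
apiVersion: alb.yc.io/v1alpha1
kind: HttpBackendGroup
metadata:
  name: example-backend-group
spec:
  backends:
    # Traffic is split proportionally to the relative weights.
    - name: alpha
      weight: 70
      service:
        name: alpha-service
        port:
          number: 80
    - name: beta
      weight: 30
      service:
        name: beta-service
        port:
          number: 80
---
# Referencing the backend group from an Ingress rule
# via spec.rules[*].http.paths[*].backend.resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              resource:
                apiGroup: alb.yc.io
                kind: HttpBackendGroup
                name: example-backend-group
```

A `GrpcBackendGroup` is referenced from `Ingress` rules in the same way, with `kind: GrpcBackendGroup`.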
- The system deploys the backend services specified in `Ingress` or `HttpBackendGroup`/`GrpcBackendGroup` resources. You can configure them through `Service` resources.

Warning

Kubernetes backend services referenced in `Ingress` rules, directly or via `HttpBackendGroup`/`GrpcBackendGroup`, must be of type `NodePort`. For more information about this type, see the relevant Kubernetes article.
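For example, a backend service of the required type might look like this (the names, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  # Must be NodePort so the load balancer can reach the service
  # through the cluster nodes.
  type: NodePort
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```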
Application Load Balancer-to-Kubernetes resource mapping
| Application Load Balancer | Kubernetes |
|---|---|
| Load balancer | `Ingress` resources with the same `ingress.alb.yc.io/group-name` annotation |
| HTTP router virtual hosts | `Ingress.spec.rules` |
| Virtual host routes | `Ingress.spec.rules[*].http.paths` |
| Backend group | `HttpBackendGroup`/`GrpcBackendGroup` resources or services |
| Target group | Cluster node group |
Load balancer resource IDs within a Kubernetes cluster
For an Application Load Balancer load balancer deployed according to the `Ingress` configuration, the resource IDs are listed in `IngressGroupStatus`, a custom resource in the Managed Service for Kubernetes cluster. To view them, do the following:
- In the management console, select the folder where you created the required Managed Service for Kubernetes cluster.
- In the list of services, select Managed Service for Kubernetes.
- Select the Managed Service for Kubernetes cluster whose `Ingress` configuration was used to create the load balancer.
- On the Managed Service for Kubernetes cluster page, navigate to the Custom resources tab.
- Select `ingressgroupstatuses.alb.yc.io` and navigate to the Resources tab.
- Select the resource that has your `Ingress` resource group name specified in the `ingress.alb.yc.io/group-name` annotation and navigate to the YAML tab.
- Install kubectl and configure it to work with the new cluster.
- Run this command:

  ```bash
  kubectl describe IngressGroupStatus
  ```