How the Application Load Balancer Ingress controller works
The Application Load Balancer Ingress controller for Managed Service for Kubernetes has two pods:

- The primary `yc-alb-ingress-controller-*` pod creates and updates Application Load Balancer resources. You can use its logs to follow the operations on these resources.
- The `yc-alb-ingress-controller-hc-*` health check pod runs a container that receives health check requests from the L7 load balancer on TCP port 10501 and checks the health of the kube-proxy pods on each cluster node. If kube-proxy is healthy, then, even if an application in a particular pod does not respond, Kubernetes redirects traffic to a different pod with that application or to a different node. This is the default health check workflow for the Application Load Balancer Ingress controller. You can also configure your own health checks in the HttpBackendGroup resource parameters.
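For example, you can follow the primary pod's operations with standard kubectl commands. The namespace and pod name below are placeholders; substitute the ones used in your installation:

```bash
# List the Ingress controller pods; substitute the namespace chosen during installation.
kubectl get pods -n <controller_namespace>

# Stream the primary pod's logs to follow operations on Application Load Balancer resources.
kubectl logs -n <controller_namespace> <yc-alb-ingress-controller_pod_name> --follow
```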
Warning
Do not manually update Application Load Balancer resources created by the controller's primary pod. Any changes you make will be rolled back automatically. Use the standard Managed Service for Kubernetes cluster control methods instead.
The primary pod manages the Application Load Balancer resource architecture using the following principles:
- Load balancers and HTTP routers that accept and distribute traffic to backend groups are created based on Ingress resources.

  If several `Ingress` resources have the same `ingress.alb.yc.io/group-name` annotation value, they are combined into a single load balancer (see the example manifest after this list).
- For a load balancer to accept HTTPS traffic, the `spec.tls` field in the `Ingress` description must specify the domain names and the certificate IDs from Certificate Manager:

  ```yaml
  spec:
    tls:
      - hosts:
          - <domain_name>
        secretName: yc-certmgr-cert-id-<certificate_ID>
  ```

  Here, `secretName` is the reference to the certificate from Yandex Certificate Manager.

  This will create two types of listeners for the load balancer: some will accept HTTPS traffic on port `443`, while the others will redirect HTTP requests (port `80`) to HTTPS with the `301 Moved Permanently` status code. Traffic distribution rules for the same domain names explicitly specified in other `Ingress` resources without the `spec.tls` field will be prioritized over HTTP-to-HTTPS redirects.

  If a certificate is not added to Certificate Manager yet, specify a Kubernetes secret containing the certificate in the `secretName` field. The Application Load Balancer Ingress controller will automatically add the certificate to Certificate Manager.
- If there is no `spec.tls` field in the `Ingress` description, only listeners for incoming HTTP traffic on port `80` will be created for the load balancer.
- You can create backend groups to process incoming traffic:

  - Based on Kubernetes services referenced in `Ingress` rules directly. This method is useful if you need to bind a simple backend group consisting of a single service to a route.

    In ALB Ingress Controller versions prior to 0.2.0, each backend group corresponds to a combination of the `host`, `http.paths.path`, and `http.paths.pathType` parameters. In versions 0.2.0 and later, the backend group corresponds to the `backend.service` parameter. This may cause collisions when updating the ALB Ingress Controller. To avoid them, find out whether upgrade restrictions apply to your infrastructure.
  - Based on HttpBackendGroup resources that support explicit backend group descriptions. These are custom resources from the `alb.yc.io` API group provided by the Ingress controller.

    Refer to an `HttpBackendGroup` in the `Ingress` rules the same way as to a service, via `spec.rules[*].http.paths[*].backend.resource` (see the sketch after this list).

    Using `HttpBackendGroup` makes extended Application Load Balancer functionality available: such a group may have Kubernetes services or Yandex Object Storage buckets as backends, and you can specify relative backend weights to distribute traffic among them proportionally.
- Services referenced in `Ingress` or `HttpBackendGroup` resources are deployed to the backends. They can be configured using Service resources.

  Warning

  The Kubernetes services used as backends (whether specified in the `Ingress` rules directly or in an `HttpBackendGroup`) must be of the `NodePort` type; an example is included in the manifest sketch after this list. For more details on this type, see the Kubernetes documentation.
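To illustrate these principles, below is a minimal sketch of an `Ingress` paired with a `NodePort` service. The resource names, labels, and placeholders are hypothetical; the `ingress.alb.yc.io/group-name` annotation and the `yc-certmgr-cert-id-<certificate_ID>` secret name format are the ones described above. Depending on your setup, the controller may require additional annotations (for example, for subnets or the external address), so check the Ingress controller reference before applying anything like this.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-demo                          # hypothetical name
  annotations:
    # Ingress resources sharing this value are combined into one load balancer.
    ingress.alb.yc.io/group-name: my-alb-group
spec:
  tls:
    - hosts:
        - <domain_name>
      # Reference to a certificate in Yandex Certificate Manager.
      secretName: yc-certmgr-cert-id-<certificate_ID>
  rules:
    - host: <domain_name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alb-demo-service    # must be a NodePort service
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: alb-demo-service
spec:
  type: NodePort                          # required for services used as ALB backends
  selector:
    app: alb-demo                         # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```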
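Similarly, here is a sketch of routing through an `HttpBackendGroup` with weighted backends, as referenced above. The `apiVersion`, the backend field layout, and the Object Storage bucket reference are assumptions based on the `HttpBackendGroup` format of recent controller versions; verify them against the reference for the version you have installed.

```yaml
apiVersion: alb.yc.io/v1alpha1            # API group provided by the controller; the version is an assumption
kind: HttpBackendGroup
metadata:
  name: example-backend-group
spec:
  backends:
    - name: app-backend
      weight: 70                          # relative weight for proportional traffic distribution
      service:
        name: alb-demo-service            # NodePort service from the previous sketch
        port:
          number: 80
    - name: bucket-backend
      weight: 30
      storageBucket:
        name: <bucket_name>               # Yandex Object Storage bucket used as a backend
---
# The Ingress rule references the backend group through backend.resource.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-demo-group-routing            # hypothetical name
  annotations:
    ingress.alb.yc.io/group-name: my-alb-group
spec:
  rules:
    - host: <domain_name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              resource:
                apiGroup: alb.yc.io
                kind: HttpBackendGroup
                name: example-backend-group
```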
Mapping between Application Load Balancer and Kubernetes resources
| Application Load Balancer | Kubernetes |
|---|---|
| Load balancer | `Ingress` resources with identical `ingress.alb.yc.io/group-name` annotation values |
| HTTP router virtual hosts | `Ingress.spec.rules` |
| Virtual host routes | `Ingress.spec.rules[*].http.paths` |
| Backend group | `HttpBackendGroup` or services |
| Target group | Cluster node group |
IDs of load balancer resources in a Kubernetes cluster
The IDs of the Application Load Balancer resources deployed from the `Ingress` configuration are listed in the custom `IngressGroupStatus` resource of the Managed Service for Kubernetes cluster. To view them:
- In the management console, select the folder where the required Managed Service for Kubernetes cluster was created.
- In the list of services, select Managed Service for Kubernetes.
- Select the Managed Service for Kubernetes cluster whose `Ingress` configuration was used to create the load balancer.
- On the Managed Service for Kubernetes cluster page, go to the Custom resources tab.
- Select `ingressgroupstatuses.alb.yc.io` and go to the Resources tab.
- Select the resource with the `Ingress` resource group name specified in the `ingress.alb.yc.io/group-name` annotation and go to the YAML tab.
Alternatively, you can view them with kubectl:

- Install kubectl and configure it to work with the created cluster.
- Run this command:

  ```bash
  kubectl describe IngressGroupStatus
  ```