Configuring security groups for Application Load Balancer tools for Managed Service for Kubernetes
For the Ingress controller or Gateway API to work properly, you need to configure security groups for your Yandex Managed Service for Kubernetes cluster and node groups and for the Application Load Balancer load balancer.
You can use different security groups (recommended) or the same group for the cluster, the node groups, and the load balancer.
Within the security groups, you must configure:
- All standard rules described in the relevant documentation sections:
  - For the cluster and node groups: see Configuring security groups in the Managed Service for Kubernetes documentation.
  - For the load balancer: see Security groups. The final rule for outgoing traffic to the VM backends must allow connections to the cluster node group subnets and security groups.
- Backend status check rules that allow:
  - The load balancer to send traffic to cluster nodes on TCP port 10501 (destination: cluster node group subnets or security groups).
  - The node groups to receive this traffic (source: load balancer subnets or security group).
Cluster and node group security groups are specified in their settings. For more information, see the guides below:
- Creating and updating a cluster
- Creating and updating a node group
Security group IDs are specified in:
- The `Ingress` resource, in the `ingress.alb.yc.io/security-groups` annotation. If you create a load balancer for several `Ingress` resources, it is assigned all the security groups specified for these `Ingress` resources.
- The `Gateway` resource, in the `gateway.alb.yc.io/security-groups` annotation.
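As an illustration, an `Ingress` resource might reference its security groups like this. This is a minimal sketch: the resource name, host, service name, and the security group IDs are placeholders; only the `ingress.alb.yc.io/security-groups` annotation key comes from the documentation above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress            # placeholder name
  annotations:
    # Comma-separated list of security group IDs to assign to the
    # load balancer (placeholder IDs; substitute your own).
    ingress.alb.yc.io/security-groups: <sg-id-1>,<sg-id-2>
spec:
  rules:
    - host: demo.example.com    # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # placeholder service
                port:
                  number: 80
```

A `Gateway` resource would carry the analogous `gateway.alb.yc.io/security-groups` annotation in its `metadata.annotations`.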
Example configuration
Let us provide an example for the following conditions:
- You need to deploy a load balancer with a public IP to accept HTTPS traffic, in three subnets with CIDRs `10.128.0.0/24`, `10.129.0.0/24`, and `10.130.0.0/24`, hereafter marked [B].
- When creating the cluster, its CIDR was specified as `10.96.0.0/16` [C], and the service CIDR as `10.112.0.0/16` [S].
- The cluster's node group is located in a subnet with CIDR `10.140.0.0/24` [Nod].
- You can only connect to the nodes over SSH and manage the cluster using the API, `kubectl`, and other utilities from CIDR `203.0.113.0/24` [Con].
Then, you need to create the following rules in the security groups:
- Cluster and node group security group for service traffic:

  Outgoing traffic

  | Port range | Protocol | Destination | CIDR blocks | Description |
  |---|---|---|---|---|
  | All (`0-65535`) | Any | CIDR | `0.0.0.0/0` | For all outgoing traffic |

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | All (`0-65535`) | TCP | Load balancer healthchecks | — | For network load balancer health checks |
  | All (`0-65535`) | Any | Security group | Current (`Self`) | For traffic between the master and nodes |
  | All (`0-65535`) | Any | CIDR | `10.96.0.0/16` [C], `10.112.0.0/16` [S] | For traffic between pods and services |
  | All (`0-65535`) | ICMP | CIDR | `10.0.0.0/8`, `192.168.0.0/16`, `172.16.0.0/12` | For checking the availability of nodes from subnets within Yandex Cloud |

- Node group security group for connecting to services from the Internet:

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | `30000-32767` | TCP | CIDR | `0.0.0.0/0` | For access to services from the Internet and from Yandex Cloud subnets |

- Node group security group for connecting to nodes over SSH:

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | `22` | TCP | CIDR | `203.0.113.0/24` [Con] | For connecting to nodes over SSH |

- Cluster security group for access to the Kubernetes API:

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | `443`, `6443` | TCP | CIDR | `203.0.113.0/24` [Con] | For managing the cluster via the API, `kubectl`, and other utilities |

- Node group security group for backend status checks:

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | `10501` | TCP | CIDR | `10.128.0.0/24`, `10.129.0.0/24`, `10.130.0.0/24` [B] | For backend status checks by the load balancer |

- Load balancer security group:

  Outgoing traffic

  | Port range | Protocol | Destination | CIDR blocks | Description |
  |---|---|---|---|---|
  | All (`0-65535`) | TCP | CIDR | `10.140.0.0/24` [Nod] | For outgoing traffic to nodes, including backend status checks |

  Incoming traffic

  | Port range | Protocol | Source | CIDR blocks | Description |
  |---|---|---|---|---|
  | `80` | TCP | CIDR | `0.0.0.0/0` | For receiving incoming HTTP traffic |
  | `443` | TCP | CIDR | `0.0.0.0/0` | For receiving incoming HTTPS traffic |
  | `30080` | TCP | Load balancer healthchecks | — | For load balancer node status checks |
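The load balancer security group from the last table could be sketched with the Yandex Cloud CLI roughly as follows. This is an assumption-laden sketch, not a definitive command: the group and network names are placeholders, and the exact `--rule` key=value syntax and the `predefined` target name should be checked against `yc vpc security-group create --help` for your CLI version.

```shell
# Sketch only: recreates the load balancer security group rules above.
# Group name, network name, and exact --rule syntax are assumptions.
yc vpc security-group create \
  --name alb-sg \
  --network-name my-network \
  --rule "direction=ingress,port=80,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=443,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=30080,protocol=tcp,predefined=loadbalancer_healthchecks" \
  --rule "direction=egress,from-port=0,to-port=65535,protocol=tcp,v4-cidrs=[10.140.0.0/24]"
```

The first two ingress rules admit HTTP and HTTPS from anywhere, the third admits the load balancer node status checks, and the egress rule allows all outgoing TCP traffic to the node group subnet [Nod], including backend status checks.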