Configuring security groups
Security groups operate on the principle that all traffic not explicitly allowed is prohibited. For a cluster to run, you need to create rules in its security groups that allow:
- Service traffic within the cluster.
- Connections to services from the internet.
- Connections to nodes over SSH.
- Access to the Kubernetes API.
Note
We recommend creating a separate security group for each of the rule sets listed above.
You can also define more specific rules, e.g., to allow traffic only from particular subnets; a minimal sketch follows the alert below.
Security groups must be configured correctly for all subnets that will host the cluster: this affects the performance and availability of the cluster and the services running in it.
Before editing a security group or any of its rules, make sure the change will not disrupt the cluster or its node groups.
Alert
Do not delete security groups attached to a running cluster or node group as this may disrupt their operation and result in a loss of data.
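As mentioned in the note above, rules can be made more specific. The sketch below narrows an ingress rule to a single subnet instead of opening it to any address; the group name, port, and CIDR block are placeholders used only for illustration and are not part of the example cluster configured later in this section.
resource "yandex_vpc_security_group" "restricted-ingress-example" {
  name        = "restricted-ingress-example"
  description = "Hypothetical group: allows HTTPS only from one internal subnet instead of 0.0.0.0/0."
  network_id  = "<cloud_network_ID>"
  ingress {
    protocol       = "TCP"
    description    = "Placeholder rule: incoming traffic to port 443 is allowed only from 192.168.10.0/24."
    v4_cidr_blocks = ["192.168.10.0/24"]
    port           = 443
  }
}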
Creating rules for service traffic
Warning
Rules for service traffic are required for a regional cluster to work.
For the cluster to run properly, create rules for both incoming and outgoing traffic and apply them to the cluster and the node groups:
- Add rules for incoming traffic.
  - For a network load balancer:
    - Port range: 0-65535
    - Protocol: TCP
    - Source: Load balancer healthchecks
  - To transfer service traffic between the master and the nodes:
    - Port range: 0-65535
    - Protocol: Any
    - Source: Security group
    - Security group: Current (Self)
  - To transfer traffic between pods and services:
    - Port range: 0-65535
    - Protocol: Any
    - Source: CIDR
    - CIDR blocks: Specify the IP address ranges of the subnets created along with the cluster, e.g., 10.96.0.0/16 or 10.112.0.0/16.
  - To test the nodes using ICMP requests from subnets within Yandex Cloud:
    - Protocol: ICMP
    - Source: CIDR
    - CIDR blocks: IP address ranges of the subnets within Yandex Cloud from which the cluster will be diagnosed, e.g., 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12
- Add a rule for outgoing traffic that allows cluster hosts to connect to external resources, e.g., to download images from Docker Hub or work with Yandex Object Storage:
  - Port range: 0-65535
  - Protocol: Any
  - Destination: CIDR
  - CIDR blocks: 0.0.0.0/0
For information on how to configure security groups for the L7 load balancer, see Configuring security groups for Application Load Balancer tools for Managed Service for Kubernetes.
Creating a rule for connecting to services from the internet
To make the services running on the nodes accessible from the internet and from subnets within Yandex Cloud, create a rule for incoming traffic and apply it to the node group:
- Port range: 30000-32767
- Protocol: TCP
- Source: CIDR
- CIDR blocks: 0.0.0.0/0
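If you only need to expose a few specific services, you can open individual NodePort values instead of the whole 30000-32767 range. Below is a minimal sketch under that assumption; the group name and port 30080 are placeholders and are not part of the example cluster configured later in this section.
resource "yandex_vpc_security_group" "k8s-single-nodeport" {
  name        = "k8s-single-nodeport"
  description = "Hypothetical variant of the public-services group: exposes one NodePort instead of the full range."
  network_id  = "<cloud_network_ID>"
  ingress {
    protocol       = "TCP"
    description    = "Allow incoming traffic from the internet to node port 30080 only (placeholder port)."
    v4_cidr_blocks = ["0.0.0.0/0"]
    port           = 30080
  }
}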
Creating a rule for connecting to nodes via SSH
To access the nodes via SSH, create a rule for incoming traffic and apply it to the node group:
- Port range: 22
- Protocol: TCP
- Source: CIDR
- CIDR blocks: IP address ranges of the subnets within Yandex Cloud and the public IP addresses of computers on the internet, for example: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, 85.32.32.22/32
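In Terraform, the same rule could look like the sketch below. Unlike the k8s-nodes-ssh-access group in the full example at the end of this section, which allows a single public IP, this sketch lists both internal ranges and a public address, matching the list above; all CIDR blocks are examples only.
resource "yandex_vpc_security_group" "k8s-ssh-example" {
  name        = "k8s-ssh-example"
  description = "Example variant of the SSH access group with several allowed source ranges."
  network_id  = "<cloud_network_ID>"
  ingress {
    protocol       = "TCP"
    description    = "Allow SSH connections to nodes from internal subnets and one public IP (example ranges)."
    v4_cidr_blocks = ["10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12", "85.32.32.22/32"]
    port           = 22
  }
}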
Creating rules to access the Kubernetes API
To access the Kubernetes API and manage clusters using kubectl and other utilities, you need rules that allow connections to the master via ports 6443 and 443. Create two rules for incoming traffic, one rule per port, and apply them to the cluster:
- Port range: 443, 6443
- Protocol: TCP
- Source: CIDR
- CIDR blocks: Specify the IP address ranges of the subnets from which you will manage the cluster, for example:
  - 85.23.23.22/32: for the external network
  - 192.168.0.0/24: for the internal network
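A minimal Terraform sketch of these two rules, using the example addresses from the list above (note that the full example at the end of this section uses a different range, 203.0.113.0/24):
resource "yandex_vpc_security_group" "k8s-api-example" {
  name        = "k8s-api-example"
  description = "Example variant of the master whitelist group: allows Kubernetes API access from an external address and an internal subnet."
  network_id  = "<cloud_network_ID>"
  ingress {
    protocol       = "TCP"
    description    = "Allow connections to the Kubernetes API via port 6443 from the example networks."
    v4_cidr_blocks = ["85.23.23.22/32", "192.168.0.0/24"]
    port           = 6443
  }
  ingress {
    protocol       = "TCP"
    description    = "Allow connections to the Kubernetes API via port 443 from the example networks."
    v4_cidr_blocks = ["85.23.23.22/32", "192.168.0.0/24"]
    port           = 443
  }
}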
Examples
For example, you need to create rules for an existing Kubernetes cluster:
- With the zonal master located in the ru-central1-a availability zone.
- With the worker-nodes-c node group.
- With the address ranges for pods and services: 10.96.0.0/16 and 10.112.0.0/16.
- With access to services:
  - From the load balancer's address ranges 198.18.235.0/24 and 198.18.248.0/24.
  - From the internal subnets 172.16.0.0/12, 10.0.0.0/8, and 192.168.0.0/16 over ICMP.
  - From the internet, from any address (0.0.0.0/0), to the NodePort range (30000-32767).
- With access to nodes from the internet, from the address 85.32.32.22/32, to port 22.
- With access to the Kubernetes API from the external address range 203.0.113.0/24 via ports 443 and 6443.
Four security groups are created:
- k8s-main-sg: Rules for service traffic.
- k8s-public-services: Rules for connecting to services from the internet.
- k8s-nodes-ssh-access: Rules for connecting to nodes over SSH.
- k8s-master-whitelist: Rules for accessing the cluster API.
Configuration file for this cluster:
terraform {
required_providers {
yandex = {
source = "yandex-cloud/yandex"
}
}
}
provider "yandex" {
token = "<service_account_OAuth_or_static_key>"
cloud_id = "<cloud_ID>"
folder_id = "<folder_ID>"
zone = "<availability_zone>"
}
resource "yandex_vpc_security_group" "k8s-main-sg" {
name = "k8s-main-sg"
description = "The group rules ensure the basic operation of the cluster. Apply them to the cluster and node groups."
network_id = "<cloud_network_ID>"
ingress {
protocol = "TCP"
description = "The rule allows availability checks from the load balancer address range. It is required for the operation of a fault-tolerant cluster and load balancer services."
predefined_target = "loadbalancer_healthchecks"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "The rule allows master to node and node to node communication inside a security group."
predefined_target = "self_security_group"
from_port = 0
to_port = 65535
}
ingress {
protocol = "ANY"
description = "Rule allows pod-pod and service-service communication. Specify the subnets of your cluster and services."
v4_cidr_blocks = ["10.96.0.0/16", "10.112.0.0/16"]
from_port = 0
to_port = 65535
}
ingress {
protocol = "ICMP"
description = "Rule allows debugging ICMP packets from internal subnets."
v4_cidr_blocks = ["172.16.0.0/12", "10.0.0.0/8", "192.168.0.0/16"]
}
egress {
protocol = "ANY"
description = "Rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Object Storage, Docker Hub, and so on."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 65535
}
}
resource "yandex_vpc_security_group" "k8s-public-services" {
name = "k8s-public-services"
description = "Group rules allow connections to services from the internet. Apply the rules only for node groups."
network_id = "<cloud_network_ID>"
ingress {
protocol = "TCP"
description = "Rule allows incoming traffic from the internet to the NodePort port range. Add ports or change existing ones to the required ports."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 30000
to_port = 32767
}
}
resource "yandex_vpc_security_group" "k8s-nodes-ssh-access" {
name = "k8s-nodes-ssh-access"
description = "Group rules allow connections to cluster nodes over SSH. Apply the rules only for node groups."
network_id = "<cloud_network_ID>"
ingress {
protocol = "TCP"
description = "Rule allows connections to nodes over SSH from specified IPs."
v4_cidr_blocks = ["85.32.32.22/32"]
port = 22
}
}
resource "yandex_vpc_security_group" "k8s-master-whitelist" {
name = "k8s-master-whitelist"
description = "Group rules allow access to the Kubernetes API from the internet. Apply the rules to the cluster only."
network_id = "<cloud_network_ID>"
ingress {
protocol = "TCP"
description = "Rule allows connections to the Kubernetes API via port 6443 from a specified network."
v4_cidr_blocks = ["203.0.113.0/24"]
port = 6443
}
ingress {
protocol = "TCP"
description = "Rule allows connections to the Kubernetes API via port 443 from a specified network."
v4_cidr_blocks = ["203.0.113.0/24"]
port = 443
}
}
resource "yandex_kubernetes_cluster" "k8s-cluster" {
name = "k8s-cluster"
cluster_ipv4_range = "10.96.0.0/16"
service_ipv4_range = "10.112.0.0/16"
...
master {
version = "1.20"
master_location {
zone = "ru-central1-a"
subnet_id = "<cloud_subnet_ID>"
}
security_group_ids = [
yandex_vpc_security_group.k8s-main-sg.id,
yandex_vpc_security_group.k8s-master-whitelist.id
]
...
}
...
}
resource "yandex_kubernetes_node_group" "worker-nodes-c" {
cluster_id = yandex_kubernetes_cluster.k8s-cluster.id
name = "worker-nodes-c"
version = "1.20"
...
instance_template {
platform_id = "standard-v3"
network_interface {
nat = true
subnet_ids = ["<cloud_subnet_ID>"]
security_group_ids = [
yandex_vpc_security_group.k8s-main-sg.id,
yandex_vpc_security_group.k8s-nodes-ssh-access.id,
yandex_vpc_security_group.k8s-public-services.id
]
...
}
...
}
}