Kubernetes cluster network policies
Kubernetes network policies let you configure network interactions between groups of pods and network endpoints. You can create network policies using the Kubernetes Network Policy API.
To manage network policies, Managed Service for Kubernetes uses the Calico network policy controller. The Calico network controller uses the iptables tool.
Warning
You can enable network policies only when creating a cluster.
Integration with load balancers
Warning
Due to the Yandex Cloud architecture, you cannot use the loadBalancerSourceRanges parameter in Managed Service for Kubernetes when setting up network policy controllers. To allow traffic via the Yandex Network Load Balancer or Yandex Application Load Balancer, use a NetworkPolicy resource.
For step-by-step instructions on how to set up access to an application using NetworkPolicy, see Granting access to an app running in a Kubernetes cluster.
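As a sketch of this approach, a NetworkPolicy can admit load balancer traffic by allowing ingress from specific CIDR blocks. The pod label, port, and address range below are illustrative placeholders, not the actual ranges used by Yandex Cloud load balancers:

```yaml
# Illustrative sketch: the label, port, and CIDR are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-lb-traffic
spec:
  podSelector:
    matchLabels:
      app: my-app                # assumed label of the application pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 198.51.100.0/24    # placeholder for the load balancer traffic range
    ports:
    - protocol: TCP
      port: 8080                 # assumed application port
```

Substitute your own pod selector, port, and the address ranges your load balancer actually uses.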
Calico
Calico enables you to configure basic network security policies for a cluster.
Step-by-step configuration instructions are provided at Configuring the Calico network policy controller.
Cilium
Unlike Calico, the Cilium controller has broader capabilities and enables you to:
- Use the same subnet ranges for pods and services in different clusters.
- Create more functional network policies, for example, by filtering pod-to-pod traffic at the application layer (L7) or by the DNS name of an external resource.
- Use the built-in Hubble tool to monitor network events.
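For instance, an L7 rule of the kind described above can be expressed with Cilium's own CiliumNetworkPolicy resource. The labels and path in this sketch are illustrative assumptions:

```yaml
# Illustrative sketch: labels and path are placeholders.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-only
spec:
  endpointSelector:
    matchLabels:
      app: backend               # assumed label of the target pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend            # assumed label of the allowed client pods
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET            # L7 filtering: only HTTP GET requests pass
          path: /public/.*       # path is matched as a regular expression
```

A plain Kubernetes NetworkPolicy cannot express the HTTP method and path restrictions shown here; that is what the L7 capability adds.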
In a Managed Service for Kubernetes cluster, Cilium operates in tunneling mode.
Cilium tunneling mode helps:
- Create clusters with overlapping IP addresses on the same network.
- Use an extended address range of up to /8 for pods and cluster services.
- Create twice as many cluster nodes (as compared to Calico).
To use tunneling mode, a service account requires the k8s.tunnelClusters.agent role.
Allow-all and deny-all network policies
Warning
Use these policies for debugging only; for production tasks, configure granular policies that address your specific needs.
Policies for incoming connections
You can create a policy that allows all incoming connections to all pods in a namespace. If such a policy is present, no other policy can ban incoming connections to these pods.
Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
You can create a policy that bans all incoming connections to all pods in a namespace. Such a policy ensures that all incoming connections will be banned for pods not selected by any other network policy.
Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Policies for outgoing connections
You can create a policy that allows all outgoing connections from all pods in a namespace. If such a policy is present, no other policy can ban outgoing connections from these pods.
Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
You can create a policy that bans all outgoing connections from all pods in a namespace. Such a policy ensures that all outgoing connections will be banned for pods not selected by any other network policy.
Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
For more information about NetworkPolicy, see Fields and annotations of the NetworkPolicy resource.
Cluster requirements to enable network policies
To enable network policies in a Kubernetes cluster, its node groups must have sufficient resources: network policies consume additional memory and vCPU.
We recommend enabling a network policy controller only in clusters with at least two nodes.