Granting access to an app running in a Kubernetes cluster
To grant access to an app running in a Kubernetes cluster, you can use various types of public and internal services.
To publish an app, use a LoadBalancer type service. The following options are supported:
- Public access by IP address with a network load balancer.
- Access from internal networks by IP address with an internal network load balancer.
The application will be available:
- From Yandex Virtual Private Cloud subnets.
- From the company's internal subnets connected to Yandex Cloud via Yandex Cloud Interconnect.
- Via VPN.
When using an external load balancer, you can specify a static public IP address in the loadBalancerIP field. You need to reserve such an address in advance. When reserving a public IP address, you can enable DDoS protection.
If you do not specify a static IP address, the network load balancer will get a dynamic IP address.
Note
Unlike an IP address of a pod or node, which may change if node group resources are updated, the static IP address of a LoadBalancer type service does not change.
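A static public IP address can be reserved in advance, for example with the Yandex Cloud CLI. This is a minimal sketch; the availability zone below is an assumption, so replace it with the zone you actually use:
  yc vpc address create --external-ipv4 zone=ru-central1-a
The command output contains the reserved address, which you can later specify in the loadBalancerIP field.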
Prepare and run, in the Kubernetes cluster, the application you want to grant access to using a LoadBalancer type service. As an example, this tutorial uses a simple application that responds to HTTP requests on port 8080.
- Create a simple app.
- Create a LoadBalancer type service with a public IP address.
- Create a LoadBalancer type service with an internal IP address.
- Specify the advanced settings.
- Specify node health check parameters.
- (Optional) Create a NetworkPolicy object.
How do I grant access to an app via HTTPS?
See the following documentation:
- Creating a new Kubernetes project in Yandex Cloud
- Configuring a Yandex Application Load Balancer L7 load balancer using an Ingress controller
- Installing an NGINX Ingress controller with a Let's Encrypt® certificate manager
- Installing an NGINX Ingress controller with a Yandex Certificate Manager certificate
If you no longer need the resources you created, delete them.
Getting started
Prepare the required infrastructure:
Manually
- Create a cloud network and subnet.
- Create a service account with the editor role.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.
  Warning
  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a Managed Service for Kubernetes cluster and a node group with public internet access and the security groups you prepared earlier.
Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it. A minimal sketch of the credentials and provider configuration is given after this list.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-load-balancer.tf Managed Service for Kubernetes cluster configuration file to the same working directory. The file describes:
  - The Managed Service for Kubernetes cluster.
  - The service account required for the Managed Service for Kubernetes cluster and node group.
  - Security groups containing the rules required for the Managed Service for Kubernetes cluster and its node groups.
  Warning
  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Specify the following in the configuration file:
  - Folder ID.
  - Kubernetes version for the Managed Service for Kubernetes cluster and node groups.
  - Name of the Managed Service for Kubernetes cluster service account.
- Check that the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view the planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a test step; no resources are created yet.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
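As mentioned above, here is a minimal sketch of the authentication credentials and provider configuration; the token, cloud ID, folder ID, and default zone are placeholders to replace with your own values:
  # Authentication credentials as environment variables (shell):
  export YC_TOKEN=<OAuth_or_IAM_token>
  export YC_CLOUD_ID=<cloud_ID>
  export YC_FOLDER_ID=<folder_ID>

  # Provider configuration file, e.g. provider.tf:
  terraform {
    required_providers {
      yandex = {
        source = "yandex-cloud/yandex"
      }
    }
  }

  provider "yandex" {
    zone = "ru-central1-a"
  }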
Create a simple app
- Save the following app creation specification to a YAML file named hello.yaml. Deployment is the Kubernetes API object that manages a replicated application.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: hello
    template:
      metadata:
        labels:
          app: hello
      spec:
        containers:
          - name: hello-app
            image: cr.yandex/crpjd37scfv653nl11i9/hello:1.1
- Create an app:
  CLI
  If you do not have the Yandex Cloud command line interface yet, install and initialize it.
  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
  kubectl apply -f hello.yaml
Result:
deployment.apps/hello created
- View information about the created app:
  CLI
  kubectl describe deployment hello
  Result:
  Name:                   hello
  Namespace:              default
  CreationTimestamp:      Wed, 28 Oct 2020 23:15:25 +0300
  Labels:                 <none>
  Annotations:            deployment.kubernetes.io/revision: 1
  Selector:               app=hello
  Replicas:               2 desired | 2 updated | 2 total | 1 available | 1 unavailable
  StrategyType:           RollingUpdate
  MinReadySeconds:        0
  RollingUpdateStrategy:  25% max unavailable, 25% max surge
  Pod Template:
    Labels:  app=hello
    Containers:
     hello-app:
      Image:        cr.yandex/crpjd37scfv653nl11i9/hello:1.1
      Port:         <none>
      Host Port:    <none>
      Environment:  <none>
      Mounts:       <none>
    Volumes:        <none>
  Conditions:
    Type           Status  Reason
    ----           ------  ------
    Available      False   MinimumReplicasUnavailable
    Progressing    True    ReplicaSetUpdated
  OldReplicaSets:  <none>
  NewReplicaSet:   hello-******** (2/2 replicas created)
  Events:
    Type    Reason             Age   From                   Message
    ----    ------             ----  ----                   -------
    Normal  ScalingReplicaSet  10s   deployment-controller  Scaled up replica set hello-******** to 2
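You can additionally check that the application pods have started. This is an optional check using a standard kubectl command with the app=hello label from the specification above:
  kubectl get pods -l app=hello
Both replicas should eventually be listed in the Running status.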
Create a LoadBalancer type service with a public IP address
When you create a LoadBalancer type service, the Yandex Cloud controller creates and configures a network load balancer with a public IP address in your folder for you.
Warning
- You will be charged for the network load balancer you created based on the pricing rules.
- Do not modify or delete the network load balancer and the target groups that are automatically created in your folder after you create a LoadBalancer type service.
- Save the following specification for creating a LoadBalancer type service to a YAML file named load-balancer.yaml:
  apiVersion: v1
  kind: Service
  metadata:
    name: hello
  spec:
    type: LoadBalancer
    ports:
      - port: 80
        name: plaintext
        targetPort: 8080
    # Kubernetes labels of the selector: the labels used in the pod template when creating the Deployment object.
    selector:
      app: hello
  For more information, see the Service resource reference for Yandex Network Load Balancer.
- Create a network load balancer:
  CLI
  kubectl apply -f load-balancer.yaml
  Result:
  service/hello created
- View information about the created network load balancer:
  Management console
  - In the management console, select your default folder.
  - Select Network Load Balancer.
  - The Load balancers tab shows the network load balancer with the k8s prefix in its name and the unique ID of your Kubernetes cluster in its description.
  CLI
  kubectl describe service hello
Result:
  Name:                     hello
  Namespace:                default
  Labels:                   <none>
  Annotations:              <none>
  Selector:                 app=hello
  Type:                     LoadBalancer
  IP:                       172.20.169.7
  LoadBalancer Ingress:     130.193.50.111
  Port:                     plaintext  80/TCP
  TargetPort:               8080/TCP
  NodePort:                 plaintext  32302/TCP
  Endpoints:                10.1.130.4:8080
  Session Affinity:         None
  External Traffic Policy:  Cluster
  Events:
    Type    Reason                Age    From                Message
    ----    ------                ----   ----                -------
    Normal  EnsuringLoadBalancer  2m43s  service-controller  Ensuring load balancer
    Normal  EnsuredLoadBalancer   2m17s  service-controller  Ensured load balancer
- Make sure the application is available from the internet:
  CLI
  curl http://130.193.50.111
  Where 130.193.50.111 is the public IP address from the LoadBalancer Ingress field.
  Result:
  Hello, world! Running in 'hello-********'
Create a LoadBalancer type service with an internal IP address
- Edit the specification in the load-balancer.yaml file:
  apiVersion: v1
  kind: Service
  metadata:
    name: hello
    annotations:
      # Load balancer type.
      yandex.cloud/load-balancer-type: internal
      # ID of the subnet for the internal network load balancer.
      yandex.cloud/subnet-id: e1b23q26ab1c********
  spec:
    type: LoadBalancer
    ports:
      - port: 80
        name: plaintext
        targetPort: 8080
    # Kubernetes labels of the selector: the labels used in the pod template when creating the Deployment object.
    selector:
      app: hello
  For more information, see the Service resource reference for Yandex Network Load Balancer.
- Delete the network load balancer you created earlier:
  CLI
  kubectl delete service hello
  Result:
  service "hello" deleted
- Create an internal network load balancer:
  CLI
  kubectl apply -f load-balancer.yaml
  Result:
  service/hello created
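To find out which IP address the internal load balancer received, you can, for example, query the service once the internal network load balancer has been created; the assigned address appears in the EXTERNAL-IP column of the output:
  kubectl get service hello
The application will then be available at this address from the subnets listed at the beginning of this tutorial.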
Configure the advanced settings
In Managed Service for Kubernetes, you can specify the following additional parameters for a LoadBalancer type service:
- loadBalancerIP: Public (static) IP address you reserved in advance.
- externalTrafficPolicy: Traffic management policy. With Cluster, traffic may be routed to any cluster node; with Local, traffic goes only to the nodes running the application's pods.
Example
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: plaintext
      targetPort: 8080
  selector:
    app: hello
  loadBalancerIP: 159.161.32.22
  externalTrafficPolicy: Cluster
For more information, see the Service resource reference for Yandex Network Load Balancer.
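To put the advanced settings into effect, apply the edited specification in the same way as before; this assumes the example above is saved to the load-balancer.yaml file used earlier:
  kubectl apply -f load-balancer.yaml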
Specify node health check parameters
LoadBalancer type services in Managed Service for Kubernetes can run health check requests against the target group of Kubernetes nodes. Based on the metrics it receives, Managed Service for Kubernetes decides whether the nodes are available.
To enable node health check mode, specify the yandex.cloud/load-balancer-healthcheck-* annotations in the service specification, for example:
apiVersion: v1
kind: Service
metadata:
  name: hello
  annotations:
    # Node health check parameters.
    yandex.cloud/load-balancer-healthcheck-healthy-threshold: "2"
    yandex.cloud/load-balancer-healthcheck-interval: "2s"
For more information, see the Service resource reference for Yandex Network Load Balancer.
Create a NetworkPolicy object
To connect to services published via Network Load Balancer from particular IP addresses only, enable network policies in the cluster. To set up access via the load balancer, create a NetworkPolicy object with the Ingress policy type.
NetworkPolicy object configuration example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: whitelist-netpol
  namespace: ns-example
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Address ranges used by the load balancer to health check nodes.
        - ipBlock:
            cidr: 198.18.235.0/24
        - ipBlock:
            cidr: 198.18.248.0/24
        # Pod address ranges.
        - ipBlock:
            cidr: 172.16.1.0/12
        - ipBlock:
            cidr: 172.16.2.0/12
For more information, see the NetworkPolicy resource reference for Yandex Network Load Balancer.
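As with the other specifications in this tutorial, you can save the policy to a file and apply it with kubectl; the file name below is arbitrary:
  kubectl apply -f network-policy.yaml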
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the resources depending on how they were created:
  Manually
  Terraform
  - In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
  - Delete resources:
    - Run this command:
      terraform destroy
    - Confirm deleting the resources and wait for the operation to complete.
    All the resources described in the Terraform manifests will be deleted.
- If you used static public IP addresses to access the Managed Service for Kubernetes cluster or nodes, release and delete them.
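For example, a reserved static address can be deleted with the Yandex Cloud CLI; this is a sketch, and the placeholder stands for the name or ID of the address you reserved:
  yc vpc address delete <address_name_or_ID>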