Deploying and load testing a gRPC service with scaling
Use this tutorial to deploy an autoscalable gRPC service in a Managed Service for Kubernetes cluster, expose it through an Application Load Balancer Ingress, and test it under load with Yandex Load Testing.
To deploy the service and perform load testing:
- Prepare your cloud.
- Prepare a test target.
- Prepare a domain.
- Install Ingress.
- Configure horizontal pod autoscaling.
- Load test the gRPC service.
If you no longer need the resources you created, delete them.
Prepare your cloud
- Register a domain name for your website.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups. Also configure the security groups required for Application Load Balancer.
  Warning
  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Prepare your Managed Service for Kubernetes cluster.
- Install Metrics Provider in the kube-public namespace.
- Install ALB Ingress Controller.
- Optionally, install ExternalDNS with a plugin for Yandex Cloud DNS to automatically create a DNS record in Cloud DNS when the Ingress resource is created.
Required paid resources
The infrastructure support costs include:
- Fee for using the Managed Service for Kubernetes master and outgoing traffic (see Managed Service for Kubernetes pricing).
- Fee for using computing resources of the L7 load balancer (see Application Load Balancer pricing).
- Fee for public DNS queries and DNS zones if using Yandex Cloud DNS (see Cloud DNS pricing).
Prepare a test target
This tutorial uses a gRPC service as the test target.
- Save the following specification for creating the application to the grpc-server.yaml file:

### Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-app
  labels:
    app: grpc-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-app
  template:
    metadata:
      name: grpc-app
      labels:
        app: grpc-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - grpc-app
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: grpc-app
          image: cr.yandex/crp6a9o7k9q5rrtt2hoq/grpc-test-server
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "500Mi"
              cpu: "1"
---
### Service.
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  selector:
    app: grpc-app
  type: NodePort
  ports:
    - name: grpc
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30085
- Create the application:
kubectl apply -f grpc-server.yaml
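Optionally, check that the application started before moving on. The commands below assume the resource names from the manifest above (grpc-app, grpc-service):
# Confirm the Deployment has a ready replica and the Service was created.
kubectl get deployment grpc-app
kubectl get pods -l app=grpc-app
kubectl get service grpc-service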
Prepare a domain
- Create a public DNS zone and delegate a domain.
  Note
  For the example.com domain, the zone must be named example.com. (with a dot at the end).
- Add a Let's Encrypt® certificate.
- Check the rights for the domain.
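If you prefer the CLI, the zone and certificate steps might look roughly like the sketch below. The names grpc-zone and grpc-cert and the example.com domain are placeholders, and exact flags may differ depending on your yc CLI version:
# Create a public DNS zone for example.com (note the trailing dot).
yc dns zone create --name grpc-zone --zone example.com. --public-visibility
# Request a managed Let's Encrypt certificate for the domain.
yc certificate-manager certificate request --name grpc-cert --domains example.com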
Install Ingress
- Create an Ingress resource manifest in the ingress.yaml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-demo
  annotations:
    ingress.alb.yc.io/subnets: <subnet_IDs>
    ingress.alb.yc.io/external-ipv4-address: auto
    ingress.alb.yc.io/protocol: grpc
    ingress.alb.yc.io/security-groups: <security_group_ID>
spec:
  tls:
    - hosts:
        - <website_name>
      secretName: yc-certmgr-cert-id-<certificate_ID>
  rules:
    - host: <website_name>
      http:
        paths:
          - pathType: Prefix
            path: "/api.Adder/Add"
            backend:
              service:
                name: grpc-service
                port:
                  number: 80
          - pathType: Prefix
            path: "/grpc.reflection.v1alpha.ServerReflection"
            backend:
              service:
                name: grpc-service
                port:
                  number: 80
Where:
- ingress.alb.yc.io/subnets: Comma-separated list of subnet IDs.
- ingress.alb.yc.io/external-ipv4-address: Public access to Application Load Balancer from the internet. If set to auto, the Ingress controller gets a public IP address automatically. Deleting the Ingress controller also deletes the IP address from the cloud.
- ingress.alb.yc.io/security-groups: ID of the security group you created when preparing your cloud. If security groups are not enabled in your cloud, delete this annotation.
- secretName: Reference to the TLS certificate from Yandex Certificate Manager in yc-certmgr-cert-id-<certificate_ID> format.
- hosts, host: Domain name the TLS certificate corresponds to.
For more information, see Ingress fields and annotations.
- Create the Ingress resource:
kubectl apply -f ingress.yaml
- Check that the resource was created and assigned a public IP address:
kubectl get ingress grpc-demo
Result:
NAME        CLASS    HOSTS            ADDRESS        PORTS     AGE
grpc-demo   <none>   <website_name>   <IP_address>   80, 443   2m
Where:
- <website_name>: Domain name the TLS certificate corresponds to.
- <IP_address>: IP address of the website.
The IP address should appear in the ADDRESS column. If it does not, the load balancer was not created or was created with an error. Check the logs of the yc-alb-ingress-controller-* pod.
- If ExternalDNS with a plugin for Cloud DNS is not installed, create an A record in Cloud DNS pointing to the load balancer's public IP address. If you are using ExternalDNS with a plugin for Cloud DNS, this record will be created automatically.
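Once the DNS record resolves, you can optionally verify that the service is reachable through the load balancer, for example with grpcurl (assumed to be installed). The Ingress above exposes the gRPC reflection service, and grpcurl uses TLS by default:
# List the services exposed via gRPC reflection.
grpcurl <website_name>:443 list
# Call the test method directly (request fields match the ammo used later).
grpcurl -d '{"x": 1, "y": 2}' <website_name>:443 api.Adder/Add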
Configure horizontal pod autoscaling
- Create a file named hpa.yaml with the Horizontal Pod Autoscaler specification:

### HPA.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: grpc-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: grpc-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: "load_balancer.requests_count_per_second"
          selector:
            matchLabels:
              service: "application-load-balancer"
              load_balancer: <load_balancer_ID>
              code: "total"
              backend_group: <backend_group_IDs>
        target:
          type: AverageValue
          averageValue: 2
Where:
- load_balancer: L7 load balancer ID.
- backend_group: Backend group ID.
You can find them in the Application Load Balancer console or by running these commands:
yc alb load-balancer list
yc alb backend-group list
- Create the Horizontal Pod Autoscaler:
kubectl apply -f hpa.yaml
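To confirm the autoscaler picked up the external metric, check its status. The target column may show unknown values until Metrics Provider delivers the first data points:
# Show current and target metric values and the replica count.
kubectl get hpa grpc-app
kubectl describe hpa grpc-app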
Load test the gRPC service
- Create a service account:
  - Create a service account named sa-loadtest in the folder that will host the agent to supply the load.
  - Assign roles to the service account:
    - loadtesting.generatorClient: Enables you to run agents and tests on agents and upload test results to the storage.
    - compute.admin: Enables you to manage a VM in Yandex Compute Cloud.
    - vpc.user: Enables you to connect to Yandex Virtual Private Cloud network resources and use them.
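If you prefer the CLI, a rough sketch of these steps with yc is shown below; <folder_ID> and <service_account_ID> are placeholders you substitute with your own values:
# Create the service account in the folder that will host the agent.
yc iam service-account create --name sa-loadtest --folder-id <folder_ID>
# Grant the required roles on the folder.
yc resource-manager folder add-access-binding <folder_ID> --role loadtesting.generatorClient --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_ID> --role compute.admin --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_ID> --role vpc.user --subject serviceAccount:<service_account_ID>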
- Create and configure a NAT gateway in the subnet where your test target is and where the agent will reside. This will enable the agent to access Yandex Load Testing.
- Create a test agent.
- Prepare a file with test data named ammo.json:
{"tag": "/Add", "call": "api.Adder.Add", "payload": {"x": 21, "y": 12}}
- Prepare the load.yaml configuration file:

phantom:
  enabled: false
  package: yandextank.plugins.Phantom
pandora:
  enabled: true
  package: yandextank.plugins.Pandora
  config_content:
    pools:
      - id: Gun
        gun:
          type: grpc
          target: <your_website_name>:<port>
          tls: true
        ammo:
          type: grpc/json
          file: ammo.json
        result:
          type: phout
          destination: ./phout.log
        rps:
          - duration: 60s
            type: line
            from: 1
            to: 10
        startup:
          - type: once
            times: 1000
    log:
      level: debug
    monitoring:
      expvar:
        enabled: true
        port: 1234
autostop:
  enabled: true
  package: yandextank.plugins.Autostop
  autostop:
    - limit (5m)
uploader:
  enabled: true
  package: yandextank.plugins.DataUploader
  job_name: '[pandora][grpc][tls]'
  job_dsc: ''
  ver: ''
  api_address: loadtesting.api.cloud.yandex.net:443
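Since the TLS certificate is attached to the L7 load balancer, target is typically the domain name from the Ingress step with port 443 (example.com below stands in for <your_website_name>):
        gun:
          type: grpc
          target: example.com:443
          tls: true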
- Create a test:
  - Under Attached files, click Choose files and select the previously saved ammo.json file.
  - Under Test settings:
    - In the Configuration method field, select Config.
    - In the Configuration file field, click Choose files and upload the load.yaml file you prepared.
- Monitor the test:
  - In the management console, select Managed Service for Kubernetes.
  - Select your Managed Service for Kubernetes test cluster.
  - Go to the Workload tab.
  - Monitor the change in the number of application pods as the load increases and decreases.
  - After the test is complete, in the management console, select Application Load Balancer.
  - Select the created L7 load balancer.
  - Go to the Monitoring tab.
  - View the test load chart.
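You can also follow the scaling from the command line while the test is running, for example:
# Watch the autoscaler and the number of application pods change under load.
kubectl get hpa grpc-app --watch
kubectl get pods -l app=grpc-app --watch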
How to delete the resources you created
Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:
- If you set up DNS records in Cloud DNS, delete the DNS zone.
- Delete the L7 load balancer.
- Delete the Managed Service for Kubernetes cluster.
- Delete the route table.
- Delete the NAT gateway.
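If you prefer the CLI, the same cleanup can be done with yc; the resource names below are placeholders for your own:
# Delete the resources you no longer need.
yc dns zone delete <zone_name>
yc alb load-balancer delete <load_balancer_name>
yc managed-kubernetes cluster delete <cluster_name>
yc vpc route-table delete <route_table_name>
yc vpc gateway delete <gateway_name>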