Using Istio
To view Istio usage options:
- Install Istio.
- Install a test application.
- View a service network diagram on the Kiali dashboard.
- Route service requests.
- Simulate a service failure.
- Redistribute traffic.
- Set authentication mode using mutual TLS.
- View Istio metrics on the Prometheus dashboard.
- View Istio metrics on the Grafana dashboard.
If you no longer need the resources you created, delete them.
Getting started
- Create a Kubernetes cluster and a group of nodes.

  Manually

  - If you do not have a network yet, create one.

  - If you do not have any subnets yet, create them in the availability zones where your Kubernetes cluster and node group will be created.

  - Create a service account with the k8s.clusters.agent and vpc.publicAdmin roles for the folder where the Kubernetes cluster is created. This service account will be used to create the resources required for the Kubernetes cluster.

  - Create a service account with the container-registry.images.puller role. Nodes will pull the required Docker images from the registry on behalf of this account.

    Tip

    You can use the same service account to manage your Kubernetes cluster and its node groups.

  - Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  - Create a Kubernetes cluster and a node group with at least 6 GB of RAM and the security groups created earlier.

  Terraform

  - If you do not have Terraform yet, install it.

  - Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  - Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

  - Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  - Download the k8s-cluster.tf cluster configuration file to the same working directory. The file describes:

    - The Kubernetes cluster.
    - The service account required to use the Managed Service for Kubernetes cluster and node group.
    - The security groups with the rules required for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  - In k8s-cluster.tf, specify:

    - The folder ID.
    - The Kubernetes version for the Kubernetes cluster and node groups.
    - At least 6 GB of RAM for your node group. The value must be a multiple of the number of vCPUs.
    - The Kubernetes cluster CIDR.
    - The name of the Managed Service for Kubernetes cluster service account.

  - Make sure the Terraform configuration files are correct using this command:

    terraform validate

    If there are any errors in the configuration files, Terraform will point them out.

  - Create the required infrastructure:

    - Run this command to view the planned changes:

      terraform plan

      If the resource configuration descriptions are correct, the terminal will display a list of the resources to create and their parameters. This is a test step; no resources are created.

    - If you are happy with the planned changes, apply them:

      - Run this command:

        terraform apply

      - Confirm the creation of resources.

      - Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

- Install kubectl and configure it to work with the created cluster.
Install Istio
- Install Istio from the Yandex Cloud Marketplace application catalog. When installing the application:

  - Create a new namespace called istio-system.
  - Install add-ons for Istio: Kiali, Prometheus, Grafana, Loki, and Jaeger.

- Make sure that all the pods of Istio and its add-ons have changed their status to Running:

  kubectl get pods -n istio-system
Result:
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-75c6d4fcf7-v4sfp                1/1     Running   0          2h
istio-ingressgateway-6496999d57-hdbnf   1/1     Running   0          2h
istiod-665dbb97c9-s6xxk                 1/1     Running   0          2h
jaeger-5468d9c886-x2bq8                 1/1     Running   0          2h
kiali-6854cc8574-26t65                  1/1     Running   0          2h
loki-0                                  1/1     Running   0          2h
prometheus-54f86f6676-vmqqr             2/2     Running   0          2h
Install a test application
- Create a new namespace called todoapp:

  kubectl create namespace todoapp

- Add the istio-injection label to the todoapp namespace so that Istio automatically injects sidecar proxies into the application pods:

  kubectl label namespace todoapp istio-injection=enabled
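  The same label can also be set declaratively. As a minimal sketch (illustrative only, not one of this tutorial's files), the namespace manifest would carry the standard istio-injection label:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: todoapp
    labels:
      # Tells Istio's sidecar injection webhook to add a proxy
      # container to every pod created in this namespace.
      istio-injection: enabled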
- Install a test application named todoapp:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/todoapp.yaml -n todoapp
Result:
deployment.apps/todoapp-v1 created
deployment.apps/todoapp-v2 created
deployment.apps/recommender-v1 created
deployment.apps/todoapp-redis-v1 created
service/todoapp created
service/recommender created
service/todoapp-redis created
- Check the pod status:

  kubectl get pods -n todoapp
Result:
NAME                              READY   STATUS    RESTARTS   AGE
recommender-v1-7865c4cfbb-hsp2k   2/2     Running   0          60s
recommender-v1-7865c4cfbb-vqt68   2/2     Running   0          59s
todoapp-redis-v1-dbdf4d44-48952   2/2     Running   0          59s
todoapp-v1-6d4b78b6c9-gfkxd       2/2     Running   0          60s
todoapp-v1-6d4b78b6c9-jc962       2/2     Running   0          60s
todoapp-v2-7dd69b445f-2rznm       2/2     Running   0          60s
todoapp-v2-7dd69b445f-gr4vn       2/2     Running   0          60s
  Make sure that all the pods have changed their status to Running and READY=2/2.

- Check the status of the services:

  kubectl get services -n todoapp
Result:
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
recommender     ClusterIP   10.96.255.93    <none>        80/TCP     80s
todoapp         ClusterIP   10.96.232.143   <none>        80/TCP     80s
todoapp-redis   ClusterIP   10.96.174.100   <none>        6379/TCP   80s
- Make sure that the web app is up and running:

  kubectl exec "$(kubectl get pod -l app=recommender -n todoapp -o jsonpath='{.items[0].metadata.name}')" -n todoapp \
    -- curl -sS todoapp:80 | grep -o "<title>.*</title>"
Result:
<title>Todoapp</title>
- Publish the app:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/todoapp-gateway.yaml -n todoapp
Result:
gateway.networking.istio.io/todoapp-gateway created
virtualservice.networking.istio.io/todoapp-vs created
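  For reference, an Istio Gateway and VirtualService pair that exposes an app through the ingress gateway typically looks like the sketch below. This is an illustration of the standard resources, not the exact contents of todoapp-gateway.yaml; details such as hosts and ports are assumptions.

  apiVersion: networking.istio.io/v1beta1
  kind: Gateway
  metadata:
    name: todoapp-gateway
  spec:
    # Bind to the stock Istio ingress gateway deployment.
    selector:
      istio: ingressgateway
    servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
          - "*"
  ---
  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: todoapp-vs
  spec:
    hosts:
      - "*"
    gateways:
      - todoapp-gateway
    http:
      - route:
          - destination:
              host: todoapp   # the todoapp ClusterIP service
              port:
                number: 80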
- Get the Ingress gateway IP address to access the app:

  kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
- To run the web app, paste the obtained IP address into the browser address bar.

  Note

  Each time the page is refreshed, its content will be updated. Depending on the version of the pod processing your request, you will see:

  - Pod v1: a section with a to-do list.
  - Pod v2: a section with a to-do list and a section with recommendations.
View a service network diagram on the Kiali dashboard
- Make sure that Kiali is installed and available in the Managed Service for Kubernetes cluster:

  kubectl get service kiali -n istio-system
Result:
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
kiali   ClusterIP   10.96.207.108   <none>        20001/TCP,9090/TCP   15d
- Configure kiali service port forwarding to your local computer:

  kubectl port-forward service/kiali 8080:20001 -n istio-system
- To open the Kiali dashboard, paste http://localhost:8080 into the browser address bar.

  The Kiali dashboard provides various information, such as the service network diagram, Istio configuration, service configuration and status, as well as pod metrics, traces, and logs.

- To generate traffic to your test app, play around with it. For example, add a to-do list.

- Open the Kiali dashboard, go to Graph, and select the todoapp namespace. You will see a diagram with the test app components running in the Istio service network.
Tip
Use the Kiali dashboard to track changes in the next steps of this tutorial. For example, you can see how the display of services or traffic distribution changes.
Route service requests
The todoapp service pods of versions v1 and v2 are deployed concurrently. When the test app page is refreshed, the recommendations section is sometimes missing because only the v2 todoapp pods make requests to the recommender service and show the results.

Use routing to direct users to a specific service version:
- Route all requests to v1:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-route-v1.yaml -n todoapp
Result:
destinationrule.networking.istio.io/todoapp-dr created
virtualservice.networking.istio.io/todoapp-vs configured
- Refresh the test app page several times. Now all requests are handled by the v1 pods. The page only shows the to-do list.

- Route all requests to v2:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-route-v2.yaml -n todoapp
Result:
destinationrule.networking.istio.io/todoapp-dr unchanged
virtualservice.networking.istio.io/todoapp-vs configured
- Refresh the test app page several times. Now all requests are handled by the v2 pods. The page shows the to-do list and recommendations sections.
To cancel routing, run the command below:
kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/todoapp-gateway.yaml -n todoapp
Result:
gateway.networking.istio.io/todoapp-gateway unchanged
virtualservice.networking.istio.io/todoapp-vs configured
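Under the hood, version routing combines a DestinationRule that defines named subsets with a VirtualService that pins traffic to one of them. A minimal sketch of what virtualservice-route-v1.yaml might contain (the resource names match the command output above; the subset and pod label names are assumptions based on the app's v1/v2 versions):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: todoapp-dr
spec:
  host: todoapp
  subsets:
    # Each subset maps a name to a pod label assumed to be carried
    # by the corresponding v1/v2 deployment.
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: todoapp-vs
spec:
  hosts:
    - "*"
  gateways:
    - todoapp-gateway
  http:
    - route:
        - destination:
            host: todoapp
            subset: v1   # send all traffic to the v1 pods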
Simulate a service failure
With Istio, you can test your app's reliability by simulating service failures.
When the recommender service is accessed, there is a 3-second timeout. If the service does not respond within this time, the recommendations section is not displayed.

You can simulate a failure by injecting a delay longer than 3 seconds into the VirtualService resource configuration. For example, this code block adds a 5-second delay to 50% of requests:
fault:
  delay:
    percentage:
      value: 50.0
    fixedDelay: 5s
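In context, this stanza sits inside an HTTP route of a VirtualService for the recommender service. A sketch of what the whole resource might look like (the resource name is taken from the command output below; the rest is an assumption, and the recommender DestinationRule the file also creates is omitted):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recommender-vs
spec:
  hosts:
    - recommender
  http:
    - fault:
        delay:
          # Delay 50% of requests by 5 seconds, which exceeds the
          # app's 3-second timeout for calls to recommender.
          percentage:
            value: 50.0
          fixedDelay: 5s
      route:
        - destination:
            host: recommender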
To simulate a failure of your test app:
- Apply the VirtualService configuration:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-delay.yaml -n todoapp
Result:
destinationrule.networking.istio.io/recommender-dr created
virtualservice.networking.istio.io/recommender-vs created
- Refresh the test app page several times. When there is a response delay, the recommendations section is not displayed although the request is handled by a v2 pod. The app handles a failure of the recommender service correctly.
To roll back the VirtualService configuration, run this command:
kubectl delete -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-delay.yaml -n todoapp
Result:
destinationrule.networking.istio.io "recommender-dr" deleted
virtualservice.networking.istio.io "recommender-vs" deleted
Redistribute traffic
When upgrading a microservice to a new version, you can redistribute traffic between the versions without changing the number of application pods. Traffic routes are managed with the weight parameter of the VirtualService resource.
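For example, an even split is expressed with two weighted destinations whose weights sum to 100. A sketch of the http section such a VirtualService might contain (the subset names follow the routing example above and are assumptions with respect to the actual virtualservice-weight-v2-50.yaml file):

http:
  - route:
      - destination:
          host: todoapp
          subset: v1
        weight: 50   # half of the requests go to the v1 pods
      - destination:
          host: todoapp
          subset: v2
        weight: 50   # the other half go to the v2 pods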
To redistribute traffic in your test app:
- Set the weight for v1 and v2 to 50%:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-weight-v2-50.yaml -n todoapp
Result:
destinationrule.networking.istio.io/todoapp-dr unchanged
virtualservice.networking.istio.io/todoapp-vs configured
- Refresh the test app page several times. Requests are handled by the v1 and v2 pods in roughly equal proportions.

- Increase the weight for v2 to 100%:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/virtualservice-weight-v2-100.yaml -n todoapp
Result:
destinationrule.networking.istio.io/todoapp-dr unchanged
virtualservice.networking.istio.io/todoapp-vs configured
- Refresh the test app page several times. Now all requests are handled by the v2 pods.
Set authentication mode using mutual TLS
By default, applications with an Istio sidecar proxy exchange traffic encrypted with mutual TLS.
You can enforce a strict authentication policy that rejects unencrypted traffic from applications without an Istio sidecar proxy.
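A strict policy is expressed with the standard PeerAuthentication resource. The peerauthentication.yaml file applied below presumably contains something close to this sketch (the resource name matches the command output below):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    # STRICT rejects plain-text connections, so only workloads with
    # an Istio sidecar (and therefore mTLS) can reach the services.
    mode: STRICT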
To test how your test app runs in different modes:
- Create an authentication policy:

  kubectl apply -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/peerauthentication.yaml -n todoapp
Result:
peerauthentication.security.istio.io/default created
- Try creating a pod in the default namespace to test a connection to the todoapp service:

  kubectl run -i -n default \
    --rm \
    --restart=Never curl \
    --image=curlimages/curl \
    --command \
    -- sh -c 'curl -k http://todoapp.todoapp.svc.cluster.local'
Result:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (56) Recv failure: Connection reset by peer
pod "curl" deleted
pod default/curl terminated (Error)

  The connection fails because the pod has no Istio sidecar, so the strict policy rejects its unencrypted traffic.
- Delete the authentication policy:

  kubectl delete -f https://raw.githubusercontent.com/yandex-cloud-examples/yc-mk8s-todo-app/main/kube/peerauthentication.yaml -n todoapp
Result:
peerauthentication.security.istio.io "default" deleted
- Try creating a pod once again:

  kubectl run -i -n default \
    --rm \
    --restart=Never curl \
    --image=curlimages/curl \
    --command \
    -- sh -c 'curl -k http://todoapp.todoapp.svc.cluster.local'
Result:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2658  100  2658    0     0   147k      0 --:--:-- --:--:-- --:--:--  152k
<!DOCTYPE html>
<html lang="ru">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Todoapp</title>
...
View Istio metrics on the Prometheus dashboard
- Make sure that Prometheus is installed and available in the Managed Service for Kubernetes cluster:

  kubectl get service prometheus -n istio-system
Result:
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
prometheus   ClusterIP   10.96.147.249   <none>        9090/TCP   15d
- Configure prometheus service port forwarding to your local computer:

  kubectl port-forward service/prometheus 9090:9090 -n istio-system

- To open the Prometheus dashboard, paste http://localhost:9090 into the browser address bar.

- Enter the following query in the Expression field:

  istio_requests_total{destination_service="recommender.todoapp.svc.cluster.local"}
- Go to the Graph tab. It shows a chart of the Istio metrics matching your query.
View Istio metrics on the Grafana dashboard
- Make sure that Grafana is installed and available in the Managed Service for Kubernetes cluster:

  kubectl get service grafana -n istio-system
Result:
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
grafana   ClusterIP   10.96.205.86   <none>        3000/TCP   15d
- Configure grafana service port forwarding to your local computer:

  kubectl port-forward service/grafana 3000:3000 -n istio-system

- To open the Grafana dashboard, paste http://localhost:3000 into the browser address bar.

- In the list of dashboards, find and open the Istio Mesh Dashboard. It shows the metrics of requests to your test app's services.
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- In the command line, go to the directory containing the current Terraform configuration file with the infrastructure plan.

- Delete the k8s-cluster.tf configuration file.

- Make sure the Terraform configuration files are correct using this command:

  terraform validate

  If there are any errors in the configuration files, Terraform will point them out.

- Run this command to view the planned changes:

  terraform plan

  If the resource configuration descriptions are correct, the terminal will display a list of the resources to delete and their parameters. This is a test step; no resources are deleted.

- If you are happy with the planned changes, apply them:

  - Run this command:

    terraform apply

  - Confirm the deletion of resources.

  - Wait for the operation to complete.

  All the resources described in the k8s-cluster.tf configuration file will be deleted.