
Deploying a web app in a fault-tolerant configuration in Yandex Cloud

Written by Yandex Cloud
Updated on August 14, 2025
  • System architecture
    • Network
    • PostgreSQL
    • Kubernetes
    • L7 load balancer
  • Scaling features and modifications
  • Test app
  • Expected Yandex Cloud resource consumption
  • Get your cloud ready
    • Required paid resources
  • Create your infrastructure
  • Test your web application
  • How to delete the resources you created

This guide gives an example of how to deploy a web app in a fault-tolerant configuration in the Yandex Cloud infrastructure. The infrastructure behind our web app is built around a group of scalable managed services of the Yandex Cloud ecosystem: Yandex Managed Service for Kubernetes, Yandex Managed Service for PostgreSQL, and Yandex Application Load Balancer.

In addition to the core components listed above, there are auxiliary services used to launch and test the app. These are Yandex Container Registry, Yandex Certificate Manager, Yandex Cloud DNS, Yandex Compute Cloud, Yandex Identity and Access Management, Yandex Key Management Service, and Yandex Virtual Private Cloud.

System architecture

The solution's infrastructure is engineered based on the fault-tolerant infrastructure recommendations and PostgreSQL cluster topology planning recommendations.

Network

The infrastructure comprises a single Virtual Private Cloud network named net-todo-app.

Subnets

The net-todo-app network consists of six subnets:

  • net-todo-app-k8s1, net-todo-app-k8s2, and net-todo-app-k8s3 for Kubernetes cluster nodes, one in each availability zone.
  • net-todo-app-db1, net-todo-app-db2, and net-todo-app-db3 for PostgreSQL cluster nodes, one in each availability zone.
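
The repository defines this layout in Terraform. Below is a minimal sketch under the tutorial's naming, assuming illustrative CIDR blocks and zones (the actual values live in the repository's configuration):

  resource "yandex_vpc_network" "net_todo_app" {
    name = "net-todo-app"
  }

  # One Kubernetes subnet per availability zone; repeat for the other two zones
  # and for the three net-todo-app-db* subnets.
  resource "yandex_vpc_subnet" "net_todo_app_k8s1" {
    name           = "net-todo-app-k8s1"
    zone           = "ru-central1-a"          # illustrative zone
    network_id     = yandex_vpc_network.net_todo_app.id
    v4_cidr_blocks = ["10.1.1.0/24"]          # illustrative CIDR
  }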

Security groups

Network access to infrastructure resources is controlled by the following security groups:

  • db-todo-app allows incoming traffic to TCP port 6432 on the PostgreSQL cluster nodes from the Kubernetes cluster nodes only.

  • k8s-cluster-todo-app allows:

    • Incoming traffic to the Kubernetes cluster's TCP ports 443 and 6443 from cluster nodes.

    • Incoming traffic to the cluster's TCP ports 443 and 6443 from the internet, which is acceptable in a test environment.

      Warning

      In a production environment, limit access to your Kubernetes cluster to internal or individual public IP addresses.

  • k8s-nodes-todo-app allows:

    • Incoming traffic from other cluster nodes as well as from the cluster and service CIDRs.

    • Incoming traffic from the Kubernetes cluster to TCP ports 10250 and 10256 for kubectl exec/kubectl logs and kube-proxy health checks.

    • Incoming traffic from Application Load Balancer resource units to TCP port 10501 for cluster node availability checks.

    • Incoming traffic from any internet addresses to ports in the 30000 to 32767 range to publish the services running in the cluster.

      Warning

      If you do not need to access Kubernetes services directly, bypassing the L7 load balancer, restrict access to this port range to the Application Load Balancer resource units only.

    • Incoming traffic from the Yandex Network Load Balancer health check system. If you do not plan to use Network Load Balancer, delete this rule from the security group.

    • Incoming traffic from the Kubernetes cluster to TCP port 4443 for the metric collector.

  • k8s-alb-todo-app allows:

    • Incoming traffic from the internet to TCP ports 80 and 443 enabling user access to the web app.
    • Incoming traffic on the entire TCP port range for load balancer health checks.

In addition, all security groups allow incoming ICMP traffic.
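
As an illustration, here is a hedged Terraform sketch of the db-todo-app group described above. It uses the yandex_vpc_security_group resource; the references to the network and to the k8s-nodes-todo-app group are assumed to be defined elsewhere in the same configuration, and the ICMP source range is an assumption:

  resource "yandex_vpc_security_group" "db_todo_app" {
    name       = "db-todo-app"
    network_id = yandex_vpc_network.net_todo_app.id

    # Allow PostgreSQL traffic (TCP port 6432) only from the Kubernetes node security group
    ingress {
      description       = "PostgreSQL from Kubernetes nodes"
      protocol          = "TCP"
      port              = 6432
      security_group_id = yandex_vpc_security_group.k8s_nodes_todo_app.id
    }

    # Allow incoming ICMP traffic (source range is an assumption)
    ingress {
      description    = "ICMP"
      protocol       = "ICMP"
      v4_cidr_blocks = ["0.0.0.0/0"]
    }
  }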

Resource addresses

Your new infrastructure uses two public IP addresses:

  • IP address of the primary-address-todo-app L7 load balancer.
  • IP address of the Kubernetes cluster (not counted towards the overall public IP address quota).

The Kubernetes and PostgreSQL cluster nodes use internal addresses.

PostgreSQL

The web app database is hosted in a Managed Service for PostgreSQL cluster named main-todo-app.

For maximum fault tolerance, the cluster places its hosts in three availability zones, as per the PostgreSQL cluster topology planning guidelines.

The PostgreSQL cluster nodes have no public IP addresses: the database is accessible only via internal IP addresses, either from the Kubernetes cluster nodes or through the Yandex WebSQL interface.

The cluster has a database named todo and a user named todo, both used by the web app.
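
A hedged Terraform sketch of such a cluster, with one host per availability zone and the todo database and user, might look as follows. The host class, PostgreSQL version, disk size, zones, and the subnet, security group, and password references are illustrative assumptions; the repository's configuration defines the actual values:

  resource "yandex_mdb_postgresql_cluster" "main_todo_app" {
    name               = "main-todo-app"
    environment        = "PRODUCTION"
    network_id         = yandex_vpc_network.net_todo_app.id
    security_group_ids = [yandex_vpc_security_group.db_todo_app.id]

    config {
      version = "16"                        # illustrative version
      resources {
        resource_preset_id = "s2.micro"     # illustrative host class
        disk_type_id       = "network-ssd"  # illustrative disk type
        disk_size          = 33             # GB per host, illustrative
      }
    }

    # One host per availability zone for fault tolerance
    host {
      zone      = "ru-central1-a"
      subnet_id = yandex_vpc_subnet.net_todo_app_db1.id
    }
    host {
      zone      = "ru-central1-b"
      subnet_id = yandex_vpc_subnet.net_todo_app_db2.id
    }
    host {
      zone      = "ru-central1-d"
      subnet_id = yandex_vpc_subnet.net_todo_app_db3.id
    }
  }

  resource "yandex_mdb_postgresql_user" "todo" {
    cluster_id = yandex_mdb_postgresql_cluster.main_todo_app.id
    name       = "todo"
    password   = var.db_password            # assumed variable
  }

  resource "yandex_mdb_postgresql_database" "todo" {
    cluster_id = yandex_mdb_postgresql_cluster.main_todo_app.id
    name       = "todo"
    owner      = yandex_mdb_postgresql_user.todo.name
  }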

Kubernetes

The app components run in a Managed Service for Kubernetes cluster named main-todo-app.

In line with the fault-tolerant infrastructure recommendations, the cluster has the following configuration:

  • High-availability Managed Service for Kubernetes cluster with masters in three availability zones.
  • The cluster uses NodeLocal DNS Cache for DNS request caching.

The Kubernetes cluster employs an auxiliary Application Load Balancer ingress controller service to manage the L7 load balancer configuration with the help of Ingress objects.

Public API access is enabled so that the auxiliary Kubernetes services can be managed with Terraform manifests. Access to the API is restricted by security groups.

Kubernetes cluster nodes need access to the internet, including to download Docker images from Yandex Container Registry. The cluster nodes access the internet through a NAT gateway named net-todo-app-egress-nat and a route table named net-todo-app-default-route-table associated with the Kubernetes cluster subnets. Their internet access is not restricted.
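
A minimal Terraform sketch of this egress setup, using the names from the tutorial (attaching the route table to the Kubernetes subnets via their route_table_id argument is assumed to happen elsewhere in the configuration):

  resource "yandex_vpc_gateway" "egress_nat" {
    name = "net-todo-app-egress-nat"
    shared_egress_gateway {}
  }

  resource "yandex_vpc_route_table" "default" {
    name       = "net-todo-app-default-route-table"
    network_id = yandex_vpc_network.net_todo_app.id

    # Route all outbound traffic through the NAT gateway
    static_route {
      destination_prefix = "0.0.0.0/0"
      gateway_id         = yandex_vpc_gateway.egress_nat.id
    }
  }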

L7 load balancer

The infrastructure uses the managed Application Load Balancer solution for web app load balancing. The L7 load balancer is created dynamically from Ingress objects in the Managed Service for Kubernetes cluster: the Application Load Balancer ingress controller monitors changes to Ingress objects and updates the corresponding load balancer settings, including creating and deleting the load balancer. The Ingress resource is part of the app's Helm chart.

Application Load Balancer is integrated with Yandex Certificate Manager, which automatically obtains Let's Encrypt certificates for you.
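
A hedged sketch of requesting such a certificate in Terraform (the resource name is illustrative; the domain is taken from the target_host variable described later in this tutorial, and the DNS_CNAME challenge assumes the domain is delegated to Cloud DNS):

  resource "yandex_cm_certificate" "todo_app" {
    name    = "todo-app"
    domains = [var.target_host]   # your domain, delegated to Yandex Cloud DNS

    managed {
      challenge_type = "DNS_CNAME"
    }
  }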

Scaling features and modifications

All the infrastructure components are scalable, both horizontally and vertically:

  • The Managed Service for Kubernetes cluster can be autoscaled by adding new nodes (see the node group sketch after this list).
  • The Managed Service for PostgreSQL cluster can autoscale its storage based on utilization, but adding additional cluster nodes is a manual operation.
  • Application Load Balancer supports automatic and manual scaling depending on load.
  • You can quickly expand the infrastructure using other managed services such as Yandex Managed Service for Valkey™, Yandex Managed Service for Apache Kafka®, Yandex Object Storage, etc.
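
For the first point, here is a hedged Terraform sketch of an autoscaled node group. The machine sizes, disk type and size, platform, zone, and scaling limits are illustrative assumptions loosely based on the quota figures below; the cluster, subnet, and security group references are assumed to be defined elsewhere:

  resource "yandex_kubernetes_node_group" "todo_app_a" {
    cluster_id = yandex_kubernetes_cluster.main_todo_app.id
    name       = "todo-app-a"                # one group per availability zone

    # Let the cluster autoscaler add and remove nodes within these limits
    scale_policy {
      auto_scale {
        min     = 1
        max     = 3
        initial = 1
      }
    }

    allocation_policy {
      location {
        zone = "ru-central1-a"
      }
    }

    instance_template {
      platform_id = "standard-v3"            # illustrative platform

      resources {
        cores  = 4                           # illustrative vCPU count
        memory = 8                           # illustrative RAM, GB
      }

      boot_disk {
        type = "network-ssd-nonreplicated"   # illustrative disk type
        size = 93
      }

      network_interface {
        subnet_ids         = [yandex_vpc_subnet.net_todo_app_k8s1.id]
        security_group_ids = [yandex_vpc_security_group.k8s_nodes_todo_app.id]
      }
    }
  }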

In addition to the built-in features, you can connect extra components:

  • You can connect a Yandex Smart Web Security profile to protect against bots, DDoS, and web attacks; it also provides WAF and ARL capabilities.
  • If you need to restrict access to the internet or use a fixed IP address for access, you can easily modify the infrastructure and set up an internet connection using a NAT instance or another networking product from Yandex Cloud Marketplace.

Test app

The todo test web app deployed in this tutorial is adapted to operate in a cloud infrastructure. For a runtime environment, it uses a managed Kubernetes cluster. The app consists of two components: a frontend and a backend.

Both components are built and packaged into container images based on the distroless gcr.io/distroless/base-debian12 image for maximum compactness and security.

The backend is written in Go and requires a PostgreSQL DBMS. In accordance with the fault-tolerant infrastructure recommendations, the app implements a health check feature to monitor the availability of connected resources (in particular, the PostgreSQL cluster).

The frontend is written in React. It is served by an Angie web server that is statically built from source code to keep it small and secure.

The Docker images and Helm chart you need for the installation reside in a Container Registry.

Expected Yandex Cloud resource consumption

Quota usage by service:

  • Application Load Balancer
    • L7 load balancers: 1
    • HTTP routers: 2
    • Backend groups: 2
    • Target groups: 2
  • Certificate Manager
    • TLS certificates: 1
  • Cloud DNS
    • DNS zones: 1
    • Resource records: 4
  • Compute Cloud
    • Instance groups: 3
    • Virtual machines: 3
    • Disks: 3
    • Total number of VM vCPUs: 12
    • Total VM RAM: 24 GB
    • Total size of non-replicated SSDs: 279 GB
  • Identity and Access Management
    • Service accounts: 3
    • Authorized keys: 1
  • Key Management Service
    • Symmetric keys: 1
  • Managed Service for PostgreSQL
    • PostgreSQL clusters: 1
    • Total number of database host vCPUs: 6
    • Total database host RAM: 24 GB
    • Total database host storage size: 99 GB
  • Managed Service for Kubernetes
    • Kubernetes clusters: 1
    • Node groups: 3
    • Nodes: 3
    • Total number of cluster node vCPUs: 12
    • Total cluster node RAM: 24 GB
    • Total cluster node disk size: 279 GB
    • Total number of vCPUs of all cluster masters: 6
    • Total RAM of all cluster masters: 24 GB
  • Virtual Private Cloud
    • Cloud networks: 1
    • Subnets: 8
    • Public IP addresses: 1
    • Static public IP addresses: 1
    • Security groups: 4
    • Gateways: 1
    • NAT gateways: 1
    • Route tables: 1
    • Static routes: 1

Before you start creating your infrastructure, make sure your cloud has enough unused quotas for resources.

You create the infrastructure using the Yandex Cloud Terraform provider. For the source code discussed in this tutorial, visit GitHub.

To deploy your web app in a fault-tolerant Yandex Cloud environment:

  1. Get your cloud ready.
  2. Create your infrastructure.
  3. Test your web application.

If you no longer need the resources you created, delete them.

Get your cloud ready

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or create a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page to create or select a folder for your infrastructure.

Learn more about clouds and folders here.

Required paid resources

The cost of maintaining the web app infrastructure includes:

  • Fee for VM computing resources and disks the Kubernetes cluster will be deployed on (see Compute Cloud pricing).
  • Fee for using the L7 load balancer’s computing resources (see Yandex Application Load Balancer pricing).
  • Fee for using the Managed Service for Kubernetes cluster master and outbound traffic (see Yandex Managed Service for Kubernetes pricing).
  • Fee for using public IP addresses and NAT gateway (see Yandex Virtual Private Cloud pricing).
  • Fee for a continuously running Managed Service for PostgreSQL cluster (see Managed Service for PostgreSQL pricing).
  • Fee for using a public DNS zone and public DNS requests (see Yandex Cloud DNS pricing).
  • Fee for logging and log storage in a log group (see Yandex Cloud Logging pricing).

Create your infrastructure

With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the relevant documentation on the Terraform website or its mirror.

To create an infrastructure using Terraform:

  1. Install Terraform, get the credentials, and specify the source for installing Yandex Cloud (see Configure your provider, step 1).

  2. Set up your infrastructure description files:

    1. Clone the repository with configuration files:

      git clone https://github.com/yandex-cloud-examples/yc-mk8s-ha-todo-application.git
      
    2. Navigate to the repository directory:

      cd yc-mk8s-ha-todo-application
      
    3. In the terraform.tfvars file, set the following user-defined properties:

      • folder_id: Folder ID.
      • target_host: Your domain's name. The domain must be delegated to Yandex Cloud DNS.
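
      For example, terraform.tfvars might look like this (both values below are placeholders; replace them with your own folder ID and domain):

      folder_id   = "b1gxxxxxxxxxxxxxxxxx"
      target_host = "todo.example.com"
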
  3. Create the resources:

    1. In the terminal, go to the directory where you edited the configuration file.

    2. Make sure the configuration file is correct using this command:

      terraform validate
      

      If the configuration is correct, you will get this message:

      Success! The configuration is valid.
      
    3. Run this command:

      terraform plan
      

      You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.

    4. Apply the changes:

      terraform apply
      
    5. Type yes and press Enter to confirm the changes.

The required infrastructure will be deployed in the selected folder. The deployment may take up to 40 minutes.

Note

Once your infrastructure has been successfully created, wait for 5-7 minutes before you test the web app. This time is required for the ingress controller to create and start the L7 load balancer.

Test your web application

In your web browser's address bar, enter the domain name you specified in terraform.tfvars.

This will open the Todo app, the web application deployed in your fault-tolerant Yandex Cloud infrastructure.

How to delete the resources you created

To stop paying for the resources and delete the infrastructure you created, do the following:

  1. In the command line, navigate to the directory with the Terraform configuration file.

  2. Run this command:

    terraform destroy
    
  3. Type yes and press Enter.

Wait until the deletion process is over. You can verify in the management console that all your resources have been deleted.
