
In this article:

  • Limitations
  • Examples
  • See also

Reserved instance pools for Yandex Managed Service for Kubernetes node groups

Written by
Yandex Cloud
Updated at April 28, 2026

Warning

Reserved instance pools are billable: you pay for all reserved computing resources, including any unused capacity, covering VMs, GPU clusters, and software-accelerated networks, according to the Yandex Compute Cloud pricing policy. For more information, see Using reserved instance pools.

The reserved instance pool feature is at the Preview stage.

A reserved instance pool is a set of computing resources that a user reserves in a given availability zone. The reservation guarantees that these resources will be available for creating VMs of a particular configuration in that zone.

For more information, see Reserved instance pools in Compute Cloud.

In Managed Service for Kubernetes, you can use reserved instance pools for fixed-size node groups to guarantee resource availability for cluster nodes. This feature is available via the CLI, Terraform, and the API.

Tip

Reserved instance pools are created in a specific availability zone. To automate the distribution of multi-zone group nodes across reserved instance pools of a specific availability zone, use the node template variables.
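The idea behind that automation can be sketched as a zone-to-pool lookup kept alongside your provisioning scripts; the pool IDs below are placeholders, and the actual mechanism uses node template variables, described on the linked page:

```python
# Map each availability zone to the ID of the reserved instance pool
# created in that zone (placeholder IDs, shown for illustration only).
ZONE_POOLS = {
    "ru-central1-a": "pool-id-a",
    "ru-central1-b": "pool-id-b",
    "ru-central1-d": "pool-id-d",
}

def pool_for_zone(zone: str) -> str:
    """Pick the reserved instance pool that belongs to a given zone."""
    try:
        return ZONE_POOLS[zone]
    except KeyError:
        raise ValueError(f"no reserved instance pool registered for zone {zone}")

print(pool_for_zone("ru-central1-b"))  # pool-id-b
```

With node template variables, this per-zone substitution happens inside the node template itself, so nodes of a multi-zone group automatically land in the pool of their own zone.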

Limitations

Reserved instance pools are not supported for node groups with the following properties:

  • Autoscaling
  • vCPU performance levels below 100%
  • Preemptible VMs
  • VM placement groups

Note

Make sure the following properties are identical in the node group and reserved instance pool configurations:

  • Platform
  • Number of vCPUs
  • Amount of RAM
  • Availability zone

The number of group nodes in each availability zone must not exceed the size of the reserved instance pools in those zones.
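Taken together, the limitations and matching rules above amount to a pre-flight compatibility check. A minimal sketch in Python (field names are hypothetical, not the actual Managed Service for Kubernetes API schema):

```python
# Sketch of the eligibility and matching rules above.
# Field names are illustrative, not the real API schema.

def pool_compatible(node_group: dict, pool: dict) -> list[str]:
    """Return a list of reasons the node group cannot use the pool."""
    problems = []

    # Reserved instance pools are not supported for these node group properties.
    if node_group.get("autoscaling"):
        problems.append("autoscaling is enabled")
    if node_group.get("core_fraction", 100) < 100:
        problems.append("vCPU performance level is below 100%")
    if node_group.get("preemptible"):
        problems.append("preemptible VMs are used")
    if node_group.get("placement_group_id"):
        problems.append("a VM placement group is set")

    # These properties must be identical in the node group and the pool.
    for key in ("platform_id", "cores", "memory", "zone"):
        if node_group.get(key) != pool.get(key):
            problems.append(f"{key} differs between node group and pool")

    # The node count in the zone must not exceed the pool size.
    if node_group.get("fixed_size", 0) > pool.get("size", 0):
        problems.append("node count exceeds the pool size")

    return problems


group = {"platform_id": "standard-v4a", "cores": 4, "memory": 8,
         "zone": "ru-central1-a", "fixed_size": 2}
pool = {"platform_id": "standard-v4a", "cores": 4, "memory": 8,
        "zone": "ru-central1-a", "size": 2}
print(pool_compatible(group, pool))  # [] means compatible
```

In practice, the service validates these constraints for you when the node group is created; the sketch only restates the rules in one place.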

Examples

In this example, we will create a node group in one availability zone with two nodes from that zone's reserved instance pool.

CLI
yc managed-kubernetes node-group create \
  --name k8s-reserved-ng \
  --cluster-id <cluster_ID> \
  --platform-id standard-v4a \
  --cores 4 \
  --memory 8 \
  --disk-size 64 \
  --disk-type network-ssd \
  --fixed-size 2 \
  --location zone=ru-central1-a,subnet-id=<subnet_ID> \
  --network-interface security-group-ids=[<security_group_IDs>] \
  --reserved-instance-pool-id <reserved_instance_pool_ID>

Where:

  • --cluster-id: Cluster ID.
  • --location: Availability zone and subnet ID.
  • --network-interface security-group-ids: Security group IDs.
  • --reserved-instance-pool-id: Reserved instance pool ID.

Terraform
resource "yandex_kubernetes_node_group" "k8s-reserved-ng" {
  cluster_id = "<cluster_ID>"
  name       = "k8s-reserved-ng"

  instance_template {
    platform_id = "standard-v4a"
    reserved_instance_pool_id = "<reserved_instance_pool_ID>"

    resources {
      cores  = 4
      memory = 8
    }

    boot_disk {
      size = 64
      type = "network-ssd"
    }

    network_interface {
      subnet_ids = ["<subnet_ID>"]
      security_group_ids = ["<security_group_IDs>"]
      nat        = true
    }
  }

  scale_policy {
    fixed_scale {
      size = 2
    }
  }

  allocation_policy {
    location {
      zone = "ru-central1-a"
    }
  }
}

Where:

  • cluster_id: Cluster ID.
  • subnet_ids: Subnet ID.
  • security_group_ids: Security group IDs.
  • reserved_instance_pool_id: Reserved instance pool ID.

Examples for a multi-zone group with nodes from a reserved instance pool using variables are provided on the Node template variables page.

See also

  • Reserved instance pools in Compute Cloud
  • Working with reserved instance pools
  • Variables in a Yandex Managed Service for Kubernetes node template
  • Creating a group with nodes from a Yandex Compute Cloud reserved instance pool

© 2026 Direct Cursus Technology L.L.C.