© 2025 Direct Cursus Technology L.L.C.

Autoscaling of subclusters

Written by
Yandex Cloud
Updated at February 18, 2025

Note

Autoscaling of subclusters is supported for Yandex Data Processing clusters version 1.4 or higher.

Yandex Data Processing supports autoscaling of data processing subclusters based on metrics collected by Yandex Monitoring:

  • If the metric value exceeds the specified threshold, new hosts are added to the subcluster. You can start using them in a YARN cluster running Apache Spark or Apache Hive as soon as the host status changes to Alive.
  • If the metric value falls below the specified threshold, the system first decommissions and then removes the redundant hosts from the subcluster.

You can read more about autoscaling in the Instance Groups documentation.
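The threshold logic described above can be sketched conceptually. The function below is purely illustrative: the name, parameters, and return values are not part of the service API.

```python
def autoscale_decision(metric_value, threshold, current_hosts, min_hosts, max_hosts):
    """Conceptual scale decision for a subcluster: grow above the
    threshold, shrink below it, and stay within the group size limits."""
    if metric_value > threshold and current_hosts < max_hosts:
        # A new host joins the YARN cluster once its status changes to Alive.
        return "add_host"
    if metric_value < threshold and current_hosts > min_hosts:
        # A redundant host is first decommissioned, then removed.
        return "decommission_host"
    return "no_change"
```

Note that the real service also applies the stabilization period and warmup interval described below before acting on a metric value.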

You can choose the scaling method that best suits your needs:

  • Default scaling: Scaling based on the yarn.cluster.containersPending metric.

    This is an internal YARN metric that shows the number of resource allocation units requested by jobs pending in the queue. It suits clusters that run many relatively small jobs managed by Apache Hadoop® YARN. This scaling method requires no additional configuration.

  • CPU utilization target, %: Scaling based on the vCPU usage metric. You can learn more about this type of scaling in the Instance Groups documentation.

To set up autoscaling of your cluster based on other metrics and formulas, contact support.
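For the CPU utilization target method, the group is sized so that average vCPU usage approaches the target. The sketch below shows the standard target-tracking idea; it is a conceptual approximation, not the exact formula used by Instance Groups.

```python
import math

def desired_hosts(current_hosts, avg_cpu_percent, target_percent, min_hosts, max_hosts):
    """Conceptual target tracking: scale the group so that average CPU
    utilization moves toward the configured target, clamped to the
    minimum and maximum group sizes."""
    desired = math.ceil(current_hosts * avg_cpu_percent / target_percent)
    return max(min_hosts, min(desired, max_hosts))
```

For example, a group of 4 hosts averaging 90% CPU with a 60% target would grow toward 6 hosts, while the same group averaging 30% would shrink toward the minimum size.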

You can set the following autoscaling parameters:

  • Initial (minimum) size of the group.
  • Decommissioning timeout in seconds. The maximum value is 86400 seconds (24 hours). The default value is 120 seconds.
  • Type of VM instances: standard or preemptible.
  • Maximum group size.
  • Time period for calculating the average load on each VM instance in the group.
  • Instance warmup period: Interval after an instance starts during which its metrics are not used; average metric values for the group are used instead.
  • Stabilization period (minutes or seconds): Interval during which the number of instances in the group cannot be decreased.
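The parameters above roughly correspond to the `autoscaling_config` block of a `subcluster_spec` in the Terraform `yandex_dataproc_cluster` resource. The fragment below is a hedged sketch with illustrative values; check the Terraform reference for the exact field names and units.

```hcl
# Illustrative fragment: autoscaling settings for a compute subcluster.
subcluster_spec {
  name        = "compute-autoscaling"
  role        = "COMPUTENODE"
  hosts_count = 2                    # initial (minimum) group size

  autoscaling_config {
    max_hosts_count        = 10      # maximum group size
    preemptible            = false   # standard (non-preemptible) VMs
    measurement_duration   = 60      # averaging period for the load metric, seconds
    warmup_duration        = 60      # instance warmup period, seconds
    stabilization_duration = 120     # no scale-down during this interval, seconds
    cpu_utilization_target = 75      # omit to scale on yarn.cluster.containersPending
    decommission_timeout   = 120     # seconds; maximum 86400 (24 hours)
  }
}
```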
