Autoscaling of subclusters
Note
Autoscaling of subclusters is supported for Yandex Data Processing clusters version 1.4 and higher.
Yandex Data Processing supports autoscaling of data processing subclusters based on metrics received from Yandex Monitoring:
- If the metric value exceeds the specified threshold, new hosts are added to the subcluster. You can start using them in a YARN cluster running Apache Spark or Apache Hive as soon as the host status changes to Alive.
- If the metric value falls below the specified threshold, the system first decommissions and then removes the redundant hosts from the subcluster.
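The threshold rule above can be sketched as a simple decision function. This is an illustrative model only, not the actual Yandex Data Processing autoscaler; all names and thresholds are hypothetical.

```python
def scaling_decision(metric_value: float,
                     upper_threshold: float,
                     lower_threshold: float) -> str:
    """Return the action a threshold-based autoscaler would take
    for one metric sample (illustrative sketch)."""
    if metric_value > upper_threshold:
        # Metric exceeds the threshold: add hosts; they become usable
        # in YARN once their status changes to Alive.
        return "scale_up"
    if metric_value < lower_threshold:
        # Metric fell below the threshold: decommission, then remove
        # the redundant hosts.
        return "scale_down"
    return "no_change"
```

In practice the decision is also constrained by the group size limits and the stabilization period described below.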
You can read more about autoscaling in the Instance Groups documentation.
You can choose the scaling method that best suits your needs:
- Default scaling: Scaling based on the `yarn.cluster.containersPending` metric. This is an internal YARN metric that shows the number of resource allocation units that pending jobs in the queue are waiting to be assigned. It is suitable for clusters that run many relatively small jobs managed by Apache Hadoop® YARN. This scaling method requires no additional configuration.
- CPU utilization target, %: Scaling based on the vCPU usage metric. To learn more about this type of scaling, see the Instance Groups documentation.
To set up autoscaling of your cluster based on other metrics and formulas, contact support.
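For the CPU utilization target method, target-tracking autoscalers commonly size the group so that the average load approaches the target. The formula below is a generic illustration of that approach (as used, for example, by the Kubernetes Horizontal Pod Autoscaler), not necessarily the exact Instance Groups formula.

```python
import math

def desired_size(current_hosts: int, avg_cpu_percent: float,
                 target_percent: float,
                 min_hosts: int, max_hosts: int) -> int:
    """Generic target-tracking sketch: choose a group size so that the
    average CPU utilization moves toward the target, clamped to the
    configured minimum and maximum group size."""
    raw = current_hosts * avg_cpu_percent / target_percent
    return max(min_hosts, min(max_hosts, math.ceil(raw)))
```

For example, a 4-host subcluster averaging 90% CPU against a 60% target would grow to 6 hosts, subject to the group size limits.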
You can set the following autoscaling parameters:
- Initial (minimum) size of the group.
- Decommissioning timeout in seconds. The maximum value is `86400` seconds (24 hours); the default is `120` seconds.
- Type of VM instances: standard or preemptible.
- Maximum group size.
- Time period for calculating the average load on each VM instance in the group.
- Instance warmup period: Interval after an instance starts during which its own metrics are not used; the group's average metric values are used instead.
- Stabilization period (minutes or seconds): Interval during which the number of instances in the group cannot be decreased.
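The warmup behavior described above can be sketched as follows: while an instance is still warming up, its own metric is replaced by the average over the instances that have already warmed up. This is an illustrative sketch with hypothetical names, not the Instance Groups implementation.

```python
from statistics import mean

def effective_metrics(samples: dict[str, float],
                      uptimes_s: dict[str, float],
                      warmup_s: float) -> dict[str, float]:
    """Replace the metric of each still-warming-up instance with the
    average metric of the already warmed-up instances (sketch)."""
    warmed = [m for host, m in samples.items() if uptimes_s[host] >= warmup_s]
    # If nothing has warmed up yet, fall back to the plain group average.
    group_avg = mean(warmed) if warmed else mean(samples.values())
    return {host: (m if uptimes_s[host] >= warmup_s else group_avg)
            for host, m in samples.items()}
```

Here a freshly started host reporting an unrepresentative metric (for example, near-zero load) does not drag the group average down and trigger a premature downscale.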