Monitoring Apache Spark™ cluster state
Data on the state of the cluster and its hosts is available in the management console. Diagnostic information about the cluster state is presented as charts.
Charts are updated every 15 seconds.
Note
Charts automatically use the most appropriate units (MB, GB, and so on).
You can configure alerts in Yandex Monitoring to receive notifications about cluster failures. Each alert has two thresholds: Warning and Alarm. If a threshold is exceeded, you will receive notifications via the configured channels.
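For a sense of how the two thresholds behave, here is a minimal sketch of mapping a metric value to an alert state. The function and the example values are purely illustrative and are not part of the Yandex Monitoring API or any SDK.

```python
# Illustrative sketch of the Warning/Alarm threshold model.
# The function and threshold values are examples, not part of any SDK.

def classify(value: float, warning: float, alarm: float) -> str:
    """Return the alert state for a metric value, assuming higher is worse."""
    if value >= alarm:
        return "ALARM"
    if value >= warning:
        return "WARNING"
    return "OK"

# Example: alerting on memory utilization as a fraction of allocatable RAM.
print(classify(0.72, warning=0.80, alarm=0.95))  # -> OK
print(classify(0.87, warning=0.80, alarm=0.95))  # -> WARNING
print(classify(0.97, warning=0.80, alarm=0.95))  # -> ALARM
```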
Monitoring the cluster state
To view detailed information on the state of an Apache Spark™ cluster:
- In the management console, navigate to the relevant folder.
- In the list of services, select Managed Service for Apache Spark™.
- Click the cluster name and select the Monitoring tab.
- To get started with Yandex Monitoring metrics, dashboards, or alerts, click Open in Monitoring in the top panel.
The page displays the following charts:
- Under Cluster Resource Usage:
  - Total Allocated Nodes: Number of used cluster hosts.
  - Total Running Containers & Total Running Jobs: Number of running jobs and containers.
    - Spark Containers: Number of running containers.
    - Spark Jobs: Number of running jobs.
  - Pending Containers: Number of containers waiting to run.
  - CPU Resources: Availability of processor cores (how these values fit together is sketched in the example after this list).
    - Allocated CPU: Number of CPUs in use.
    - Allocatable CPU: Number of CPUs available to containers.
    - Capacity CPU: Total CPUs per cluster. Some CPUs may be reserved for system needs.
  - Available CPU: Number of available CPUs in the cluster.
  - CPU Usage/Limits: CPU utilization by containers.
    - Additional containers CPU limited: CPU usage limit for system containers.
    - Additional containers CPU usage: Number of CPUs used by the system containers.
    - Spark containers CPU limited: CPU usage limit for Spark application containers.
    - Spark containers CPU usage: Number of CPUs used by Spark application containers.
  - Memory Resources: Available RAM.
    - Capacity Memory: Total host RAM. Some RAM may be reserved for system needs.
    - Allocatable Memory: Host RAM available to containers.
    - Allocated Memory: Host RAM in use.
  - Available Memory: Available cluster RAM.
  - Memory Usage/Limits: RAM utilization by containers.
    - Additional containers Memory limited: RAM limit for system containers.
    - Additional containers Memory usage: RAM used by system containers.
    - Spark containers Memory limited: RAM limit for Spark application containers.
    - Spark containers Memory usage: RAM used by Spark application containers.
- Under Driver Pool:
  - Driver Pool: Allocated Nodes: Number of Apache Spark™ driver hosts.
  - Driver Pool: Running Containers: Number of running containers in the driver pool.
  - Spark Drivers: Running Containers By Nodes: Number of running containers on Apache Spark™ driver hosts.
  - Spark Drivers: CPU Limits By Nodes: CPU limit for Apache Spark™ driver hosts.
  - Spark Drivers: Used CPU By Nodes: CPUs used by Apache Spark™ driver hosts.
  - Driver Pool: Available CPU By Nodes: CPUs available on Apache Spark™ driver hosts.
  - Spark Drivers: Memory Limits By Nodes: RAM limit for Apache Spark™ driver hosts.
  - Spark Drivers: Used Memory By Nodes: RAM used by Apache Spark™ driver hosts.
  - Driver Pool: Available Memory By Nodes: RAM available on Apache Spark™ driver hosts.
- Under Executor Pool:
  - Executor Pool: Allocated Nodes: Number of Apache Spark™ executor hosts.
  - Executor Pool: Running Containers: Number of running containers in the Apache Spark™ executor pool.
  - Spark Executors: Running Containers By Node: Number of running containers on Apache Spark™ executor hosts.
  - Spark Executors: CPU Limits By Nodes: CPU limit for Apache Spark™ executor hosts.
  - Spark Executors: Used CPU By Nodes: CPUs used by Apache Spark™ executor hosts.
  - Executor Pool: Available CPU By Nodes: CPUs available on Apache Spark™ executor hosts.
  - Spark Executors: Memory Limits By Nodes: RAM limit on Apache Spark™ executor hosts.
  - Spark Executors: Used Memory By Nodes: RAM used by Apache Spark™ executor hosts.
  - Executor Pool: Available Memory By Nodes: RAM available on Apache Spark™ executor hosts.
- Under Spark Jobs:
  - Running Executors By Jobs: Number of running Apache Spark™ executors, by job.
  - Spark Application: Running Stages: Number of stages in progress, by job.
  - Spark Application: Active Tasks: Number of tasks in progress, by job.
  - Spark CPU Limits By Jobs: CPU limit, by job.
  - Spark Used CPU By Jobs: CPUs used, by job.
  - Spark Application: Completed Stages: Number of completed stages, by job.
  - Spark Memory Limits By Jobs: RAM limit, by job.
  - Spark Used Memory By Jobs: RAM used, by job.
  - Spark Application: Completed Tasks: Number of completed tasks, by job.
  - Spark Application: Failed Stages: Number of failed stages, by job.
  - Spark Application: Waiting Stages: Number of pending stages, by job.
  - Spark Application: Failed Tasks: Number of failed tasks, by job.
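The CPU Resources, Available CPU, Memory Resources, and Available Memory charts above report closely related values. The sketch below shows one plausible way they fit together; treating Available as Allocatable minus Allocated is an assumption based on the chart names, not something stated by the service, and all numbers are made up.

```python
# A minimal sketch of how the capacity/allocatable/allocated/available values
# shown in the resource charts are assumed to relate. The formulas are an
# interpretation of the chart names, not confirmed by the service documentation.
from dataclasses import dataclass

@dataclass
class PoolResources:
    capacity: float     # total resources on the hosts (some reserved for system needs)
    allocatable: float  # resources that containers are allowed to use
    allocated: float    # resources currently claimed by running containers

    @property
    def available(self) -> float:
        # Assumption: Available = Allocatable - Allocated.
        return self.allocatable - self.allocated

    @property
    def utilization(self) -> float:
        # Fraction of allocatable resources currently in use.
        return self.allocated / self.allocatable if self.allocatable else 0.0

# Hypothetical example: 32 vCPUs total, 28 allocatable, 21 allocated.
cpu = PoolResources(capacity=32, allocatable=28, allocated=21)
print(cpu.available)              # -> 7
print(round(cpu.utilization, 2))  # -> 0.75
```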
Alert settings in Yandex Monitoring
To configure cluster state indicator alerts:
- In the management console, select the folder with the cluster for which you want to configure alerts.
- In the list of services, select Monitoring.
- Under Service dashboards, select Managed Service for Apache Spark™ — Cluster Overview.
- In the chart you need, click the icon and select Create alert.
- If the chart shows multiple metrics, select a data query to generate a metric and click Continue. You can learn more about the query language in the Yandex Monitoring documentation.
- Set the Alarm and Warning threshold values to trigger the alert.
- Click Create alert.
To have other cluster health indicators monitored automatically:
- Create an alert.
- Add a status metric.
- In the alert parameters, set the alert thresholds.
For a complete list of supported metrics, see this Monitoring article.
Cluster state and status
The State of a cluster shows the health of its hosts, while the Status shows whether the cluster is started, stopped, or at an intermediate stage.
To view the state and status of a cluster:
- Go to the folder page and select Managed Service for Apache Spark™.
- In the cluster row, hover over the indicator in the Availability column.
Cluster states
| State | Description | Suggested actions |
|---|---|---|
| ALIVE | Cluster is operating normally. | No action is required. |
| DEGRADED | Cluster is not running at its full capacity: the state of at least one of the hosts is other than ALIVE. | Run the diagnostics. |
| DEAD | The cluster is down: none of its hosts are running. | Make a support request. |
| UNKNOWN | Cluster state is unknown. | Make a support request. |
Cluster statuses
| Status | Description | Suggested actions |
|---|---|---|
| CREATING | Preparing for the first start | Wait a while and get started. The time it takes to create a cluster depends on the host class. |
| RUNNING | The cluster is operating normally | No action is required. |
| STOPPING | The cluster is stopping | After a while, the cluster status will switch to STOPPED and the cluster will be disabled. No action is required. |
| STOPPED | The cluster is stopped | Start the cluster to get it running again. |
| STARTING | Starting the cluster that was stopped earlier | After a while, the cluster status will switch to RUNNING. Wait a while and get started. |
| UPDATING | Updating the cluster's configuration | Once the update is complete, the cluster will get the status it had prior to the update: RUNNING or STOPPED. |
| ERROR | Error when performing an operation with the cluster or during a maintenance window | If the cluster remains in this status for a long time, contact support. |
| STATUS_UNKNOWN | The cluster is unable to determine its status | If the cluster remains in this status for a long time, contact support. |
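If you prefer to track these indicators outside the console, the sketch below shows one way to turn the state and status values from the tables above into simple notifications. It is only an illustration: get_cluster() is a hypothetical placeholder, not a function from any Yandex Cloud SDK.

```python
# A minimal sketch of automated health checking based on the state and status
# values described above. get_cluster() is a hypothetical stand-in for your
# own API client; it is not part of any SDK.
import time

def get_cluster(cluster_id: str) -> dict:
    """Stand-in for a real API call; replace with your own client code."""
    return {"state": "ALIVE", "status": "RUNNING"}

def watch_cluster(cluster_id: str, checks: int = 3, interval_s: int = 60) -> None:
    """Poll the cluster a few times and report anything that needs attention."""
    for _ in range(checks):
        cluster = get_cluster(cluster_id)
        state, status = cluster["state"], cluster["status"]
        if state != "ALIVE":
            print(f"{cluster_id}: state is {state}; run diagnostics or contact support")
        if status in ("ERROR", "STATUS_UNKNOWN"):
            print(f"{cluster_id}: status is {status}; contact support if it persists")
        elif status not in ("RUNNING", "STOPPED"):
            print(f"{cluster_id}: transitional status {status}; waiting")
        time.sleep(interval_s)

watch_cluster("my-spark-cluster", checks=1, interval_s=0)
```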