
Monitoring the state of Spark applications

Written by Yandex Cloud
Updated at January 29, 2026

To evaluate the performance of Spark applications in a Yandex Data Processing cluster, you can check the following:

  • Application list
  • Application logs
  • Application queue
  • Application details
  • Resources allocated to the application
  • Persisted RDDs
  • List of SQL queries and their execution plans

Note

Make sure the cluster has the component web interfaces enabled. If not, enable them.

Checking the application list

  1. Open the folder dashboard.
  2. Go to Yandex Data Processing.
  3. Click the name of your cluster.
  4. Under UI Proxy, select YARN Resource Manager Web UI.

The page shows information about all running and completed applications.
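
The same list is available programmatically through the standard Hadoop ResourceManager REST API (ws/v1/cluster/apps). A minimal sketch follows; the host name is a placeholder for your cluster's master host (8088 is the default ResourceManager web port), so adjust it to however you reach the cluster, e.g., over SSH or the UI proxy.

```python
# A minimal sketch: listing YARN applications over the standard
# ResourceManager REST API instead of the web UI.
import json
import urllib.request

RM_URL = "http://<master-host>:8088"  # assumption: ResourceManager is reachable at this address

with urllib.request.urlopen(f"{RM_URL}/ws/v1/cluster/apps") as resp:
    data = json.load(resp)

for app in (data.get("apps") or {}).get("app", []):
    # Roughly the same columns the YARN Resource Manager Web UI shows.
    print(app["id"], app["name"], app["state"], app["finalStatus"])
```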

Checking application logs

  1. Open the folder dashboard.

  2. Go to Yandex Data Processing.

  3. Click the name of your cluster.

  4. Under UI Proxy, select YARN Resource Manager Web UI.

  5. Find the application you need and click its ID in the ID column.

    This will open a window with info on the application's performance and a table listing its run attempts.

  6. Click the link next to the attempt in question in the Logs column.
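
You can also fetch the same aggregated logs with the standard `yarn logs` CLI from the cluster's master host. A minimal sketch, with a hypothetical application ID (take a real one from the ID column of the web UI):

```python
# A minimal sketch: fetching aggregated application logs via the YARN CLI.
import subprocess

app_id = "application_1700000000000_0001"  # hypothetical application ID

# `yarn logs -applicationId <ID>` prints the aggregated container logs to stdout.
result = subprocess.run(
    ["yarn", "logs", "-applicationId", app_id],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```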

Checking the application queue

  1. Open the folder dashboard.
  2. Go to Yandex Data Processing.
  3. Click the name of your cluster.
  4. Under UI Proxy, select YARN Resource Manager Web UI.
  5. In the left-hand menu, navigate to Scheduler.

The Application Queues section shows the queue of applications and the resources they use.
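
The same queue information is exposed by the standard ResourceManager REST API (ws/v1/cluster/scheduler). A minimal sketch; the host name is a placeholder, and the field layout below assumes the Capacity Scheduler:

```python
# A minimal sketch: reading queue capacity and usage over the ResourceManager REST API.
import json
import urllib.request

RM_URL = "http://<master-host>:8088"  # assumption: ResourceManager address

with urllib.request.urlopen(f"{RM_URL}/ws/v1/cluster/scheduler") as resp:
    info = json.load(resp)["scheduler"]["schedulerInfo"]

# For the Capacity Scheduler, child queues are listed under "queues" -> "queue".
for queue in (info.get("queues") or {}).get("queue", []):
    print(queue["queueName"], "used:", queue.get("usedCapacity"), "capacity:", queue.get("capacity"))
```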

Checking application details

You can check application details in the YARN Resource Manager Web UI or in the Spark History Server Web UI.

YARN Resource Manager Web UI:
  1. Open the folder dashboard.

  2. Go to Yandex Data Processing.

  3. Click the name of your cluster.

  4. Under UI Proxy, select YARN Resource Manager Web UI.

  5. Find the application in question and follow the link in the Tracking UI column. The link name depends on the application status:

    • ApplicationMaster for running applications
    • History for completed applications
Spark History Server Web UI:

  1. Open the folder dashboard.

  2. Go to Yandex Data Processing.

  3. Click the name of your cluster.

  4. Under UI Proxy, select Spark History Server Web UI.

    This will open the list of completed applications. To switch to the list of running applications, click Show incomplete applications at the bottom of the table.

  5. Find the application in question and follow the link in the App ID column.

This will open the Spark History Server Web UI window with details on the application you selected:

  • Event Timeline: History of job runs with info on added and removed executors.
  • Active Jobs: List of jobs being run or pending.
  • Completed Jobs: List of completed jobs.

For each job, the table specifies:

  • Start time (Submitted)
  • Duration
  • Stages: Succeeded/Total
  • Tasks: Succeeded/Total
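
The same per-job details can be read from the Spark History Server REST API (api/v1/applications/<app ID>/jobs). A minimal sketch; host name, port (18080 is the Spark default), and the application ID are placeholders:

```python
# A minimal sketch: listing a Spark application's jobs via the History Server REST API.
import json
import urllib.request

HISTORY_URL = "http://<master-host>:18080"  # assumption: Spark History Server address
app_id = "application_1700000000000_0001"   # hypothetical application ID

with urllib.request.urlopen(f"{HISTORY_URL}/api/v1/applications/{app_id}/jobs") as resp:
    jobs = json.load(resp)

for job in jobs:
    # Status, submission time, and completed vs. total tasks, as in the UI table.
    print(job["jobId"], job["status"], job["submissionTime"],
          f'{job["numCompletedTasks"]}/{job["numTasks"]}')
```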

Checking resources allocated to the application

  1. Open the folder dashboard.
  2. Go to Yandex Data Processing.
  3. Click the name of your cluster.
  4. Under UI Proxy, select Spark History Server Web UI.
  5. In the top menu, navigate to Executors.

The UI will display two tables:

  • Summary: High-level information, such as the number and status of executors and resources in use.
  • Executors: Information about each executor.

The tables specify the following:

  • Amount of resources available to each executor.
  • Number of running and completed tasks.
  • Task duration (Task Time), including the time spent on garbage collection (GC Time).
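
The same executor metrics are exposed by the Spark REST API (api/v1/applications/<app ID>/executors), which makes it easy to spot executors whose GC time is a large share of their task time. A minimal sketch with placeholder host, port, and application ID:

```python
# A minimal sketch: computing the GC share per executor from the Spark REST API.
import json
import urllib.request

HISTORY_URL = "http://<master-host>:18080"  # assumption: Spark History Server address
app_id = "application_1700000000000_0001"   # hypothetical application ID

with urllib.request.urlopen(f"{HISTORY_URL}/api/v1/applications/{app_id}/executors") as resp:
    executors = json.load(resp)

for ex in executors:
    task_time = ex["totalDuration"] or 1  # milliseconds spent running tasks
    gc_share = ex["totalGCTime"] / task_time
    print(ex["id"], f"tasks={ex['totalTasks']}", f"GC share={gc_share:.1%}")
```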

Tip

If garbage collection takes up a significant share of the task time:

  • Make sure you have enough memory allocated to the executor.
  • Configure the garbage collector manually. To learn how to do this, see the Apache Spark documentation.
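
A minimal sketch, assuming PySpark: raising executor memory and switching the executor JVMs to the G1 collector via standard Spark settings. The values are illustrative, not recommendations for your workload.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gc-tuning-example")
    # More heap per executor if GC Time is a large share of Task Time.
    .config("spark.executor.memory", "8g")
    # Extra JVM options for executors; here they select the G1 collector.
    .config("spark.executor.extraJavaOptions", "-XX:+UseG1GC")
    .getOrCreate()
)
```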

Checking persisted RDDs

  1. Open the folder dashboard.
  2. Go to Yandex Data Processing.
  3. Click the name of your cluster.
  4. Under UI Proxy, select Spark History Server Web UI.
  5. In the top menu, navigate to Storage.

The UI displays the list of persisted RDDs. For each RDD, it shows the memory and disk space used, as well as caching progress.

To view details, click the RDD name.
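
A minimal sketch, assuming PySpark: a dataset appears on the Storage page only after it has been persisted and an action has actually computed (and cached) its partitions.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-example").getOrCreate()

df = spark.range(10_000_000)
df.persist(StorageLevel.MEMORY_AND_DISK)  # cache in memory, spill to disk if needed
df.count()  # the action triggers computation, so the cached blocks show up under Storage
```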

Checking the list of SQL queries and their execution plans

  1. Open the folder dashboard.
  2. Go to Yandex Data Processing.
  3. Click the name of your cluster.
  4. Under UI Proxy, select Spark History Server Web UI.
  5. In the top menu, navigate to SQL.

The table lists executed SQL queries, including their start time and duration.

To see a query's execution plan, click the query text in the Description column. The plan is displayed as a flowchart; to view it as text, click Details at the bottom of the figure.

The query execution plan contains stats for each operator along with the number of completed tasks and their duration. If the query is still running, the current stats will be shown.
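
A minimal sketch, assuming PySpark: the plan rendered as a flowchart on the SQL page can also be printed as text directly from the application code.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("explain-example").getOrCreate()

df = spark.range(1_000).groupBy((F.col("id") % 10).alias("bucket")).count()

# Prints the logical and physical plans, the textual counterpart of the chart.
df.explain(True)
```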
