Yandex Managed Service for Apache Spark™

Managed Spark API, gRPC: JobService.Get

Written by Yandex Cloud
Updated on September 25, 2025

In this article:
  • gRPC request
  • GetJobRequest
  • Job
  • SparkJob
  • PysparkJob
  • SparkConnectJob

Returns the specified Spark job.

gRPC request

rpc Get (GetJobRequest) returns (Job)

GetJobRequest

{
  "cluster_id": "string",
  "job_id": "string"
}

Field | Type | Description
cluster_id | string | Required field. ID of the Spark cluster.
job_id | string | Required field. ID of the Spark job to return.
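
For illustration, here is a minimal Python sketch of calling this method over gRPC. The stub module names are assumptions (stubs generated from the yandex.cloud.spark.v1 proto files, for example with grpcio-tools), and the endpoint and IDs are placeholders to verify against your installation:

import grpc

# Illustrative module names: assumes Python stubs generated from the
# yandex.cloud.spark.v1 proto files (e.g. with grpcio-tools).
from yandex.cloud.spark.v1 import job_service_pb2, job_service_pb2_grpc

IAM_TOKEN = "<your IAM token>"               # placeholder
ENDPOINT = "spark.api.cloud.yandex.net:443"  # assumed endpoint; verify for your cloud

channel = grpc.secure_channel(ENDPOINT, grpc.ssl_channel_credentials())
stub = job_service_pb2_grpc.JobServiceStub(channel)

request = job_service_pb2.GetJobRequest(
    cluster_id="<cluster ID>",  # ID of the Spark cluster
    job_id="<job ID>",          # ID of the Spark job to return
)

# Yandex Cloud gRPC APIs take the IAM token in the authorization metadata.
job = stub.Get(request, metadata=[("authorization", f"Bearer {IAM_TOKEN}")])
print(job.name, job.status)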

Job

{
  "id": "string",
  "cluster_id": "string",
  "created_at": "google.protobuf.Timestamp",
  "started_at": "google.protobuf.Timestamp",
  "finished_at": "google.protobuf.Timestamp",
  "name": "string",
  "created_by": "string",
  "status": "Status",
  // Includes only one of the fields `spark_job`, `pyspark_job`, `spark_connect_job`
  "spark_job": {
    "args": [
      "string"
    ],
    "jar_file_uris": [
      "string"
    ],
    "file_uris": [
      "string"
    ],
    "archive_uris": [
      "string"
    ],
    "properties": "map<string, string>",
    "main_jar_file_uri": "string",
    "main_class": "string",
    "packages": [
      "string"
    ],
    "repositories": [
      "string"
    ],
    "exclude_packages": [
      "string"
    ]
  },
  "pyspark_job": {
    "args": [
      "string"
    ],
    "jar_file_uris": [
      "string"
    ],
    "file_uris": [
      "string"
    ],
    "archive_uris": [
      "string"
    ],
    "properties": "map<string, string>",
    "main_python_file_uri": "string",
    "python_file_uris": [
      "string"
    ],
    "packages": [
      "string"
    ],
    "repositories": [
      "string"
    ],
    "exclude_packages": [
      "string"
    ]
  },
  "spark_connect_job": {
    "jar_file_uris": [
      "string"
    ],
    "file_uris": [
      "string"
    ],
    "archive_uris": [
      "string"
    ],
    "properties": "map<string, string>",
    "packages": [
      "string"
    ],
    "repositories": [
      "string"
    ],
    "exclude_packages": [
      "string"
    ]
  },
  // end of the list of possible fields
  "ui_url": "string",
  "service_account_id": "string",
  "connect_url": "string"
}

Spark job.

Field | Type | Description
id | string | Required. Unique ID of the Spark job. This ID is assigned by the service when the Spark job is created.
cluster_id | string | Required. Unique ID of the Spark cluster.
created_at | google.protobuf.Timestamp | Time when the Spark job was created.
started_at | google.protobuf.Timestamp | Time when the Spark job was started.
finished_at | google.protobuf.Timestamp | Time when the Spark job finished.
name | string | Name of the Spark job.
created_by | string | ID of the user who created the job.
status | enum Status | Job status. One of: STATUS_UNSPECIFIED; PROVISIONING (job has been created and is waiting to be acquired); PENDING (job has been acquired and is waiting for execution); RUNNING (job is running); ERROR (job failed); DONE (job finished); CANCELLED (job was cancelled); CANCELLING (job is waiting for cancellation).
spark_job | SparkJob | Job specification. Only one of spark_job, pyspark_job, spark_connect_job is set.
pyspark_job | PysparkJob | Job specification. Only one of spark_job, pyspark_job, spark_connect_job is set.
spark_connect_job | SparkConnectJob | Job specification. Only one of spark_job, pyspark_job, spark_connect_job is set.
ui_url | string | Spark UI URL.
service_account_id | string | Service account used to access Cloud resources.
connect_url | string | Spark Connect URL.
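
Callers typically branch on status and on which of the three job specifications is set. Below is a short Python sketch, continuing from the one above, that polls Get until the job reaches a terminal state and then inspects the oneof. The job_pb2 module name and the job_spec oneof name are assumptions; check the generated stubs and the .proto file:

import time

from yandex.cloud.spark.v1 import job_pb2  # illustrative module name

# Terminal states per the Status enum above. Nested enum values are
# reachable through the Job message in protobuf-generated Python code.
TERMINAL = {job_pb2.Job.DONE, job_pb2.Job.ERROR, job_pb2.Job.CANCELLED}

while True:
    job = stub.Get(request, metadata=[("authorization", f"Bearer {IAM_TOKEN}")])
    if job.status in TERMINAL:
        break
    time.sleep(10)

# Exactly one of spark_job, pyspark_job, spark_connect_job is set.
spec = job.WhichOneof("job_spec")  # oneof name is an assumption
if spec == "spark_job":
    print("JVM job, main class:", job.spark_job.main_class)
elif spec == "pyspark_job":
    print("Python job, entry point:", job.pyspark_job.main_python_file_uri)
elif spec == "spark_connect_job":
    print("Spark Connect session at:", job.connect_url)

# Timestamp fields are google.protobuf.Timestamp messages.
print("created at:", job.created_at.ToDatetime())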

SparkJob

Field | Type | Description
args[] | string | Optional arguments to pass to the driver.
jar_file_uris[] | string | JAR file URIs to add to the classpaths of the Spark driver and tasks.
file_uris[] | string | URIs of files to be copied to the working directory of Spark drivers and distributed tasks.
archive_uris[] | string | URIs of archives to be extracted in the working directory of Spark drivers and tasks.
properties | map<string, string> | A mapping of property names to values, used to configure Spark.
main_jar_file_uri | string | URI of the JAR file containing the main class.
main_class | string | Name of the driver's main class.
packages[] | string | Maven coordinates of JARs to include on the driver and executor classpaths.
repositories[] | string | Additional remote repositories to search for the Maven coordinates given with --packages.
exclude_packages[] | string | groupId:artifactId pairs to exclude while resolving the dependencies provided in --packages, to avoid dependency conflicts.
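
These fields mirror the familiar spark-submit options: jar_file_uris, packages, repositories, and exclude_packages shape the classpath, file_uris and archive_uris are distributed to working directories, and main_jar_file_uri plus main_class name the entry point. As a purely illustrative aid (the service runs the job itself), a returned SparkJob maps roughly to:

def spark_submit_equivalent(sj) -> str:
    """Render a SparkJob message as a roughly equivalent spark-submit call."""
    parts = ["spark-submit", "--class", sj.main_class]
    if sj.jar_file_uris:
        parts += ["--jars", ",".join(sj.jar_file_uris)]
    if sj.file_uris:
        parts += ["--files", ",".join(sj.file_uris)]
    if sj.archive_uris:
        parts += ["--archives", ",".join(sj.archive_uris)]
    if sj.packages:
        parts += ["--packages", ",".join(sj.packages)]
    if sj.repositories:
        parts += ["--repositories", ",".join(sj.repositories)]
    if sj.exclude_packages:
        parts += ["--exclude-packages", ",".join(sj.exclude_packages)]
    for key, value in sj.properties.items():
        parts += ["--conf", f"{key}={value}"]
    return " ".join(parts + [sj.main_jar_file_uri] + list(sj.args))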

PysparkJob

Field | Type | Description
args[] | string | Optional arguments to pass to the driver.
jar_file_uris[] | string | JAR file URIs to add to the classpaths of the Spark driver and tasks.
file_uris[] | string | URIs of files to be copied to the working directory of Spark drivers and distributed tasks.
archive_uris[] | string | URIs of archives to be extracted in the working directory of Spark drivers and tasks.
properties | map<string, string> | A mapping of property names to values, used to configure Spark.
main_python_file_uri | string | URI of the main Python file to use as the driver. Must be a .py file.
python_file_uris[] | string | URIs of Python files to pass to the PySpark framework.
packages[] | string | Maven coordinates of JARs to include on the driver and executor classpaths.
repositories[] | string | Additional remote repositories to search for the Maven coordinates given with --packages.
exclude_packages[] | string | groupId:artifactId pairs to exclude while resolving the dependencies provided in --packages, to avoid dependency conflicts.
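
PysparkJob differs from SparkJob only in its entry point: main_python_file_uri replaces the JAR and main class, and python_file_uris corresponds to spark-submit's --py-files. Continuing the illustrative mapping above (shared fields such as packages and properties translate exactly as in spark_submit_equivalent):

def pyspark_submit_equivalent(pj) -> str:
    """Render a PysparkJob message as a roughly equivalent spark-submit call."""
    parts = ["spark-submit"]
    if pj.python_file_uris:
        parts += ["--py-files", ",".join(pj.python_file_uris)]
    if pj.jar_file_uris:
        parts += ["--jars", ",".join(pj.jar_file_uris)]
    for key, value in pj.properties.items():
        parts += ["--conf", f"{key}={value}"]
    # main_python_file_uri must be a .py file; remaining args follow it.
    return " ".join(parts + [pj.main_python_file_uri] + list(pj.args))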

SparkConnectJob

Field | Type | Description
jar_file_uris[] | string | JAR file URIs to add to the classpaths of the Spark driver and tasks.
file_uris[] | string | URIs of files to be copied to the working directory of Spark drivers and distributed tasks.
archive_uris[] | string | URIs of archives to be extracted in the working directory of Spark drivers and tasks.
properties | map<string, string> | A mapping of property names to values, used to configure Spark.
packages[] | string | Maven coordinates of JARs to include on the driver and executor classpaths.
repositories[] | string | Additional remote repositories to search for the Maven coordinates given with --packages.
exclude_packages[] | string | groupId:artifactId pairs to exclude while resolving the dependencies provided in --packages, to avoid dependency conflicts.
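
A SparkConnectJob has no entry point of its own: it provisions a Spark Connect server, and clients attach through the Job's connect_url. A minimal sketch with the standard PySpark client, assuming the returned connect_url is an sc://host:port address reachable from your machine (job is the message returned by Get above):

from pyspark.sql import SparkSession

# Attach a Spark Connect client to the server started by the job.
# Assumes job.connect_url is an sc://host:port address.
spark = SparkSession.builder.remote(job.connect_url).getOrCreate()
spark.range(10).show()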
