Managing Spark jobs
Create a job
Warning
Once created, the job will run automatically.
To create a job:
- Navigate to the folder dashboard and select Managed Service for Apache Spark™.
- Click the name of your cluster and open the Jobs tab.
- Click Create job.
- Enter the job name.
- In the Job type field, select Spark.
- In the Main jar field, specify the path to the application's main JAR file in the following format (see the example paths after this list):

  | File location | Path format |
  |---|---|
  | Instance file system | `file:///<file_path>` |
  | Object Storage bucket | `s3a://<bucket_name>/<file_path>` |
  | Internet | `http://<path_to_file>` or `https://<path_to_file>` |

  Archives in standard Linux formats, such as `zip`, `gz`, `xz`, and `bz2`, are supported.

  The cluster service account needs read access to all the files in the bucket. Step-by-step guides on setting up access to Object Storage are provided in Editing a bucket ACL.

- In the Main class field, specify the name of the main application class.
- Specify job arguments.

  If an argument, variable, or property consists of several space-separated parts, specify each part separately while preserving the order in which the arguments, variables, and properties are declared. For instance, the `-n 1000` argument must be split into two arguments, `-n` and `1000`, in that order.

- Optionally, specify the paths to JAR files, if any.
- Optionally, configure advanced settings:
  - Specify paths to the required files and archives.
  - In the Properties field, specify component properties as `key-value` pairs.
  - Specify the coordinates of included and excluded Maven packages, as well as URLs of additional repositories for package search.
- Click Submit job.
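To make the path formats above concrete, here are hypothetical examples of valid Main jar values for each file location (the bucket, host, and file names are illustrative, not part of the service):

```bash
# Hypothetical Main jar paths; all names below are made up for illustration.
# Instance file system:
MAIN_JAR="file:///opt/jobs/my-app.jar"
# Object Storage bucket (the cluster service account needs read access):
MAIN_JAR="s3a://my-bucket/jobs/my-app.jar"
# Internet:
MAIN_JAR="https://example.com/artifacts/my-app.jar"
```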
To create a job via the gRPC API:

- Get an IAM token for API authentication and save it as an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume the repository contents are stored in the `~/cloudapi/` directory.

- Use the JobService.Create call and send the following request, e.g., via gRPCurl:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
      "cluster_id": "<cluster_ID>",
      "name": "<job_name>",
      "spark_job": {
        "args": [<list_of_arguments>],
        "jar_file_uris": [<list_of_paths_to_JAR_files>],
        "file_uris": [<list_of_paths_to_files>],
        "archive_uris": [<list_of_paths_to_archives>],
        "properties": {<list_of_properties>},
        "main_jar_file_uri": "<path_to_main_JAR_file>",
        "main_class": "<main_class_name>",
        "packages": [<list_of_package_Maven_coordinates>],
        "repositories": [<URLs_of_repositories_for_package_search>],
        "exclude_packages": [<list_of_Maven_coordinates_of_excluded_packages>]
      }
    }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.Create
  ```

  Where:
  - `name`: Spark job name.
  - `spark_job`: Spark job parameters:
    - `args`: Job arguments.
    - `jar_file_uris`: Paths to JAR files.
    - `file_uris`: Paths to files.
    - `archive_uris`: Paths to archives.
    - `properties`: Component properties as `key:value` pairs.
    - `main_jar_file_uri`: Path to the application's main JAR file in the following format:

      | File location | Path format |
      |---|---|
      | Instance file system | `file:///<file_path>` |
      | Object Storage bucket | `s3a://<bucket_name>/<file_path>` |
      | Internet | `http://<path_to_file>` or `https://<path_to_file>` |

      Archives in standard Linux formats, such as `zip`, `gz`, `xz`, and `bz2`, are supported.

      The cluster service account needs read access to all the files in the bucket. Step-by-step guides on setting up access to Object Storage are provided in Editing a bucket ACL.

    - `main_class`: Main class name.
    - `packages`: Maven coordinates of the JAR files in `groupId:artifactId:version` format.
    - `repositories`: URLs of additional repositories for package search.
    - `exclude_packages`: Maven coordinates of the packages to exclude, in `groupId:artifactId` format.

  You can get the cluster ID with the list of clusters in the folder.

- View the server response to make sure your request was successful.
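For reference, here is what a filled-in request might look like. This is a minimal sketch: the cluster ID, job name, bucket, and class names are all hypothetical, the optional fields from the template above are omitted, and the two-part argument `-n 1000` is split as described in the console instructions:

```bash
# Hypothetical JobService.Create request; all IDs and names are illustrative.
grpcurl \
  -format json \
  -import-path ~/cloudapi/ \
  -import-path ~/cloudapi/third_party/googleapis/ \
  -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
  -rpc-header "Authorization: Bearer $IAM_TOKEN" \
  -d '{
    "cluster_id": "c9q8ml85r1oh********",
    "name": "word-count",
    "spark_job": {
      "args": ["-n", "1000"],
      "main_jar_file_uri": "s3a://my-bucket/jobs/word-count.jar",
      "main_class": "com.example.WordCount"
    }
  }' \
  spark.api.cloud.yandex.net:443 \
  yandex.cloud.spark.v1.JobService.Create
```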
Cancel a job
Note
You cannot cancel jobs with the `ERROR`, `DONE`, or `CANCELLED` status. To find out a job's status, retrieve a list of jobs in the cluster.
- Navigate to the folder dashboard and select Managed Service for Apache Spark™.
- Click the name of your cluster and open the Jobs tab.
- Click the job name.
- Click Cancel in the top-right corner of the page.
- In the window that opens, select Cancel job.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To cancel a job, do the following:
- View the description of the CLI command for canceling a job:

  ```bash
  yc managed-spark job cancel --help
  ```

- Cancel a job by running this command:

  ```bash
  yc managed-spark job cancel <job_name_or_ID> \
    --cluster-id <cluster_ID>
  ```

  You can get the cluster ID with the list of clusters in the folder. You can get the job name and ID with the list of cluster jobs.
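For example, with hypothetical IDs in the masked style used elsewhere on this page:

```bash
# Cancel job c9q9veov4uql******** in cluster c9q8ml85r1oh******** (both IDs are illustrative).
yc managed-spark job cancel c9q9veov4uql******** \
  --cluster-id c9q8ml85r1oh********
```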
To cancel a job via the gRPC API:

- Get an IAM token for API authentication and save it as an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume the repository contents are stored in the `~/cloudapi/` directory.

- Use the JobService.Cancel call and send the following request, e.g., via gRPCurl:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
      "cluster_id": "<cluster_ID>",
      "job_id": "<job_ID>"
    }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.Cancel
  ```

  You can get the cluster ID with the list of folder clusters, and the job ID, with the list of cluster jobs.

- View the server response to make sure your request was successful.
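If you cancel jobs often, the grpcurl boilerplate can be wrapped in a small shell function. This is a convenience sketch only; the function name and the IDs in the usage line are mine, not part of the service:

```bash
# Hypothetical helper: cancel a Spark job given cluster and job IDs.
# Relies on the IAM_TOKEN variable and the ~/cloudapi/ checkout set up above.
spark_job_cancel() {
  local cluster_id="$1" job_id="$2"
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d "{ \"cluster_id\": \"$cluster_id\", \"job_id\": \"$job_id\" }" \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.Cancel
}

# Usage (both IDs are illustrative):
spark_job_cancel c9q8ml85r1oh******** c9q9veov4uql********
```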
Get a list of jobs
- Navigate to the folder dashboard and select Managed Service for Apache Spark™.
- Click the name of your cluster and open the Jobs tab.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To get a list of cluster jobs:
- See the description of the CLI command for getting a list of jobs:

  ```bash
  yc managed-spark job list --help
  ```

- Get the list of jobs by running this command:

  ```bash
  yc managed-spark job list \
    --cluster-id <cluster_ID>
  ```

  You can get the cluster ID with the list of clusters in the folder.
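If you need machine-readable output, e.g., for scripting, the yc CLI's global `--format` flag accepts `json` (the cluster ID below is hypothetical):

```bash
# List jobs as JSON instead of the default table output (cluster ID is illustrative).
yc managed-spark job list \
  --cluster-id c9q8ml85r1oh******** \
  --format json
```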
To get a list of jobs via the gRPC API:

- Get an IAM token for API authentication and save it as an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume the repository contents are stored in the `~/cloudapi/` directory.

- Use the JobService.List call and send the following request, e.g., via gRPCurl:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
      "cluster_id": "<cluster_ID>"
    }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.List
  ```

  You can get the cluster ID with the list of clusters in the folder.

- View the server response to make sure your request was successful.
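To extract just the job IDs from the response, you can filter it with jq. This sketch assumes jq is installed and that the response wraps the jobs in a `jobs` array whose items have an `id` field; both field names are inferred rather than documented on this page, so verify them against your actual output:

```bash
# Print one job ID per line (cluster ID is illustrative; requires jq;
# the ".jobs[].id" path is an assumption -- check your actual response).
grpcurl \
  -format json \
  -import-path ~/cloudapi/ \
  -import-path ~/cloudapi/third_party/googleapis/ \
  -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
  -rpc-header "Authorization: Bearer $IAM_TOKEN" \
  -d '{ "cluster_id": "c9q8ml85r1oh********" }' \
  spark.api.cloud.yandex.net:443 \
  yandex.cloud.spark.v1.JobService.List | jq -r '.jobs[].id'
```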
Get general info about a job
- Navigate to the folder dashboard and select Managed Service for Apache Spark™.
- Click the name of your cluster and open the Jobs tab.
- Click the job name.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To get information about a job:
- View the description of the CLI command for getting information about a job:

  ```bash
  yc managed-spark job get --help
  ```

- Get information about the job by running this command:

  ```bash
  yc managed-spark job get <job_ID> \
    --cluster-id <cluster_ID>
  ```

  You can get the cluster ID with the list of clusters in the folder. You can get the job ID with the list of cluster jobs.
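As with other yc commands, you can switch the output format via the global `--format` flag, e.g., to YAML (both IDs below are hypothetical):

```bash
# Fetch the job description as YAML (both IDs are illustrative).
yc managed-spark job get c9q9veov4uql******** \
  --cluster-id c9q8ml85r1oh******** \
  --format yaml
```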
To get information about a job via the gRPC API:

- Get an IAM token for API authentication and save it as an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume the repository contents are stored in the `~/cloudapi/` directory.

- Use the JobService.Get call and send the following request, e.g., via gRPCurl:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
      "cluster_id": "<cluster_ID>",
      "job_id": "<job_ID>"
    }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.Get
  ```

  You can get the cluster ID with the list of folder clusters, and the job ID, with the list of cluster jobs.

- View the server response to make sure your request was successful.
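One common use of JobService.Get is waiting for a job to finish. Below is a hedged sketch of a polling loop: it assumes jq is installed, that the response carries a top-level `status` field (an assumption worth verifying against your actual output), and that `ERROR`, `DONE`, and `CANCELLED` are the terminal statuses, as the cancellation note above suggests; the IDs are hypothetical:

```bash
# Poll a job every 10 seconds until it reaches a terminal status.
# Assumptions: jq is installed; the Get response has a top-level "status"
# field; ERROR, DONE, and CANCELLED are terminal. IDs are illustrative.
while true; do
  STATUS=$(grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{ "cluster_id": "c9q8ml85r1oh********", "job_id": "c9q9veov4uql********" }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.Get | jq -r '.status')
  echo "Job status: $STATUS"
  case "$STATUS" in
    ERROR|DONE|CANCELLED) break ;;
  esac
  sleep 10
done
```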
Get job execution logs
Warning
To get job execution logs, enable logging in your cluster when creating it.
- Navigate to the folder dashboard and select Managed Service for Apache Spark™.
- Click the name of your cluster and open the Jobs tab.
- Click the job name.
- In the Output logs field, click the link.
If you do not have the Yandex Cloud CLI installed yet, install and initialize it.
By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
To get job execution logs:
- See the description of the CLI command for getting job logs:

  ```bash
  yc managed-spark job log --help
  ```

- Get job logs by running this command:

  ```bash
  yc managed-spark job log <job_ID> \
    --cluster-id <cluster_ID>
  ```

  You can get the cluster ID with the list of clusters in the folder. You can get the job ID with the list of cluster jobs.

  To get logs for multiple jobs, list their IDs separated by spaces, e.g.:

  ```bash
  yc managed-spark job log c9q9veov4uql******** c9qu8uftedte******** \
    --cluster-id c9q8ml85r1oh********
  ```
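Since the command writes logs to standard output, you can save them with an ordinary shell redirect (both IDs are hypothetical):

```bash
# Save a job's logs to a local file (both IDs are illustrative).
yc managed-spark job log c9q9veov4uql******** \
  --cluster-id c9q8ml85r1oh******** > spark-job.log
```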
To get job execution logs via the gRPC API:

- Get an IAM token for API authentication and save it as an environment variable:

  ```bash
  export IAM_TOKEN="<IAM_token>"
  ```

- Clone the cloudapi repository:

  ```bash
  cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
  ```

  Below, we assume the repository contents are stored in the `~/cloudapi/` directory.

- Use the JobService.ListLog call and send the following request, e.g., via gRPCurl:

  ```bash
  grpcurl \
    -format json \
    -import-path ~/cloudapi/ \
    -import-path ~/cloudapi/third_party/googleapis/ \
    -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
    -rpc-header "Authorization: Bearer $IAM_TOKEN" \
    -d '{
      "cluster_id": "<cluster_ID>",
      "job_id": "<job_ID>"
    }' \
    spark.api.cloud.yandex.net:443 \
    yandex.cloud.spark.v1.JobService.ListLog
  ```

  You can get the cluster ID with the list of clusters in the folder, and the job ID, with the list of cluster jobs.

- View the server response to make sure your request was successful.
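To keep a local copy of the logs returned by the call, redirect the response to a file; this mirrors the request above with hypothetical IDs substituted:

```bash
# Save the raw ListLog response to a file (IDs are illustrative).
grpcurl \
  -format json \
  -import-path ~/cloudapi/ \
  -import-path ~/cloudapi/third_party/googleapis/ \
  -proto ~/cloudapi/yandex/cloud/spark/v1/job_service.proto \
  -rpc-header "Authorization: Bearer $IAM_TOKEN" \
  -d '{ "cluster_id": "c9q8ml85r1oh********", "job_id": "c9q9veov4uql********" }' \
  spark.api.cloud.yandex.net:443 \
  yandex.cloud.spark.v1.JobService.ListLog > job-log.json
```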