DataSphere Jobs API v2, gRPC: ProjectJobService.List
Lists jobs.
gRPC request
rpc List (ListProjectJobRequest) returns (ListProjectJobResponse)
ListProjectJobRequest
{
"project_id": "string",
"page_size": "int64",
"page_token": "string",
"filter": "string"
}
Field | Description
project_id | string ID of the project.
page_size | int64 The maximum number of results per page to return. If the number of available results is larger than page_size, the service returns a next_page_token that can be used to get the next page of results in subsequent list requests.
page_token | string Page token. To get the next page of results, set page_token to the next_page_token returned by a previous list request.
filter | string Filter expression for the listed jobs.
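A minimal call sketch, not an official snippet: the generated module paths, the endpoint, and the IAM-token authorization below are assumptions based on the proto package yandex.cloud.datasphere.v2.jobs and common Yandex Cloud API conventions; verify them against your installed SDK.

# Sketch: list jobs in a project via the gRPC stub (paths/endpoint are assumptions).
import grpc
from yandex.cloud.datasphere.v2.jobs import (
    project_job_service_pb2 as jobs_pb2,
    project_job_service_pb2_grpc as jobs_grpc,
)

IAM_TOKEN = "<IAM token>"                         # assumed auth: IAM token in call metadata
ENDPOINT = "datasphere.api.cloud.yandex.net:443"  # assumed public API endpoint

channel = grpc.secure_channel(ENDPOINT, grpc.ssl_channel_credentials())
stub = jobs_grpc.ProjectJobServiceStub(channel)

response = stub.List(
    jobs_pb2.ListProjectJobRequest(project_id="<project ID>", page_size=50),
    metadata=(("authorization", f"Bearer {IAM_TOKEN}"),),
)
for job in response.jobs:
    print(job.id, job.name, job.status)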
ListProjectJobResponse
{
"jobs": [
{
"id": "string",
"name": "string",
"desc": "string",
"created_at": "google.protobuf.Timestamp",
"finished_at": "google.protobuf.Timestamp",
"status": "JobStatus",
"config": "string",
"created_by_id": "string",
"project_id": "string",
"job_parameters": {
"input_files": [
{
"desc": {
"path": "string",
"var": "string"
},
"sha256": "string",
"size_bytes": "int64",
"compression_type": "FileCompressionType"
}
],
"output_files": [
{
"path": "string",
"var": "string"
}
],
"s3_mount_ids": [
"string"
],
"dataset_ids": [
"string"
],
"cmd": "string",
"env": {
"vars": "map<string, string>",
// Includes only one of the fields `docker_image_resource_id`, `docker_image_spec`
"docker_image_resource_id": "string",
"docker_image_spec": {
"image_url": "string",
"username": "string",
// Includes only one of the fields `password_plain_text`, `password_ds_secret_name`
"password_plain_text": "string",
"password_ds_secret_name": "string"
// end of the list of possible fields
},
// end of the list of possible fields
"python_env": {
"conda_yaml": "string",
"local_modules": [
{
"desc": {
"path": "string",
"var": "string"
},
"sha256": "string",
"size_bytes": "int64",
"compression_type": "FileCompressionType"
}
],
"python_version": "string",
"requirements": [
"string"
],
"pip_options": {
"index_url": "string",
"extra_index_urls": [
"string"
],
"trusted_hosts": [
"string"
],
"no_deps": "bool"
}
}
},
"attach_project_disk": "bool",
"cloud_instance_types": [
{
"name": "string"
}
],
"extended_working_storage": {
"type": "StorageType",
"size_gb": "int64"
},
"arguments": [
{
"name": "string",
"value": "string"
}
],
"output_datasets": [
{
"name": "string",
"description": "string",
"labels": "map<string, string>",
"size_gb": "int64",
"var": "string"
}
],
"graceful_shutdown_parameters": {
"timeout": "google.protobuf.Duration",
"signal": "int64"
},
"spark_parameters": {
"connector_id": "string"
}
},
"data_expires_at": "google.protobuf.Timestamp",
"data_cleared": "bool",
"output_files": [
{
"desc": {
"path": "string",
"var": "string"
},
"sha256": "string",
"size_bytes": "int64",
"compression_type": "FileCompressionType"
}
],
"log_files": [
{
"desc": {
"path": "string",
"var": "string"
},
"sha256": "string",
"size_bytes": "int64",
"compression_type": "FileCompressionType"
}
],
"diagnostic_files": [
{
"desc": {
"path": "string",
"var": "string"
},
"sha256": "string",
"size_bytes": "int64",
"compression_type": "FileCompressionType"
}
],
"data_size_bytes": "int64",
"started_at": "google.protobuf.Timestamp",
"status_details": "string",
"actual_cloud_instance_type": {
"name": "string"
},
"parent_job_id": "string",
"file_errors": [
{
// Includes only one of the fields `output_file_desc`, `log_file_name`
"output_file_desc": {
"path": "string",
"var": "string"
},
"log_file_name": "string",
// end of the list of possible fields
"description": "string",
"type": "ErrorType"
}
],
"output_datasets": [
{
"desc": {
"name": "string",
"description": "string",
"labels": "map<string, string>",
"size_gb": "int64",
"var": "string"
},
"id": "string"
}
]
}
],
"next_page_token": "string"
}
Field | Description
jobs[] | Job Instances of the jobs.
next_page_token | string This token allows you to get the next page of results for list requests. If the number of results is larger than page_size, use next_page_token as the value for the page_token parameter in the next list request.
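A paging sketch built directly on the page_size, page_token, and next_page_token fields above; it reuses the stub, request module, and auth metadata assumed in the previous sketch.

# Sketch: request pages until next_page_token comes back empty.
def list_all_jobs(stub, project_id, metadata, page_size=100):
    page_token = ""
    while True:
        resp = stub.List(
            jobs_pb2.ListProjectJobRequest(
                project_id=project_id,
                page_size=page_size,
                page_token=page_token,
            ),
            metadata=metadata,
        )
        yield from resp.jobs
        page_token = resp.next_page_token
        if not page_token:  # an empty token means this was the last page
            break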
Job
Instance of the job.
Field | Description
id | string ID of the job.
name | string Name of the job.
desc | string Description of the job.
created_at | google.protobuf.Timestamp Job creation timestamp.
finished_at | google.protobuf.Timestamp Job finish timestamp.
status | enum JobStatus Status of the job.
config | string Config of the job, copied from the configuration file.
created_by_id | string ID of the user who created the job.
project_id | string ID of the project.
job_parameters | JobParameters Job parameters.
data_expires_at | google.protobuf.Timestamp Job data expiration timestamp.
data_cleared | bool Marks whether the job data has been cleared.
output_files[] | File Output files of the job.
log_files[] | File Job log files.
diagnostic_files[] | File Job diagnostic files.
data_size_bytes | int64 Total size of the job data, in bytes.
started_at | google.protobuf.Timestamp Job start timestamp.
status_details | string Details of the current job status.
actual_cloud_instance_type | CloudInstanceType Actual VM instance type the job is running on.
parent_job_id | string Reference to the parent job.
file_errors[] | FileUploadError Failed uploads.
output_datasets[] | OutputDataset Created datasets.
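A sketch of reading a returned Job. The timestamp fields are google.protobuf.Timestamp messages (ToDatetime() yields a naive UTC datetime), status is a JobStatus enum value, and the presence checks assume standard proto3 semantics for singular message fields.

# Sketch: summarize one Job from the List response.
def summarize(job):
    created = job.created_at.ToDatetime()
    started = job.started_at.ToDatetime() if job.HasField("started_at") else None
    finished = job.finished_at.ToDatetime() if job.HasField("finished_at") else None
    print(f"{job.id} {job.name!r}: status={job.status} ({job.status_details})")
    print(f"  created={created} started={started} finished={finished}")
    if job.data_cleared:
        print("  job data has been cleared")
    elif job.HasField("data_expires_at"):
        print("  job data expires at", job.data_expires_at.ToDatetime())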
JobParameters
Job parameters.
Field | Description
input_files[] | File List of input files.
output_files[] | FileDesc List of output file descriptions.
s3_mount_ids[] | string List of DataSphere S3 mount IDs.
dataset_ids[] | string List of DataSphere dataset IDs.
cmd | string Job run command.
env | Environment Job environment description.
attach_project_disk | bool Whether the project disk should be attached to the VM.
cloud_instance_types[] | CloudInstanceType VM specifications.
extended_working_storage | ExtendedWorkingStorage Extended working storage configuration.
arguments[] | Argument List of literal arguments.
output_datasets[] | OutputDatasetDesc List of dataset descriptions to create.
graceful_shutdown_parameters | GracefulShutdownParameters Graceful shutdown settings.
spark_parameters | SparkParameters Spark connector settings.
File
Field | Description
desc | FileDesc Description of the file.
sha256 | string SHA256 hash of the file.
size_bytes | int64 File size in bytes.
compression_type | enum FileCompressionType File compression info.
FileDesc
Field | Description
path | string Path of the file on the filesystem.
var | string Variable to use in cmd substitution.
Environment
Field | Description
vars | object (map<string, string>) Environment variables.
docker_image_resource_id | string ID of the DataSphere Docker image. Includes only one of the fields docker_image_resource_id, docker_image_spec.
docker_image_spec | DockerImageSpec Docker image specification. Includes only one of the fields docker_image_resource_id, docker_image_spec.
python_env | PythonEnv Python environment description.
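Because docker_image_resource_id and docker_image_spec form a oneof, at most one of them is set on a returned Environment. A sketch of checking which one; the oneof group name "docker_image" is a guess, so check the generated descriptor (Environment.DESCRIPTOR.oneofs) for the actual name.

# Sketch: inspect the Docker image oneof of an Environment message.
def describe_image(env):
    which = env.WhichOneof("docker_image")  # assumption: oneof group is named "docker_image"
    if which == "docker_image_resource_id":
        return f"DataSphere Docker image {env.docker_image_resource_id}"
    if which == "docker_image_spec":
        return f"external image {env.docker_image_spec.image_url}"
    return "default image"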
DockerImageSpec
Field | Description
image_url | string Docker image URL.
username | string Username for the container registry.
password_plain_text | string Plaintext password for the container registry. Includes only one of the fields password_plain_text, password_ds_secret_name.
password_ds_secret_name | string ID of the DataSphere secret containing the password for the container registry. Includes only one of the fields password_plain_text, password_ds_secret_name.
PythonEnv
Field | Description
conda_yaml | string Conda YAML.
local_modules[] | File List of local module descriptions.
python_version | string Python version reduced to major.minor.
requirements[] | string List of pip requirements.
pip_options | PipOptions Pip install options.
PipOptions
Field | Description
index_url | string --index-url option.
extra_index_urls[] | string --extra-index-urls option.
trusted_hosts[] | string --trusted-hosts option.
no_deps | bool --no-deps option.
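For orientation only, this sketch shows how the PipOptions fields correspond to pip install command-line flags; it does not claim to reproduce how DataSphere invokes pip internally. Note that pip's actual repeatable flags are the singular --extra-index-url and --trusted-host.

# Sketch: map a PipOptions message onto pip install arguments.
def pip_install_args(opts):
    args = []
    if opts.index_url:
        args += ["--index-url", opts.index_url]
    for url in opts.extra_index_urls:
        args += ["--extra-index-url", url]   # pip's flag is singular
    for host in opts.trusted_hosts:
        args += ["--trusted-host", host]     # pip's flag is singular
    if opts.no_deps:
        args.append("--no-deps")
    return args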
CloudInstanceType
Field | Description
name | string Name of the DataSphere VM configuration.
ExtendedWorkingStorage
Extended working storage configuration.
Field | Description
type | enum StorageType Type of the extended working storage.
size_gb | int64 Size of the extended working storage, in gigabytes.
Argument
Field | Description
name | string Name of the argument.
value | string Value of the argument.
OutputDatasetDesc
Field | Description
name | string Name to create the dataset with.
description | string Description to show in the UI.
labels | object (map<string, string>) Labels of the dataset.
size_gb | int64 Size of the dataset to create, in gigabytes.
var | string Var name to replace in cmd, like in FileDesc.
GracefulShutdownParameters
Field | Description
timeout | google.protobuf.Duration Graceful shutdown timeout.
signal | int64 Signal to send on shutdown; default is 15 (SIGTERM).
SparkParameters
Field | Description
connector_id | string ID of the Spark connector.
FileUploadError
Field | Description
output_file_desc | FileDesc Description of the output file that failed to upload. Includes only one of the fields output_file_desc, log_file_name.
log_file_name | string Name of the log file that failed to upload. Includes only one of the fields output_file_desc, log_file_name.
description | string Description of the error.
type | enum ErrorType Type of the error.
OutputDataset
Field | Description
desc | OutputDatasetDesc Dataset description.
id | string ID of the created dataset.