Working with Spark connectors
In DataSphere, you can use Spark connectors to work with existing or automatically created Yandex Data Processing clusters.
Getting started
To use Yandex Data Processing clusters, set the following project parameters:
- Default folder to enable integration with other Yandex Cloud services. A Yandex Data Processing cluster will be deployed in this folder based on the current cloud quotas. A fee for using the cluster will be debited from your cloud billing account.
- Service account to be used by DataSphere for creating and managing clusters. The service account needs the following roles:
  - `dataproc.agent` to use Yandex Data Processing clusters.
  - `dataproc.admin` to create clusters from Yandex Data Processing templates.
  - `vpc.user` to use the Yandex Data Processing cluster network.
  - `iam.serviceAccounts.user` to create resources in the folder on behalf of the service account.
- Subnet for DataSphere to communicate with the Yandex Data Processing cluster. Since the Yandex Data Processing cluster needs internet access, make sure to configure a NAT gateway in the subnet.
Note
If you specify a subnet in the project settings, allocating computing resources may take longer.
Warning
The Yandex Data Processing persistent cluster must have the `livy:livy.spark.deploy-mode : client` setting.
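Once a session on such a cluster is running, you can check the deploy mode directly from a notebook cell. This is a minimal sketch assuming the Spark connector environment provides a ready `spark` session; the property may be unset in some configurations, hence the fallback value:

```python
# `spark` is assumed to be provided by the Spark connector environment.
# Check which deploy mode the current Spark session runs in.
# Expected output for a correctly configured persistent cluster: "client".
print(spark.conf.get("spark.submit.deployMode", "unknown"))
```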
Creating a Spark connector
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Under Project resources, click Spark connector.
- Click Create connector.
- In the Name field, enter a name for the connector. The name format is as follows:
  - The name must be from 3 to 63 characters long.
  - It may contain uppercase and lowercase Latin and Russian letters, numbers, hyphens, underscores, and spaces.
  - The first character must be a letter. The last character cannot be a hyphen, underscore, or space.
- Under Yandex Data Processing cluster, choose which cluster you plan to work with:
  - Select cluster: Select an existing Yandex Data Processing cluster or click Create cluster in Yandex Data Processing to go to Yandex Data Processing and create a new one. The Yandex Data Processing persistent cluster must have the `livy:livy.spark.deploy-mode : client` setting.
  - Create temporary cluster: Select this option to create a temporary Yandex Data Processing cluster. A temporary cluster will be created the first time you run computations in the project notebook.
- (Optional) Under S3 settings, specify the static access key ID and the secret that stores the secret part of the static key for the S3 connector.
  S3 settings let you specify data for connecting to an S3 bucket. We recommend doing this when working with Object Storage buckets in any clusters, especially in Yandex Data Processing clusters without the HDFS option enabled (see the example after these steps).
- Under Spark settings, select Use default settings to use the default Yandex Data Processing cluster settings, or specify the Key and Value parameters to manually add or modify the Yandex Data Processing cluster settings.
  Tip
  For Yandex Data Processing clusters without the HDFS option enabled, set the additional parameter `spark.hadoop.fs.s3a.fast.upload.buffer=bytebuffer`.
  For your own clusters with the Spark Connect connection type, set the additional parameter `dataproc:spark-connect=enabled`.
- Click Create. You will see a page with detailed information about the created connector.
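If you filled in the S3 settings, the cluster can address Object Storage buckets via the `s3a://` scheme. The following is a minimal sketch of reading a file from a bucket in a notebook connected through the Spark connector; the bucket name and object path are hypothetical placeholders, and the `spark` session is assumed to be provided by the connector environment rather than created manually:

```python
# `spark` is assumed to be provided by the Spark connector environment.
# "my-bucket" and the object path are hypothetical; substitute your own.
df = spark.read.csv("s3a://my-bucket/data/example.csv", header=True, inferSchema=True)
df.printSchema()
df.show(10)
```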
Using a Yandex Data Processing cluster in a project
Warning
If you use a Spark connector to work with Yandex Data Processing clusters, the notebook will use a special environment for working with clusters instead of the standard project environment.
Open the DataSphere project:
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Click Open project in JupyterLab and wait for the loading to complete.
- Open the notebook tab.
- Run any cell by selecting Run → Run Selected Cells or pressing Shift + Enter.
- In the Notebook VM configurations window that opens, go to the With Yandex Data Processing cluster tab.
- Select the required configuration and connector.
- Click Select.
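Cells you run after this are executed on the selected cluster. As a quick smoke test, the sketch below prints the cluster's Spark version and runs a trivial distributed computation; it assumes the connector environment exposes a ready `spark` session (you do not create one manually in this mode):

```python
# `spark` is assumed to be injected by the Spark connector session.
# Print the Spark version of the attached cluster.
print(spark.version)

# Run a trivial distributed computation on the cluster executors.
rdd = spark.sparkContext.parallelize(range(100_000))
print(rdd.map(lambda x: x * x).sum())
```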
To disable the Yandex Data Processing temporary cluster created with the Spark connector, stop the notebook VM.
To stop paying for the Yandex Data Processing persistent cluster, delete it using the management console.
Editing a Spark connector
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Under Project resources, click Spark connector.
- In the list of Spark connectors, select the one you want to edit. Click Edit.
- Edit the Spark connector by changing its name or parameters.
- Click Save.
Sharing a Spark connector
Note
You can only share resources within a single organization between communities created in the same availability zone.
To share a Spark connector in a community, you need the `Editor` role in the project and the `Developer` role in the community. To learn more about roles in DataSphere, see Access management in DataSphere.
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Under Project resources, click Spark connector.
- Select the required Spark connector from the list.
- Go to the Access tab.
- Enable the visibility option next to the name of the community you want to share the Spark connector with.
To make a Spark connector available for use in another project, the administrator of that project should add it on the Shared tab.
Deleting a Spark connector
You can only delete a connector that is not available in the community.
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Under Project resources, click Spark connector.
- In the list of Spark connectors, select the one you want to delete. Click Delete.
- Click Confirm.
You will see a message saying that the connector has been deleted.
Warning
The actual deletion of resources can take up to 72 hours.