Working with Spark connectors
In DataSphere, you can use Spark connectors to work with existing or automatically created Yandex Data Processing clusters.
Getting started
To work with Yandex Data Processing clusters:
- In the project settings, specify these parameters:
  - Default folder for integrating with other Yandex Cloud services. It will house a Yandex Data Processing cluster based on the current cloud quotas. A fee for using the cluster will be debited from your cloud billing account.
  - Service account with the `vpc.user` role. DataSphere will use this account to work with the Yandex Data Processing cluster network.
  - Subnet for DataSphere to communicate with the Yandex Data Processing cluster. Since the Yandex Data Processing cluster needs internet access, make sure to configure a NAT gateway in this subnet. Note that after you specify a subnet, allocating computing resources may take longer.
- Create a service agent:

  To allow a service agent to operate in DataSphere, ask your cloud admin or owner to run the following command in the Yandex Cloud CLI:

  ```bash
  yc iam service-control enable datasphere --cloud-id <cloud_ID>
  ```

  Where `--cloud-id` is the ID of the cloud you are going to use in the DataSphere community.
- Create a service account with the following roles:
  - `dataproc.agent` to use Yandex Data Processing clusters.
  - `dataproc.admin` to create clusters from Yandex Data Processing templates.
  - `vpc.user` to use the Yandex Data Processing cluster network.
  - `iam.serviceAccounts.user` to create resources in the folder on behalf of the service account.
- Under Spark clusters in the community settings, click Add service account and select the service account you created.
Warning
The Yandex Data Processing persistent cluster must have the `livy:livy.spark.deploy-mode : client` setting.
Creating a Spark connector
- Select the project in your community or on the DataSphere home page in the Recent projects tab.
- Under Project resources, click Spark connector.
- Click Create connector.
- In the Name field, enter a name for your connector. Follow these naming requirements:
- The name must be from 3 to 63 characters long.
- It may contain uppercase and lowercase Latin and Cyrillic letters, numbers, hyphens, underscores, and spaces.
- The first character must be a letter. The last character cannot be a hyphen, underscore, or space.
- Under Yandex Data Processing cluster, select the cluster you plan to work with:
  - Select cluster: Select an existing Yandex Data Processing cluster or click Create cluster in Yandex Data Processing to go to Yandex Data Processing and create a new one. The Yandex Data Processing persistent cluster must have the `livy:livy.spark.deploy-mode : client` setting.
  - Create temporary cluster: Select this option to create a temporary Yandex Data Processing cluster. A temporary cluster will be created the first time you run computations in your project notebook.
- Optionally, under S3 settings, specify the static access key ID and the secret storing the secret part of the static key for the S3 connector.

  S3 settings let you provide credentials for connecting to an S3 bucket. We recommend this when working with Object Storage buckets from any cluster, especially from Yandex Data Processing clusters without the HDFS option enabled (see the sketch after these steps).
- Under Spark settings, select Use default settings to use the default Yandex Data Processing cluster settings, or specify the Key and Value parameters to manually add or update the Yandex Data Processing cluster settings.

  Tip

  For Yandex Data Processing clusters without the HDFS option enabled, set `spark.hadoop.fs.s3a.fast.upload.buffer=bytebuffer`. For your own clusters with the Spark Connect connection type, set `dataproc:spark-connect=enabled`.
- Click Create. You will see a page with detailed info on the connector you created.
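If you filled in S3 settings, Spark on the cluster can address the bucket through the s3a scheme. Below is a minimal sketch of reading a CSV file from an Object Storage bucket in a project notebook; the bucket name and object key are hypothetical placeholders, and it assumes the connector's S3 settings supply the credentials and that a Spark session is available in the notebook:

```python
# A minimal sketch, assuming the connector's S3 settings supply the
# credentials and the notebook environment provides a Spark session.
# <bucket_name> and the object key are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # reuses an existing session if present

df = (
    spark.read
    .option("header", True)
    .csv("s3a://<bucket_name>/data/example.csv")
)
df.printSchema()
print(df.count())
```

On clusters without HDFS, reads and writes go through this same s3a scheme, which is what the `spark.hadoop.fs.s3a.fast.upload.buffer=bytebuffer` tip above addresses.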
Using a Yandex Data Processing cluster in a project
Warning
If you are using a Spark connector to work with Yandex Data Processing clusters, the notebook will run in a special project environment rather than the standard one.
Open the DataSphere project:
- Select the project in your community or on the DataSphere home page in the Recent projects tab.
- Click Open project in JupyterLab and wait for the loading to complete.
- Open the notebook tab.
- Run any cell by selecting Run → Run Selected Cells or pressing Shift + Enter.
- In the Notebook VM configurations window that opens, go to the With Yandex Data Processing cluster tab.
- Select the required configuration and connector.
- Click Select.
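After you click Select, notebook cells are executed on the Yandex Data Processing cluster. Here is a minimal sketch for checking that the session is up, assuming the connector environment provides (or allows creating) a `spark` session:

```python
# A minimal sketch for verifying the cluster session; assumes the Spark
# connector environment provides (or allows creating) a Spark session.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.version)  # Spark version running on the cluster

# A trivial distributed job to confirm the executors respond.
print(spark.range(1_000_000).selectExpr("sum(id) AS total").collect())
```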
To disable the Yandex Data Processing temporary cluster created with the Spark connector, stop the notebook VM.
To stop paying for the Yandex Data Processing persistent cluster, delete it using the management console.
Editing a Spark connector
- Select the project in your community or on the DataSphere home page in the Recent projects tab.
- Under Project resources, click Spark connector.
- In the list of Spark connectors, select the one you want to edit. Click Edit.
- Edit the Spark connector by changing its name or parameters.
- Click Save.
Sharing a Spark connector
Note
You can only share resources within a single organization, between communities created in the same availability zone.
To share a Spark connector within a community, you need the Editor role in the project and the Developer role in the community. To learn more about roles in DataSphere, see Access management in DataSphere.
- Select the project in your community or on the DataSphere home page in the Recent projects tab.
- Under Project resources, click Spark connector.
- Select the Spark connector from the list.
- Go to the Access tab.
- Enable the visibility option next to the name of the community you want to share the Spark connector with.
To make a Spark connector available for use in a different project, the project admin needs to add that connector on the Shared tab.
Deleting a Spark connector
You can only delete a connector that is not available to any community.
- Select the project in your community or on the DataSphere home page in the Recent projects tab.
- Under Project resources, click Spark connector.
- In the list of Spark connectors, select the one you want to delete. Click Delete.
- Click Confirm.
You will see a message saying that the connector has been deleted.
Warning
Although the message says the connector has been deleted, actual deletion of the resource can take up to 72 hours.