Yandex Managed Service for Apache Spark™ integration with Apache Hive™ Metastore

Written by
Yandex Cloud
Updated at December 3, 2025
In this article:

  • Required paid resources
  • Set up the infrastructure
  • Prepare a PySpark job
  • Check the result
  • Delete the resources you created

You can connect an Apache Hive™ Metastore cluster to an Apache Spark™ cluster. In this case, metadata produced by job runs is stored in the Apache Hive™ Metastore cluster, where another Apache Spark™ cluster can then use it.

Below, we walk you through an example in which a PySpark job creates a database and a table in it, and then writes the table data to a Yandex Object Storage bucket. The database metadata is stored in an Apache Hive™ Metastore cluster connected to the Apache Spark™ cluster.

To implement the above example, do the following:

  1. Set up the infrastructure.
  2. Prepare and run a PySpark job.
  3. Check the result.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Object Storage bucket fee for storage and data operations (see Object Storage pricing).
  • Yandex Cloud Logging fee for the amount of data written and the time of its retention (see Cloud Logging pricing).
  • Fee for the computing resources of Managed Service for Apache Spark™ cluster components (see Managed Service for Apache Spark™ pricing).
  • Fee for the computing resources of Apache Hive™ Metastore cluster components (see Yandex MetaData Hub pricing).

Set up the infrastructure

Management console
  1. Create a service account named spark-agent for the Apache Spark™ cluster and assign it the managed-spark.integrationProvider role to enable the Apache Spark™ cluster to interact with other resources.

  2. Create a service account named metastore-agent and assign it the managed-metastore.integrationProvider and storage.uploader roles to enable your Apache Hive™ Metastore cluster to interact with other resources and export metadata to the Object Storage bucket.

  3. Create buckets:

    • One for the PySpark job source code.
    • One for output data.
  4. Grant permissions to the spark-agent service account for the created buckets:

    • Bucket for the PySpark job source code: READ.
    • Bucket for output data: READ and WRITE.
  5. Grant the READ and WRITE permissions for the output bucket to the metastore-agent service account.

  6. Create a cloud network named integration-network.

    This will automatically create three subnets in different availability zones.

  7. For the Apache Spark™ cluster, create a security group named spark-sg in integration-network. Add the following rule to it (you can check the resulting connectivity with the sketch after this list):

    • For outgoing traffic, to allow Apache Spark™ cluster connections to Apache Hive™ Metastore:

      • Port range: 9083
      • Protocol: Any
      • Destination: CIDR
      • CIDR blocks: 0.0.0.0/0
  8. For the Apache Hive™ Metastore cluster, create a security group named metastore-sg in integration-network. Add the following rules to it:

    • For incoming client traffic:

      • Port range: 30000-32767
      • Protocol: Any
      • Source: CIDR
      • CIDR blocks: 0.0.0.0/0
    • For incoming load balancer traffic:

      • Port range: 10256
      • Protocol: Any
      • Source: Load balancer health checks
  9. Create an Apache Hive™ Metastore cluster with the following parameters:

    • Service account: metastore-agent.
    • Version: 3.1.
    • Network: integration-network.
    • Subnet: integration-network-ru-central1-a.
    • Security groups: metastore-sg.
  10. Create a Managed Service for Apache Spark™ cluster with the following parameters:

    • Service account: spark-agent.
    • Network: integration-network.
    • Subnet: integration-network-ru-central1-a.
    • Security groups: spark-sg.
    • Metastore: Apache Hive™ Metastore cluster you created earlier.
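
Optionally, you can check that the spark-sg egress rule lets hosts in integration-network reach the Metastore endpoint. Below is a minimal probe sketch; <metastore_cluster_host> is a placeholder for your Apache Hive™ Metastore cluster's address, and port 9083 matches the rule above:

    import socket

    # Placeholder: substitute your Apache Hive™ Metastore cluster host
    host = "<metastore_cluster_host>"
    port = 9083  # Thrift port opened by the spark-sg egress rule

    # create_connection raises OSError if the port is unreachable
    with socket.create_connection((host, port), timeout=5):
        print(f"{host}:{port} is reachable")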

Prepare a PySpark job

For the PySpark job, we will use a Python script that creates a database named database_1 and a table named table_1. The script will be stored in the bucket for the PySpark job source code.

Prepare a script file:

Management console
  1. Create a local file named job-create-table.py and paste the following script into it:

    job-create-table.py
    import random
    import sys
    from pyspark.sql import SparkSession
    
    
    def prepare_table(spark, database, table):
        # Create the database and table if they do not exist yet,
        # then clear out any rows left over from previous runs
        create_database_sql = "create database if not exists {database}"
        create_table_sql = """
        create table if not exists {database}.{table} (
            id int,
            value double
        )
        """
        truncate_table_sql = "truncate table {database}.{table}"
    
        spark.sql(create_database_sql.format(database=database))
        spark.sql(create_table_sql.format(database=database, table=table))
        spark.sql(truncate_table_sql.format(database=database, table=table))
    
    
    def write_data(spark, database, table):
        # Generating 100,000 rows of test data
        data = [(i, random.random()) for i in range(100_000)]
        # Creating a dataframe
        df = spark.createDataFrame(data, schema=['id', 'value'])
        table_full_name = "{database}.{table}".format(database=database, table=table)
        df.write.mode('overwrite').format('json').saveAsTable(table_full_name)
    
    
    def main():
        # Creating a Spark session with Hive support, so table metadata is stored
        # in the connected Apache Hive™ Metastore cluster; the warehouse directory
        # (the job's first argument) is the Object Storage path for the table data
        spark = (
            SparkSession
            .builder
            .appName('job-create-table')
            .enableHiveSupport()
            .config('spark.sql.warehouse.dir', sys.argv[1])
            .getOrCreate()
        )
        database, table = 'database_1', 'table_1'
        prepare_table(spark, database, table)
        write_data(spark, database, table)
    
    
    if __name__ == '__main__':
        if len(sys.argv) != 2:
            print("Usage: job-create-table s3a://<bucket>/<folder>", file=sys.stderr)
            sys.exit(-1)
        main()
    
  2. In the source code bucket, create a folder named scripts and upload the job-create-table.py file to it (for a way to do this from Python, see the sketch after this list).

  3. In the output bucket, create a folder named warehouse. Data from database_1 will be loaded into it.

  4. Create a job with the following settings:

    • Job type: PySpark
    • Main python file: s3a://<source_code_bucket>/scripts/job-create-table.py
    • Arguments: s3a://<bucket_for_output_data>/warehouse
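
Since Object Storage provides an S3-compatible API, you can also upload the script from Python instead of the management console. Below is a minimal sketch using boto3; the bucket name is a placeholder, and it assumes a static access key with WRITE access to the bucket is exposed through the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables:

    import boto3

    # Object Storage S3-compatible endpoint
    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.yandexcloud.net",
    )

    # Upload the local script under the key the job settings refer to;
    # "<source_code_bucket>" is a placeholder for your bucket name
    s3.upload_file(
        "job-create-table.py",
        "<source_code_bucket>",
        "scripts/job-create-table.py",
    )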

Check the result

Management console
  1. Navigate to the folder dashboard and select Managed Service for Apache Spark™.

  2. Click the name of your cluster and open the Jobs tab.

  3. Wait for the PySpark job you created to change its status to Done.

  4. Make sure the file with data from database_1 appears in the warehouse folder in your output data bucket.

  5. Make sure the Apache Hive™ Metastore cluster has the metadata on database_1 (for an additional check from a second cluster, see the sketch after these steps):

    1. Export the metadata from the Apache Hive™ Metastore cluster to the output bucket.
    2. Download the metadata file and make sure it mentions database_1.
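
As an additional check, you can confirm that another Apache Spark™ cluster can reuse the saved metadata. Below is a sketch of a hypothetical job-read-table.py job to run on a second Managed Service for Apache Spark™ cluster attached to the same Apache Hive™ Metastore cluster; the database and table names match this tutorial:

    from pyspark.sql import SparkSession

    # Session with Hive support: table lookups go through the shared Metastore
    spark = (
        SparkSession
        .builder
        .appName('job-read-table')
        .enableHiveSupport()
        .getOrCreate()
    )

    spark.sql('show databases').show()      # database_1 should appear in the list
    df = spark.table('database_1.table_1')  # schema from Metastore, data from the bucket
    print(df.count())                       # 100,000 rows written by job-create-table.py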

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

Management console
  1. Apache Hive™ Metastore cluster.
  2. Apache Spark™ cluster.
  3. Object Storage buckets. Before deleting a bucket, make sure you have deleted all objects from it.
