Working with Yandex Object Storage from a PySpark job using an Apache Hive™ Metastore cluster

Written by
Yandex Cloud
Updated on March 5, 2026
  • Required paid resources
  • Set up your infrastructure
  • Prepare a PySpark job
  • Check the result
  • Delete the resources you created

In a PySpark job, you can use the global Hive catalog. To do this, connect an Apache Hive™ Metastore cluster to your Yandex Managed Service for Apache Spark™ cluster.

Apache Hive™ Metastore provides:

  • Centralized storage of metadata on databases, tables, and partitions.
  • Simplified access to data without manually specifying paths and schemas (see the snippet after this list).
  • Storage of table and column statistics for query optimization.
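
For example, once a table is registered in the catalog, a Spark job can refer to it by name alone. Here is a minimal, illustrative sketch: it assumes a session on a cluster with a connected Metastore and an already existing table, such as the database_1.table_1 table created later in this tutorial.

    from pyspark.sql import SparkSession

    # enableHiveSupport() attaches the session to the global Hive catalog.
    spark = SparkSession.builder.appName('catalog-example').enableHiveSupport().getOrCreate()

    # The table is resolved through the Metastore: no s3a:// path or schema is needed.
    df = spark.table('database_1.table_1')
    df.show(5)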

This tutorial shows an example of working with a table in a Yandex Object Storage bucket from a PySpark job using the global Hive catalog. The database metadata is stored in the Apache Hive™ Metastore cluster; at the verification stage, you export it to the output bucket.

To implement the above example:

  1. Set up your infrastructure.
  2. Prepare and run a PySpark job.
  3. Check the result.

If you no longer need the resources you created, delete them.

Note

Yandex Managed Service for Apache Spark™ cluster integration with Apache Hive™ Metastore allows using the Apache Iceberg™ table format in Spark jobs. For more information, see Working with an Apache Iceberg™ table from a PySpark job.

Required paid resources

The support cost for this solution includes:

  • Object Storage bucket fee for storage and data operations (see Object Storage pricing).
  • Yandex Cloud Logging fee for the amount of data written and the time of its retention (see Cloud Logging pricing).
  • Fee for the computing resources of Yandex Managed Service for Apache Spark™ cluster components (see Yandex Managed Service for Apache Spark™ pricing).
  • Fee for the computing resources of Apache Hive™ Metastore cluster components (see Yandex MetaData Hub pricing).

Set up your infrastructure

Management console
  1. Create a service account named spark-agent for the Yandex Managed Service for Apache Spark™ cluster and assign it the managed-spark.integrationProvider role to enable the Yandex Managed Service for Apache Spark™ cluster to interact with other resources.

  2. Create a service account named metastore-agent and assign it the managed-metastore.integrationProvider and storage.uploader roles to enable your Apache Hive™ Metastore cluster to interact with other resources and export metadata to the Object Storage bucket.

  3. Create buckets (for a scripted alternative, see the boto3 sketch after this list):

    • One for the PySpark job source code.
    • One for output data.
  4. Grant permissions to the spark-agent service account for the created buckets:

    • Bucket for the PySpark job source code: READ.
    • Bucket for output data: READ and WRITE.
  5. Grant the READ and WRITE permissions for the output bucket to the metastore-agent service account.

  6. Create a cloud network named integration-network.

    This will automatically create three subnets in different availability zones.

  7. For the Yandex Managed Service for Apache Spark™ cluster, create a security group named spark-sg in integration-network. Add the following rule to it:

    • For outgoing traffic, allowing the Yandex Managed Service for Apache Spark™ cluster to connect to Apache Hive™ Metastore:

      • Port range: 9083
      • Protocol: Any
      • Destination: CIDR
      • CIDR blocks: 0.0.0.0/0
  8. For the Apache Hive™ Metastore cluster, create a security group named metastore-sg in integration-network. Add the following rules to it:

    • For incoming client traffic:

      • Port range: 30000-32767
      • Protocol: Any
      • Source: CIDR
      • CIDR blocks: 0.0.0.0/0
    • For incoming load balancer traffic:

      • Port range: 10256
      • Protocol: Any
      • Source: Load balancer health checks
  9. Create an Apache Hive™ Metastore cluster with the following parameters:

    • Service account: metastore-agent.
    • Version: 3.1.
    • Network: integration-network.
    • Subnet: integration-network-ru-central1-a.
    • Security groups: metastore-sg.
  10. Create a Yandex Managed Service for Apache Spark™ cluster with the following parameters:

    • Service account: spark-agent.
    • Network: integration-network.
    • Subnet: integration-network-ru-central1-a.
    • Security groups: spark-sg.
    • Metastore: Apache Hive™ Metastore cluster you created earlier.
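
The console steps above can also be scripted. As an illustration of step 3, here is a hedged boto3 sketch that creates the two buckets through the S3-compatible Object Storage API. It assumes a static access key issued for a service account with sufficient storage rights; the bucket names are placeholders, so substitute your own globally unique names.

    import boto3

    # Yandex Object Storage exposes an S3-compatible API at this endpoint.
    s3 = boto3.client(
        's3',
        endpoint_url='https://storage.yandexcloud.net',
        aws_access_key_id='<static_key_id>',          # assumption: a static access key
        aws_secret_access_key='<static_key_secret>',  # issued for your service account
    )

    # Placeholder names; bucket names must be globally unique.
    for bucket in ('<source_code_bucket>', '<bucket_for_output_data>'):
        s3.create_bucket(Bucket=bucket)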

Prepare a PySpark job

For the PySpark job, we will use a Python script that creates a database named database_1 and a table named table_1. To give the Yandex Managed Service for Apache Spark™ cluster access to the global Apache Hive™ Metastore catalog, the script calls the enableHiveSupport() method. The script will be stored in the source code bucket in Object Storage.

Prepare a script file:

Management console
  1. Create a local file named job-create-table.py and paste the following script into it:

    job-create-table.py
    import random
    import sys
    from pyspark.sql import SparkSession
    
    
    # Create the database and table if they do not exist, then clear any previous rows.
    def prepare_table(spark, database, table):
        create_database_sql = "create database if not exists {database}"
        create_table_sql = """
        create table if not exists {database}.{table} (
            id int,
            value double
        )
        """
        truncate_table_sql = "truncate table {database}.{table}"
    
        spark.sql(create_database_sql.format(database=database))
        spark.sql(create_table_sql.format(database=database, table=table))
        spark.sql(truncate_table_sql.format(database=database, table=table))
    
    
    # Generate 100,000 (id, random value) rows and save them as a JSON table.
    def write_data(spark, database, table):
        data = [(i, random.random()) for i in range(100_000)]
        # Creating a dataframe
        df = spark.createDataFrame(data, schema=['id', 'value'])
        table_full_name = "{database}.{table}".format(database=database, table=table)
        df.write.mode('overwrite').format('json').saveAsTable(table_full_name)
    
    
    def main():
        # Creating a Spark session
        spark = (
            SparkSession
            .builder
            .appName('job-create-table')
            .enableHiveSupport()
            # The warehouse directory is taken from the job argument (s3a://.../warehouse).
            .config('spark.sql.warehouse.dir', sys.argv[1])
            .getOrCreate()
        )
        database, table = 'database_1', 'table_1'
        prepare_table(spark, database, table)
        write_data(spark, database, table)
    
    
    if __name__ == '__main__':
        if len(sys.argv) != 2:
            print("Usage: job-create-table s3a://<bucket>/<folder>", file=sys.stderr)
            sys.exit(-1)
        main()
    
  2. In the source code bucket, create a folder named scripts and upload the job-create-table.py file to this folder (for a scripted alternative, see the boto3 sketch after this list).

  3. In the output bucket, create a folder named warehouse; the job will write the data from database_1 there.

  4. Create a job with the following settings:

    • Job type: PySpark
    • Main python file: s3a://<source_code_bucket>/scripts/job-create-table.py
    • Arguments: s3a://<bucket_for_output_data>/warehouse
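
Steps 2 and 3 can likewise be scripted. Here is a minimal boto3 sketch, using the same placeholder credentials and bucket names as in the infrastructure section, that uploads the script and creates the warehouse folder:

    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='https://storage.yandexcloud.net',
        aws_access_key_id='<static_key_id>',
        aws_secret_access_key='<static_key_secret>',
    )

    # Upload the job source code into the scripts/ folder of the source code bucket.
    s3.upload_file('job-create-table.py', '<source_code_bucket>', 'scripts/job-create-table.py')

    # Object Storage folders are just key prefixes; an empty object marks one.
    s3.put_object(Bucket='<bucket_for_output_data>', Key='warehouse/')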

Check the result

Management console
  1. Navigate to the folder dashboard and select Managed Service for Apache Spark™.

  2. Click the name of your cluster and select the Jobs tab.

  3. Wait for the PySpark job you created to change its status to Done.

  4. Make sure the file with data from database_1 appears in the warehouse folder of your output bucket (for a scripted check, see the boto3 sketch after this list).

  5. Make sure the Apache Hive™ Metastore cluster has the metadata on database_1:

    1. Export the metadata from the Apache Hive™ Metastore cluster to the output bucket.
    2. Download the metadata file and make sure it mentions database_1.
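
For a scripted version of step 4, the following boto3 sketch, with the same placeholder credentials as above, lists the objects the job wrote under the warehouse/ prefix:

    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='https://storage.yandexcloud.net',
        aws_access_key_id='<static_key_id>',
        aws_secret_access_key='<static_key_secret>',
    )

    # List everything the job wrote under warehouse/.
    response = s3.list_objects_v2(Bucket='<bucket_for_output_data>', Prefix='warehouse/')
    for obj in response.get('Contents', []):
        print(obj['Key'], obj['Size'])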

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them:

Management console
  1. Apache Hive™ Metastore cluster.
  2. Yandex Managed Service for Apache Spark™ cluster.
  3. Object Storage buckets. Before deleting a bucket, delete all objects from it (for a scripted way to empty a bucket, see the sketch below).
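
Emptying a large bucket by hand is tedious; here is a hedged boto3 sketch, with the same placeholder credentials as above, that deletes every object from a bucket so the bucket itself can then be deleted:

    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='https://storage.yandexcloud.net',
        aws_access_key_id='<static_key_id>',
        aws_secret_access_key='<static_key_secret>',
    )

    bucket = '<bucket_to_delete>'  # placeholder

    # Page through all objects and delete them in batches of up to 1,000 keys.
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket):
        objects = [{'Key': obj['Key']} for obj in page.get('Contents', [])]
        if objects:
            s3.delete_objects(Bucket=bucket, Delete={'Objects': objects})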
