

Running Apache Hive jobs

Written by
Yandex Cloud
Updated at April 23, 2025
  • Working with jobs in the Yandex Cloud CLI
    • Getting started
    • Providing an SQL query in the job run command
    • Providing an SQL query in an Object Storage object
  • Working with jobs in the Hive CLI
    • Getting started
    • Running jobs using the Hive CLI

Hive is a tool for accessing data stored in the Hadoop ecosystem. It lets you work with data in various formats and DBMSs using an SQL-like query language. Hive is mainly used with data in HDFS, HBase, S3-compatible storage, and relational databases.

You can run Hive jobs from the Yandex Cloud CLI and directly on the server using the Hive CLI.

Note

You can view the job logs and search data in them using Yandex Cloud Logging. For more information, see Working with logs.

Working with jobs in the Yandex Cloud CLI

If you do not have the Yandex Cloud CLI yet, install and initialize it.

Jobs are run using the Yandex Cloud CLI through the Yandex Data Processing agent installed on the cluster master host. The agent gets job parameters through the Yandex Data Processing API.
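You can also check on submitted jobs from the same CLI by listing them and fetching their logs. A minimal sketch, assuming the yc dataproc job command group (subcommand and flag names follow the CLI reference; verify them with yc dataproc job --help, as they may differ between CLI versions; values in angle brackets are placeholders):

    # List the jobs run on the cluster.
    yc dataproc job list --cluster-id=<cluster_ID>

    # Fetch the log of a specific job.
    yc dataproc job log --cluster-id=<cluster_ID> <job_ID>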

The executable file and its dependencies must reside in storage that the Yandex Data Processing cluster service account can access. The application itself must have access to the storage where the source dataset and the execution results are saved.

There are two ways to send an SQL query to Hive:

  • In the job run command.
  • In an Object Storage object that the Yandex Data Processing cluster service account has read access to.

The query execution result, together with the service output, is saved to a Yandex Object Storage bucket linked to the cluster.

Getting started

  1. Create a service account with the dataproc.agent and dataproc.provisioner roles.

  2. In Object Storage, create buckets and configure access to them:

    1. Create a bucket for the input data and grant the READ permission for this bucket to the cluster service account.
    2. Create a bucket for the processing output and grant the cluster service account READ and WRITE permissions for this bucket.
  3. Create a Yandex Data Processing cluster with the following settings:

    • Services:
      • HDFS
      • HIVE
      • MAPREDUCE
      • SPARK
      • YARN
    • Service account: Select the service account you previously created.
    • Bucket name: Select a bucket for the processing results.
    • Public access: Enable this option to access hosts of all subclusters.
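The same cluster can be created from the CLI instead of the management console. A hedged sketch using yc dataproc cluster create (flag names and the subcluster syntax follow the CLI reference and may differ between CLI versions; all values in angle brackets are placeholders):

    yc dataproc cluster create <cluster_name> \
      --zone=<availability_zone> \
      --service-account-name=<service_account_name> \
      --services=hdfs,hive,mapreduce,spark,yarn \
      --bucket=<output_bucket_name> \
      --ssh-public-keys-file=<path_to_public_SSH_key> \
      --subcluster name=master,role=masternode,resource-preset=<preset>,subnet-name=<subnet_name>,hosts-count=1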

Providing an SQL query in the job run command

  1. Create an external table for the sample data stored in Parquet format. The table will contain a list of flights between US cities in 2018. Run the following query using the Yandex Cloud CLI:

    yc dataproc job create-hive \
       --cluster-id=<cluster_ID> \
       --name=task-one \
       --query-list="CREATE EXTERNAL TABLE flights (
         Year bigint,
         Month bigint,
         FlightDate string,
         Flight_Number_Reporting_Airline bigint,
         OriginAirportID bigint,
         DestAirportID bigint)
         STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';"
    
    Example of response to query
    done (14s)
    id: c9qloj5crovu********
    cluster_id: c9qkjos5sa3d********
    created_at: "2024-12-04T03:51:03.285819Z"
    started_at: "2024-12-04T03:51:15.884426Z"
    finished_at: "2024-12-04T03:51:15.884426Z"
    name: task-one
    created_by: ajefhe0o8uas********
    status: DONE
    hive_job:
      query_list:
        queries:
          - CREATE EXTERNAL TABLE flights (Year bigint, Month bigint, FlightDate string, Flight_Number_Reporting_Airline bigint, OriginAirportID bigint, DestAirportID bigint) STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
    application_info: {}
    
  2. Make sure the table was successfully created. To do this, request the number of flights by month:

    yc dataproc job create-hive \
       --cluster-id=<cluster_ID> \
       --name=task-two \
       --query-list="SELECT Month, COUNT(*) FROM flights GROUP BY Month;"
    
    Example of response to query
    done (34s)
    id: c9quejacclo3********
    cluster_id: c9qkjos5sa3d********
    created_at: "2024-12-04T05:15:38.436203Z"
    started_at: "2024-12-04T05:16:11.608422Z"
    finished_at: "2024-12-04T05:16:11.608422Z"
    name: task-two
    created_by: ajefhe0o8uas********
    status: DONE
    hive_job:
      query_list:
        queries:
          - SELECT Month, COUNT(*) FROM flights GROUP BY Month;
    application_info: {}
    
  3. In the processing results bucket, go to the following folder:

    dataproc/clusters/<cluster_ID>/jobs/<Hive_job_ID>
    

    The job ID appears in the output of the YC CLI job run command and in the API response to the job run request.

  4. Download the driveroutput.000000000 file containing the answer to your query.

    Example of response to query
    Init job c9quejacclo3******** at Wed Dec  4 05:15:40 UTC 2024
    ...
    Connecting to jdbc:hive2://localhost:10000
    Connected to: Apache Hive (version 3.1.2)
    Driver: Hive JDBC (version 3.1.2)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://localhost:10000> SELECT Month, COUNT(*) FROM flights GROUP BY Month;;
    +--------+---------+
    | month  |   _c1   |
    +--------+---------+
    | 1      | 570118  |
    | 2      | 520731  |
    | 3      | 611987  |
    | 4      | 596046  |
    | 5      | 616529  |
    | 6      | 626193  |
    | 7      | 645299  |
    | 8      | 644673  |
    | 9      | 585749  |
    | 10     | 616101  |
    | 11     | 586178  |
    | 12     | 593842  |
    +--------+---------+
    12 rows selected (27.532 seconds)
    0: jdbc:hive2://localhost:10000>
    0: jdbc:hive2://localhost:10000> Closing: 0: jdbc:hive2://localhost:10000
    

    If there are errors when executing the query, information about them will also be saved to the driveroutput.000000000 file.
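In the driver output above, the unaliased COUNT(*) column is displayed as _c1, Hive's default name for an unnamed expression. Giving the aggregate an explicit alias makes the output easier to read; a variant of the step 2 command (the alias flights_total is an example name, not part of the original tutorial):

    yc dataproc job create-hive \
       --cluster-id=<cluster_ID> \
       --name=task-two-aliased \
       --query-list="SELECT Month, COUNT(*) AS flights_total FROM flights GROUP BY Month;"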

Providing an SQL query in an Object Storage object

  1. Create a file named create-table.sql with an SQL query that creates an external table for the sample data stored in Parquet format. The table will contain a list of flights between US cities in 2018. The SQL query will look as follows:

    CREATE EXTERNAL TABLE flights (
        Year bigint,
        Month bigint,
        FlightDate string,
        Flight_Number_Reporting_Airline bigint,
        OriginAirportID bigint,
        DestAirportID bigint)
        STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
    
  2. Upload the create-table.sql file to the source data bucket.

  3. Run this command:

    yc dataproc job create-hive \
      --cluster-id=<cluster_ID> \
      --name=task-one \
      --query-file-uri="s3a://<input_data_bucket_name>/create-table.sql"
    
    Example of response to query
    done (8s)
    id: c9qhpt6334qs********
    cluster_id: c9qkjos5sa3d********
    created_at: "2024-12-04T04:21:20.062704Z"
    started_at: "2024-12-04T04:21:27.702644Z"
    finished_at: "2024-12-04T04:21:27.702644Z"
    name: task-one
    created_by: ajefhe0o8uas********
    status: DONE
    hive_job:
      query_file_uri: s3a://<input_data_bucket_name>/create-table.sql
    application_info: {}
    
  4. Create a file named get-data.sql with an SQL query that counts flights by month:

    SELECT Month, COUNT(*) FROM flights GROUP BY Month;
    
  5. Upload the get-data.sql file to the source data bucket.

  6. Run this command:

    yc dataproc job create-hive \
      --cluster-id=<cluster_ID> \
      --name=task-two \
      --query-file-uri="s3a://<input_data_bucket_name>/get-data.sql"
    
    Example of response to query
    done (37s)
    id: c9q2srq817cu********
    cluster_id: c9qkjos5sa3d********
    created_at: "2024-12-04T04:24:58.480468Z"
    started_at: "2024-12-04T04:25:34.613549Z"
    finished_at: "2024-12-04T04:25:34.613549Z"
    name: task-two
    created_by: ajefhe0o8uas********
    status: DONE
    hive_job:
      query_file_uri: s3a://<input_data_bucket_name>/get-data.sql
    application_info: {}
    
  7. In the processing results bucket, go to the following folder:

    dataproc/clusters/<cluster_ID>/jobs/<Hive_job_ID>
    

    The job ID appears in the output of the YC CLI job run command and in the API response to the job run request.

  8. Download the driveroutput.000000000 file containing the answer to your query.

    Example of response to query
    Init job c9q2gha5hocg******** at Wed Dec  4 06:56:45 UTC 2024
    ...
    Connecting to jdbc:hive2://localhost:10000
    Connected to: Apache Hive (version 3.1.2)
    Driver: Hive JDBC (version 3.1.2)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://localhost:10000> SELECT Month, COUNT(*) FROM flights GROUP BY Month;
    +--------+---------+
    | month  |   _c1   |
    +--------+---------+
    | 1      | 570118  |
    | 2      | 520731  |
    | 3      | 611987  |
    | 4      | 596046  |
    | 5      | 616529  |
    | 6      | 626193  |
    | 7      | 645299  |
    | 8      | 644673  |
    | 9      | 585749  |
    | 10     | 616101  |
    | 11     | 586178  |
    | 12     | 593842  |
    +--------+---------+
    12 rows selected (28.801 seconds)
    0: jdbc:hive2://localhost:10000> Closing: 0: jdbc:hive2://localhost:10000
    

    If there are errors when executing the query, information about them will also be saved to the driveroutput.000000000 file.
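The driveroutput.000000000 file can also be fetched without the management console: Object Storage is S3-compatible, so any S3 client pointed at the storage.yandexcloud.net endpoint works. A sketch assuming the AWS CLI is installed and configured with static access keys that can read the results bucket (values in angle brackets are placeholders):

    aws s3 cp \
      --endpoint-url=https://storage.yandexcloud.net \
      s3://<output_bucket_name>/dataproc/clusters/<cluster_ID>/jobs/<Hive_job_ID>/driveroutput.000000000 .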

Working with jobs in the Hive CLI

Getting started

  1. Create a service account with the dataproc.agent and dataproc.provisioner roles.

  2. Create a Yandex Data Processing cluster with the following settings:

    • Services:
      • HDFS
      • HIVE
      • SPARK
      • YARN
    • Service account: Select the service account you previously created.
    • Public access: Enable this option to access hosts of all subclusters.

Running jobs using the Hive CLI

  1. Connect to the master host over SSH and run the hive command.

  2. Test Hive by running the select 1; query. The expected result looks like this:

    OK
    1
    Time taken: 0.077 seconds, Fetched: 1 row(s)
    
  3. Create an external table for the sample data stored in Parquet format. The table will contain a list of flights between US cities in 2018. Run the following query in the Hive CLI:

    CREATE EXTERNAL TABLE flights (
        Year bigint,
        Month bigint,
        FlightDate string,
        Flight_Number_Reporting_Airline bigint,
        OriginAirportID bigint,
        DestAirportID bigint)
        STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
    
  4. Check the list of tables:

    show tables;
    
  5. The list of tables should look like this:

    OK
    flights
    Time taken: 0.043 seconds, Fetched: 1 row(s)
    
  6. Request the number of flights by month:

    SELECT Month, COUNT(*) FROM flights GROUP BY Month;
    
    Example of response to query
    Query ID = root_20200119195338_28049b67-4de9-4568-a4c4-3bbe********
    Total jobs = 1
    Launching Job 1 out of 1
    Status: Running (Executing on YARN cluster with App id application_157925157****_****)
    
    ----------------------------------------------------------------------------------------------
            VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
    ----------------------------------------------------------------------------------------------
    Map 1 .......... container     SUCCEEDED      6          6        0        0       0       0
    Reducer 2 ...... container     SUCCEEDED      1          1        0        0       0       0
    ----------------------------------------------------------------------------------------------
    VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 11.52 s
    ----------------------------------------------------------------------------------------------
    OK
    1       570118
    2       520731
    3       611987
    4       596046
    5       616529
    6       626193
    7       645299
    8       644673
    9       585749
    10      616101
    11      586178
    12      593842
    Time taken: 12.137 seconds, Fetched: 12 row(s)
    

© 2025 Direct Cursus Technology L.L.C.