Running Apache Hive jobs
You can run Hive jobs from the Yandex Cloud CLI or directly on the cluster's master host using the Hive CLI.
Note
You can view the job logs and search data in them using Yandex Cloud Logging. For more information, see Working with logs.
Working with jobs in the Yandex Cloud CLI
If you do not have the Yandex Cloud CLI yet, install and initialize it.
Jobs are run using the Yandex Cloud CLI through the Yandex Data Processing agent installed on the cluster master host. The agent gets job parameters through the Yandex Data Processing API.
The executable file and its dependencies must be located in a storage that the Yandex Data Processing cluster service account can access. The application being run must itself have access to the storages holding the source data set and the execution results.
There are two ways to send an SQL query to Hive:

- In the job run command.
- In an Object Storage object that the Yandex Data Processing cluster service account has read access to.

The query execution result is saved to a Yandex Object Storage bucket linked to the cluster, together with the service output.
Getting started
- Create a service account with the dataproc.agent and dataproc.provisioner roles.
- In Object Storage, create buckets and configure access to them:
  - Create a bucket for the input data and grant the cluster service account the READ permission for this bucket.
  - Create a bucket for the processing output and grant the cluster service account the READ and WRITE permissions for this bucket.
- Create a Yandex Data Processing cluster with the following settings:
  - Services: HDFS, HIVE, MAPREDUCE, SPARK, YARN.
  - Service account: Select the service account you previously created.
  - Bucket name: Select the bucket for the processing results.
  - Public access: Enable this option to access hosts of all subclusters.
Providing an SQL query in the job run command
- Create an external table in Parquet format for the data from the example. The table will contain a list of flights between US cities in 2018. Run the following query using the Yandex Cloud CLI:

      yc dataproc job create-hive \
        --cluster-id=<cluster_ID> \
        --name=task-one \
        --query-list="CREATE EXTERNAL TABLE flights (
          Year bigint,
          Month bigint,
          FlightDate string,
          Flight_Number_Reporting_Airline bigint,
          OriginAirportID bigint,
          DestAirportID bigint)
          STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';"

  Example of response to query:

      done (14s)
      id: c9qloj5crovu********
      cluster_id: c9qkjos5sa3d********
      created_at: "2024-12-04T03:51:03.285819Z"
      started_at: "2024-12-04T03:51:15.884426Z"
      finished_at: "2024-12-04T03:51:15.884426Z"
      name: task-one
      created_by: ajefhe0o8uas********
      status: DONE
      hive_job:
        query_list:
          queries:
            - CREATE EXTERNAL TABLE flights (Year bigint, Month bigint, FlightDate string, Flight_Number_Reporting_Airline bigint, OriginAirportID bigint, DestAirportID bigint) STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
      application_info: {}
- Make sure the table was successfully created. To do this, request the number of flights by month:

      yc dataproc job create-hive \
        --cluster-id=<cluster_ID> \
        --name=task-two \
        --query-list="SELECT Month, COUNT(*) FROM flights GROUP BY Month;"

  Example of response to query:

      done (34s)
      id: c9quejacclo3********
      cluster_id: c9qkjos5sa3d********
      created_at: "2024-12-04T05:15:38.436203Z"
      started_at: "2024-12-04T05:16:11.608422Z"
      finished_at: "2024-12-04T05:16:11.608422Z"
      name: task-two
      created_by: ajefhe0o8uas********
      status: DONE
      hive_job:
        query_list:
          queries:
            - SELECT Month, COUNT(*) FROM flights GROUP BY Month;
      application_info: {}
- In the processing results bucket, go to the following folder:

      dataproc/clusters/<cluster_ID>/jobs/<Hive_job_ID>

  The job ID is specified in the YC CLI job execution command output and in the response to the API job execution request.
- Download the driveroutput.000000000 file containing the answer to your query.

  Example of response to query:

      Init job c9quejacclo3******** at Wed Dec 4 05:15:40 UTC 2024
      ...
      Connecting to jdbc:hive2://localhost:10000
      Connected to: Apache Hive (version 3.1.2)
      Driver: Hive JDBC (version 3.1.2)
      Transaction isolation: TRANSACTION_REPEATABLE_READ
      0: jdbc:hive2://localhost:10000> SELECT Month, COUNT(*) FROM flights GROUP BY Month;;
      +--------+---------+
      | month  |   _c1   |
      +--------+---------+
      | 1      | 570118  |
      | 2      | 520731  |
      | 3      | 611987  |
      | 4      | 596046  |
      | 5      | 616529  |
      | 6      | 626193  |
      | 7      | 645299  |
      | 8      | 644673  |
      | 9      | 585749  |
      | 10     | 616101  |
      | 11     | 586178  |
      | 12     | 593842  |
      +--------+---------+
      12 rows selected (27.532 seconds)
      0: jdbc:hive2://localhost:10000>
      0: jdbc:hive2://localhost:10000> Closing: 0: jdbc:hive2://localhost:10000

  If there are errors when executing the query, information about them is also saved to the driveroutput.000000000 file.
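If you automate these runs, you can build the driver output's object key and check the downloaded file for errors programmatically. A minimal sketch, assuming only the folder layout and file name described above; the "Error" and "FAILED" markers are common patterns in Hive driver logs, not a documented contract:

```python
def driver_output_key(cluster_id: str, job_id: str) -> str:
    """Object key of the driver output inside the results bucket (layout described above)."""
    return f"dataproc/clusters/{cluster_id}/jobs/{job_id}/driveroutput.000000000"


def find_errors(driver_output: str) -> list[str]:
    """Return lines that look like errors in a downloaded driver output file."""
    return [
        line
        for line in driver_output.splitlines()
        if "Error" in line or "FAILED" in line
    ]


# Build the key for the sample job above; download the object with any
# S3-compatible client, then scan the text with find_errors().
key = driver_output_key("c9qkjos5sa3d********", "c9quejacclo3********")
print(key)
```

The key built here matches the results folder shown above; how you fetch the object (console, S3-compatible client, SDK) is up to you.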
Providing an SQL query in an Object Storage object
- Create a file named create-table.sql and put into it an SQL query that creates an external table in Parquet format for the example data. The table will contain a list of flights between US cities in 2018. Your SQL query will look as follows:

      CREATE EXTERNAL TABLE flights (
        Year bigint,
        Month bigint,
        FlightDate string,
        Flight_Number_Reporting_Airline bigint,
        OriginAirportID bigint,
        DestAirportID bigint)
      STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
- Upload the create-table.sql file to the input data bucket.
- Run the following command:

      yc dataproc job create-hive \
        --cluster-id=<cluster_ID> \
        --name=task-one \
        --query-file-uri="s3a://<input_data_bucket_name>/create-table.sql"

  Example of response to query:

      done (8s)
      id: c9qhpt6334qs********
      cluster_id: c9qkjos5sa3d********
      created_at: "2024-12-04T04:21:20.062704Z"
      started_at: "2024-12-04T04:21:27.702644Z"
      finished_at: "2024-12-04T04:21:27.702644Z"
      name: task-one
      created_by: ajefhe0o8uas********
      status: DONE
      hive_job:
        query_file_uri: s3a://<input_data_bucket_name>/create-table.sql
      application_info: {}
- Create a file named get-data.sql and put into it an SQL query for the number of flights by month:

      SELECT Month, COUNT(*) FROM flights GROUP BY Month;
- Upload the get-data.sql file to the input data bucket.
- Run the following command:

      yc dataproc job create-hive \
        --cluster-id=<cluster_ID> \
        --name=task-two \
        --query-file-uri="s3a://<input_data_bucket_name>/get-data.sql"

  Example of response to query:

      done (37s)
      id: c9q2srq817cu********
      cluster_id: c9qkjos5sa3d********
      created_at: "2024-12-04T04:24:58.480468Z"
      started_at: "2024-12-04T04:25:34.613549Z"
      finished_at: "2024-12-04T04:25:34.613549Z"
      name: task-two
      created_by: ajefhe0o8uas********
      status: DONE
      hive_job:
        query_file_uri: s3a://<input_data_bucket_name>/get-data.sql
      application_info: {}
- In the processing results bucket, go to the following folder:

      dataproc/clusters/<cluster_ID>/jobs/<Hive_job_ID>

  The job ID is specified in the YC CLI job execution command output and in the response to the API job execution request.
- Download the driveroutput.000000000 file containing the answer to your query.

  Example of response to query:

      Init job c9q2gha5hocg******** at Wed Dec 4 06:56:45 UTC 2024
      ...
      Connecting to jdbc:hive2://localhost:10000
      Connected to: Apache Hive (version 3.1.2)
      Driver: Hive JDBC (version 3.1.2)
      Transaction isolation: TRANSACTION_REPEATABLE_READ
      0: jdbc:hive2://localhost:10000> SELECT Month, COUNT(*) FROM flights GROUP BY Month;
      +--------+---------+
      | month  |   _c1   |
      +--------+---------+
      | 1      | 570118  |
      | 2      | 520731  |
      | 3      | 611987  |
      | 4      | 596046  |
      | 5      | 616529  |
      | 6      | 626193  |
      | 7      | 645299  |
      | 8      | 644673  |
      | 9      | 585749  |
      | 10     | 616101  |
      | 11     | 586178  |
      | 12     | 593842  |
      +--------+---------+
      12 rows selected (28.801 seconds)
      0: jdbc:hive2://localhost:10000> Closing: 0: jdbc:hive2://localhost:10000

  If there are errors when executing the query, information about them is also saved to the driveroutput.000000000 file.
Working with jobs in the Hive CLI
Getting started
- Create a service account with the dataproc.agent and dataproc.provisioner roles.
- Create a Yandex Data Processing cluster with the following settings:
  - Services: HDFS, HIVE, SPARK, YARN.
  - Service account: Select the service account you previously created.
  - Public access: Enable this option to access hosts of all subclusters.
Running jobs using the Hive CLI
- Connect to the master host over SSH and run the hive command.
- Test Hive by running the select 1; command. The correct result looks like this:

      OK
      1
      Time taken: 0.077 seconds, Fetched: 1 row(s)
- Create an external table in Parquet format for the data from the example. The table will contain a list of flights between US cities in 2018. Run the following query in the Hive CLI:

      CREATE EXTERNAL TABLE flights (
        Year bigint,
        Month bigint,
        FlightDate string,
        Flight_Number_Reporting_Airline bigint,
        OriginAirportID bigint,
        DestAirportID bigint)
      STORED AS PARQUET LOCATION 's3a://yc-mdb-examples/dataproc/example01/set01';
- Check the list of tables:

      show tables;

  The list of tables should look like this:

      OK
      flights
      Time taken: 0.043 seconds, Fetched: 1 row(s)
- Request the number of flights by month:

      SELECT Month, COUNT(*) FROM flights GROUP BY Month;

  Example of response to query:

      Query ID = root_20200119195338_28049b67-4de9-4568-a4c4-3bbe********
      Total jobs = 1
      Launching Job 1 out of 1
      Status: Running (Executing on YARN cluster with App id application_157925157****_****)

      ----------------------------------------------------------------------------------------------
              VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
      ----------------------------------------------------------------------------------------------
      Map 1 ..........      container     SUCCEEDED      6          6        0        0       0       0
      Reducer 2 ......      container     SUCCEEDED      1          1        0        0       0       0
      ----------------------------------------------------------------------------------------------
      VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 11.52 s
      ----------------------------------------------------------------------------------------------
      OK
      1	570118
      2	520731
      3	611987
      4	596046
      5	616529
      6	626193
      7	645299
      8	644673
      9	585749
      10	616101
      11	586178
      12	593842
      Time taken: 12.137 seconds, Fetched: 12 row(s)
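If you capture the Hive CLI output in a script, the month/count rows between OK and the "Time taken" line can be collected; a minimal sketch assuming the whitespace-separated two-column rows shown above:

```python
def parse_counts(output: str) -> dict[int, int]:
    """Collect two-column numeric rows (month, count) from Hive CLI output."""
    counts = {}
    for line in output.splitlines():
        parts = line.split()
        # Result rows are exactly two integers; status and progress lines
        # never match this shape, so they are skipped.
        if len(parts) == 2 and all(p.isdigit() for p in parts):
            month, n = (int(p) for p in parts)
            counts[month] = n
    return counts


sample = "OK\n1\t570118\n2\t520731\nTime taken: 12.137 seconds, Fetched: 12 row(s)"
print(parse_counts(sample))  # {1: 570118, 2: 520731}
```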