Migrating databases from Google BigQuery to Managed Service for ClickHouse®
You can migrate a database from Google BigQuery to Yandex Managed Service for ClickHouse® and then use Yandex DataLens to analyze it.
During migration, each table is compressed, moved to a Google Storage bucket, and then to a Yandex Object Storage bucket. After that, the data is imported into a Managed Service for ClickHouse® cluster, where you can analyze it using Yandex DataLens.
This method of migration has the following benefits:
- You can specify the export format and the compression ratio.
- Lower data amounts are involved, which accelerates migration and reduces its cost.
However, with this method the data is migrated "as is", without any transformation and without copying incremental updates.
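For reference, in the BigQuery Python SDK the export format and compression level are set on the extract job configuration. The snippet below is only a sketch of those settings: the same GZIP/Parquet values are used in the `main.py` script later in this tutorial, and the alternatives shown in the comments are standard BigQuery options.

```python
from google.cloud import bigquery

# Export settings applied to each table: GZIP compression and Parquet output.
job_config = bigquery.job.ExtractJobConfig()
job_config.compression = bigquery.Compression.GZIP                    # other options: NONE, DEFLATE, SNAPPY
job_config.destination_format = bigquery.DestinationFormat.PARQUET    # other options: CSV, AVRO, NEWLINE_DELIMITED_JSON
```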
To migrate the database from Google BigQuery to Managed Service for ClickHouse®:
- Migrate data from Google BigQuery to Yandex Object Storage.
- Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster.
- Analyze the data with Yandex DataLens.
If you no longer need the resources you created, delete them.
Getting started
To migrate your database, create Google Cloud and Yandex Cloud resources.
Create Google Cloud resources
- Create a Google Cloud service account with the BigQuery Data Editor and Storage Object Admin roles.
- Create an access key for the service account and save it as a `.json` file.
- Install the Google BigQuery Python SDK utility. This package requires Python 3.7 or higher. You can check that the SDK and the access key work with the sketch after this list.
- Prepare a dataset for Google BigQuery. As an example, here we use the `google_trends` public dataset for Google BigQuery, which includes the `international_top_terms` table with the following columns:
  - `rank`
  - `country_name`
  - `country_code`
  - `region_name`
  - `week`
  - `score`
  - `region_code`
  - `term`
  - `refresh_date`
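Optionally, you can make sure the Google BigQuery Python SDK and the service account key are set up correctly before starting the migration. The snippet below is a minimal sketch and not part of the migration itself; the key path and project ID are placeholders.

```python
# Minimal check: list the datasets visible to the Google Cloud service account.
import os
from google.cloud import bigquery

# Placeholder: absolute path to the JSON access key of the Google Cloud service account.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "<absolute_path_to_JSON_file>"

client = bigquery.Client()
for dataset in client.list_datasets("<Google_Cloud_project_ID>"):
    print(dataset.dataset_id)
```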
Create Yandex Cloud resources
- Create a service account with the `storage.uploader` role to access the Object Storage bucket.
- Create a static access key for the service account. Save the key ID and the secret key; you will need them later. You can verify the key with the sketch after this list.
- Create a Managed Service for ClickHouse® cluster with any suitable configuration. When creating the cluster:
  - Use the service account you created earlier.
  - Enable the DataLens access parameter.
- Create an Object Storage bucket. When creating the bucket, enable public access to read objects and list objects in the bucket.
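Optionally, you can verify the static access key against the bucket before starting the migration. This is a minimal sketch assuming boto3 is installed (it is not required by the migration itself); the key ID, secret key, and bucket name are placeholders.

```python
# Minimal check: confirm that the static access key can reach the Object Storage bucket.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.yandexcloud.net",   # Object Storage endpoint
    aws_access_key_id="<service_account_key_ID>",
    aws_secret_access_key="<service_account_secret_key>",
)

# Raises an error if the bucket does not exist or the key has no access to it.
s3.head_bucket(Bucket="<Object_Storage_bucket_name>")
print("Bucket is reachable")
```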
Migrate data from Google BigQuery to Yandex Object Storage
- Create a file named `credentials.boto` with access credentials for Google Cloud and Yandex Cloud resources:

  ```ini
  [Credentials]
  gs_service_client_id = <Google_Cloud_service_account>
  gs_service_key_file = <absolute_path_to_JSON_file>
  aws_access_key_id = <service_account_key_ID>
  aws_secret_access_key = <service_account_secret_key>

  [GSUtil]
  default_project_id = <Google_Cloud_project_ID>

  [s3]
  calling_format = boto.s3.connection.OrdinaryCallingFormat
  host = storage.yandexcloud.net
  ```

  Where:

  - `gs_service_client_id`: Google Cloud service account name in `service-account-name@project-id.iam.gserviceaccount.com` format.
  - `gs_service_key_file`: Absolute path to the JSON file with the access key of the Google Cloud service account.
  - `aws_access_key_id`: Yandex Cloud service account key ID.
  - `aws_secret_access_key`: Yandex Cloud service account secret key.
  - `default_project_id`: Google Cloud project ID.
- Create a `main.py` script file for data compression and migration:

  ```python
  from google.cloud import bigquery
  import sys
  import argparse
  import time
  import subprocess
  import os

  os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "<absolute_path_to_JSON_file_of_Google_Cloud_service_account_access_key>"
  os.environ["BOTO_CONFIG"] = "<absolute_path_to_credentials.boto_file>"

  def parse_args():
      parser = argparse.ArgumentParser(description='Export data from Google Big Query to Yandex Cloud object storage')
      parser.add_argument('--bq_project', type=str, help='GBQ project ID')
      parser.add_argument('--bq_location', type=str, help='GBQ table AND GS location')
      parser.add_argument('--gs_bucket', type=str, help='GS export destination bucket')
      parser.add_argument('--yc_bucket', type=str, help='YC copy destination bucket')
      parser.add_argument('--gsutil_path', type=str, help='GSutil exec path', default='gsutil')
      return parser.parse_args()

  def select_from_list(message, elements):
      print(message)
      print("\t{}. {}".format(0, "Export all"))
      for ind in range(len(elements)):
          if isinstance(elements[ind].reference, bigquery.DatasetReference):
              print("\t{}. {}".format(ind+1, elements[ind].reference.dataset_id))
          elif isinstance(elements[ind].reference, bigquery.TableReference):
              print("\t{}. {}".format(ind+1, elements[ind].reference.table_id))
      try:
          return int(input("(any letter for cancel) >> "))
      except ValueError:
          print("Exiting")
          sys.exit()

  if __name__ == '__main__':
      args = parse_args()
      client = bigquery.Client()

      datasets = list(client.list_datasets(args.bq_project))
      dataset_selector = select_from_list("Datasets in project {}".format(args.bq_project), datasets)
      export_list = []
      for i in range(len(datasets)):
          dataset_ref = datasets[i].reference
          if dataset_selector == 0:
              export_list += list(client.list_tables(dataset_ref))
          else:
              if i == dataset_selector - 1:
                  tables = list(client.list_tables(dataset_ref))
                  table_selector = select_from_list("Tables in dataset {}".format(dataset_ref.dataset_id), tables)
                  for j in range(len(tables)):
                      if table_selector == 0 or j == table_selector - 1:
                          export_list.append(tables[j])

      print("Starting tables export")
      for n in range(len(export_list)):
          table_ref = export_list[n].reference
          # Creating Extract Job config. Selecting compression level and data format.
          job_config = bigquery.job.ExtractJobConfig()
          job_config.compression = bigquery.Compression.GZIP
          job_config.destination_format = bigquery.DestinationFormat.PARQUET
          print("Exporting {} table".format(table_ref.table_id))
          extract_job = client.extract_table(
              source=table_ref,
              destination_uris="gs://{}/{}".format(args.gs_bucket, "{}-*".format(table_ref.table_id)),
              job_id="export-job-{}-{}".format(table_ref.table_id, round(time.time() * 1000)),
              location=args.bq_location,
              job_config=job_config)
          extract_job.result()
      print("Tables export done")

      # Calling gsutil rsync to synchronize source and destination buckets.
      source_uri = "gs://{}/".format(args.gs_bucket)
      destination_uri = "s3://{}/".format(args.yc_bucket)
      print("Synchronizing {} with {}...".format(source_uri, destination_uri))
      proc = subprocess.Popen([args.gsutil_path, "-m", "rsync", source_uri, destination_uri], stdout=sys.stdout, stderr=sys.stderr)
      proc.communicate()
      print("Buckets synchronization done")
  ```
- Run the `main.py` script to start migrating the data from Google BigQuery to the Google Storage bucket and then to the Yandex Object Storage bucket:

  ```bash
  python main.py \
      --bq_project=<Google_Cloud_project_ID> \
      --bq_location=US \
      --gs_bucket=<Google_Cloud_Storage_bucket_name> \
      --yc_bucket=<Object_Storage_bucket_name>
  ```

  Wait until the data migration is complete.
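Before moving on, you can confirm that the exported objects arrived in the Object Storage bucket. This is a minimal sketch that assumes boto3 is installed; the key, bucket name, and the `top_terms-` prefix (used as an example in the next step) should be replaced with your own values.

```python
# Minimal check: list the exported Parquet objects in the Object Storage bucket.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.yandexcloud.net",
    aws_access_key_id="<service_account_key_ID>",
    aws_secret_access_key="<service_account_secret_key>",
)

# The exported table arrives as a set of objects such as top_terms-000000000001, top_terms-000000000002, ...
response = s3.list_objects_v2(Bucket="<Object_Storage_bucket_name>", Prefix="top_terms-")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```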
Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster
- To create a view based on the imported data, connect to the Managed Service for ClickHouse® cluster database and run the following SQL query:

  ```sql
  CREATE VIEW db1.v$google_top_rising_terms ON CLUSTER '{cluster}' AS
  (SELECT
      term,
      score,
      rank,
      country_name,
      country_code,
      region_name,
      region_code,
      week,
      refresh_date
  FROM s3Cluster(
      '<cluster_ID>',
      'https://storage.yandexcloud.net/<Object_Storage_bucket_name>/top_terms-*',
      'Parquet',
      'rank Int32, country_name String, country_code String, region_name String, week Timestamp, score Nullable(Int32), region_code String, term String, refresh_date Timestamp'))
  ```

  Where:

  - `db1`: Name of the database in the Managed Service for ClickHouse® cluster where you want to create the view.
  - `v$google_top_rising_terms`: Name of the view for the imported data.
  - `<cluster_ID>`: Managed Service for ClickHouse® cluster ID. You can retrieve it with the list of clusters in the folder.
  - `top_terms-*`: Key part of the names of the objects in the Object Storage bucket. For example, if the table you move from Google Cloud produces objects whose names contain `top_terms`, the Object Storage bucket will hold a set of objects with names like `top_terms-000000000001`, `top_terms-000000000002`, and so on. In this case, specify `top_terms-*` in the SQL query so that the view includes all such objects from that table.
- To output the first 100 entries from the created view, run the following SQL query (the example uses the `v$google_top_rising_terms` view and the `db1` database):

  ```sql
  SELECT * FROM db1.v$google_top_rising_terms LIMIT 100
  ```
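If you prefer to run the check from Python, the sketch below queries the view over the standard ClickHouse® HTTP interface. This is only an illustration: the host FQDN, database user, password, and the path to the CA certificate used to verify the cluster's TLS certificate are placeholders that depend on your cluster setup.

```python
# Minimal sketch: query the view over the ClickHouse HTTP interface (TLS port 8443).
import requests

host = "<ClickHouse_host_FQDN>"          # any host of the Managed Service for ClickHouse cluster
query = "SELECT count() FROM db1.v$google_top_rising_terms"

response = requests.post(
    f"https://{host}:8443/",
    data=query,
    headers={
        "X-ClickHouse-User": "<database_user>",
        "X-ClickHouse-Key": "<database_user_password>",
    },
    verify="<path_to_CA_certificate>",   # CA certificate for the cluster's TLS endpoint
)
response.raise_for_status()
print(response.text)
```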
Use Yandex DataLens to analyze the data
- Connect the Managed Service for ClickHouse® cluster to DataLens.
- Create a dataset from the `db1.v$google_top_rising_terms` table. For the `score` field, select the average aggregation.
- Create a bar chart:
  - Drag the `country_name` field to the X section.
  - Drag the `score` field to the Y section.
  - Drag the `term` field to the Filters section. In the resulting form, enter the settings:
    - Operation: Belongs to a set.
    - Available: Enter a term from the list of available terms, then click Apply filter.
  - Drag the `term` field to the Sorting section.

The system will analyze how the selected term is used in search, and the result will be output as a bar chart broken down by country.
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for ClickHouse® cluster.
- Delete all objects from the Object Storage bucket, then delete the bucket.
- Delete the Google Storage bucket.
ClickHouse® is a registered trademark of ClickHouse, Inc.