
Migrating databases from Google BigQuery to Managed Service for ClickHouse®

Written by
Yandex Cloud
Updated on April 25, 2025
  • Required paid resources
  • Getting started
    • Create Google Cloud resources
    • Create the Yandex Cloud resources
  • Migrate data from Google BigQuery to Yandex Object Storage
  • Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster
  • Use Yandex DataLens to analyze the data
  • Delete the resources you created

You can migrate a database from Google BigQuery to Yandex Managed Service for ClickHouse® and then use Yandex DataLens to analyze it.

The table is compressed and moved to a Google Storage bucket, then transferred to a Yandex Object Storage bucket. After that, the data is imported into a Managed Service for ClickHouse® cluster, where you can analyze it using Yandex DataLens.

This method of migration has the following benefits:

  • You can specify the export format and the compression ratio.
  • Less data is transferred, which speeds up the migration and reduces its cost.

However, with this method, the data is migrated as is, without transformation or incremental copying of updates.

To migrate the database from Google BigQuery to Managed Service for ClickHouse®:

  1. Migrate data from Google BigQuery to Yandex Object Storage.
  2. Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster.
  3. Analyze the data with Yandex DataLens.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for ClickHouse® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Managed Service for ClickHouse® pricing).
  • Fee for using public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Object Storage bucket fee: Storing data and performing operations with it (see Object Storage pricing).
  • Fee for using Yandex DataLens (see DataLens pricing).

Getting started

To migrate your database, create Google Cloud and Yandex Cloud resources.

Create Google Cloud resources

  1. Create a Google Storage bucket.

  2. Create a Google Cloud service account with the BigQuery Data Editor and Storage Object Admin roles.

  3. Create an access key for the service account and save it as a .json file.

  4. Download and install the google-cloud-sdk CLI utilities.

  5. Authenticate in the gcloud CLI.

  6. Install the Google BigQuery Python SDK utility. This package requires Python 3.7 or higher.

  7. Prepare a dataset for Google BigQuery. As an example, this tutorial uses the google_trends public dataset for Google BigQuery, which includes the international_top_terms table with the following columns (a quick access check is sketched after this list):

    • rank
    • country_name
    • country_code
    • region_name
    • week
    • score
    • region_code
    • term
    • refresh_date
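
To check that your Google Cloud credentials can read this dataset before exporting, you can run a short query against it. This is a minimal sketch, not part of the official tutorial; it assumes the google-cloud-bigquery package is installed, GOOGLE_APPLICATION_CREDENTIALS points to your service account key file, and the account is allowed to run query jobs in your project:

    from google.cloud import bigquery

    # Credentials are picked up from GOOGLE_APPLICATION_CREDENTIALS.
    client = bigquery.Client(project="<Google_Cloud_project_ID>")

    # Read a few rows from the public dataset used in this tutorial.
    query = """
        SELECT term, country_name, score
        FROM `bigquery-public-data.google_trends.international_top_terms`
        LIMIT 5
    """
    for row in client.query(query).result():
        print(row.term, row.country_name, row.score)

If the rows print, the dataset is reachable with your credentials.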

Create the Yandex Cloud resources

  1. Create a service account with the storage.uploader role to access the Object Storage bucket.

  2. Create a static access key for the service account. Save the key ID and secret key; you will need them later.

  3. Create a Managed Service for ClickHouse® cluster with any suitable configuration. When creating the cluster:

    • Use the service account you created earlier.
    • Enable the DataLens access parameter.
    • Use ZooKeeper to enable fault tolerance.

  4. Create an Object Storage bucket. When creating the bucket, enable public access to read objects and list objects in the bucket. You can verify the key and the bucket with the sketch after this list.
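
To confirm that the static key can write to the new bucket, you can upload a small probe object. A minimal sketch, not part of the official tutorial; it assumes the boto3 package is installed, and the key ID, secret key, and bucket name are placeholders to fill in:

    import boto3

    # Yandex Object Storage exposes an S3-compatible API at this endpoint.
    session = boto3.session.Session(
        aws_access_key_id="<service_account_key_ID>",
        aws_secret_access_key="<service_account_secret_key>",
    )
    s3 = session.client("s3", endpoint_url="https://storage.yandexcloud.net")

    # The storage.uploader role allows writing objects.
    s3.put_object(Bucket="<Object_Storage_bucket_name>", Key="probe.txt", Body=b"ok")
    print("Upload succeeded")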

Migrate data from Google BigQuery to Yandex Object Storage

  1. Create a file named credentials.boto with access credentials for Google Cloud and Yandex Cloud resources:

    [Credentials]
    gs_service_client_id=<Google_Cloud_service_account>
    gs_service_key_file=<absolute_path_to_JSON_file>
    aws_access_key_id=<service_account_key_ID>
    aws_secret_access_key=<service_account_secret_key>

    [GSUtil]
    default_project_id=<Google_Cloud_project_ID>

    [s3]
    calling_format=boto.s3.connection.OrdinaryCallingFormat
    host=storage.yandexcloud.net
    

    Where:

    • gs_service_client_id: Google Cloud service account name in service-account-name@project-id.iam.gserviceaccount.com format.
    • gs_service_key_file: Absolute path to the JSON file of the access key of the Google Cloud service account.
    • aws_access_key_id: Yandex Cloud service account key ID.
    • aws_secret_access_key: Yandex Cloud service account secret key.
    • default_project_id: Google Cloud project ID.
  2. Create a main.py script file for data compression and migration:

    main.py
    from google.cloud import bigquery
    import sys
    import argparse
    import time
    import subprocess
    import os
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="<absolute_path_to_JSON_file_of_Google_Cloud_service_account_access_key>"
    os.environ["BOTO_CONFIG"]="<absolute_path_to_credentials.boto_file>"
    
    def parse_args():
        parser = argparse.ArgumentParser(description='Export data from Google BigQuery to Yandex Cloud object storage')
        parser.add_argument('--bq_project', type=str, help='GBQ project ID')
        parser.add_argument('--bq_location', type=str, help='GBQ table AND GS location')
        parser.add_argument('--gs_bucket', type=str, help='GS export destination bucket')
        parser.add_argument('--yc_bucket', type=str, help='YC copy destination bucket')
        parser.add_argument('--gsutil_path', type=str, help='GSutil exec path', default='gsutil')
        return parser.parse_args()
    
    def select_from_list(message, elements):
        print(message)
        print("\t{}. {}".format(0, "Export all"))
        for ind in range(len(elements)):
            if isinstance(elements[ind].reference, bigquery.DatasetReference):
                print("\t{}. {}".format(ind+1, elements[ind].reference.dataset_id))
            elif isinstance(elements[ind].reference, bigquery.TableReference):
                print("\t{}. {}".format(ind+1, elements[ind].reference.table_id))
        try:
            return int(input("(any letter for cancel) >> "))
        except ValueError:
            print("Exiting")
            sys.exit()
    
    if __name__ == '__main__':
        args = parse_args()
        client = bigquery.Client()
    
        datasets = list(client.list_datasets(args.bq_project))
        dataset_selector = select_from_list("Datasets in project {}".format(args.bq_project), datasets)
        export_list = []
        for i in range(len(datasets)):
            dataset_ref = datasets[i].reference
            if dataset_selector == 0:
                export_list += list(client.list_tables(dataset_ref))
            else:
                if i == dataset_selector - 1:
                    tables = list(client.list_tables(dataset_ref))
                    table_selector = select_from_list("Tables in dataset {}".format(dataset_ref.dataset_id),
                                                      tables)
                    for j in range(len(tables)):
                        if table_selector == 0 or j == table_selector - 1:
                            export_list.append(tables[j])
    
        print("Starting tables export")
        for n in range(len(export_list)):
            table_ref = export_list[n].reference
    
            # Creating Extract Job config. Selecting compression level and data format.
            job_config = bigquery.job.ExtractJobConfig()
            job_config.compression = bigquery.Compression.GZIP
            job_config.destination_format = bigquery.DestinationFormat.PARQUET
    
            print("Exporting {} table".format(table_ref.table_id))
            extract_job = client.extract_table(
                source=table_ref,
                destination_uris="gs://{}/{}".format(args.gs_bucket, "{}-*".format(table_ref.table_id)),
                job_id="export-job-{}-{}".format(table_ref.table_id, round(time.time() * 1000)),
                location=args.bq_location,
                job_config=job_config)
            extract_job.result()
        print("Tables export done")
    
        # Calling gsutil rsync to synchronize source and destination buckets.
        source_uri = "gs://{}/".format(args.gs_bucket)
        destination_uri = "s3://{}/".format(args.yc_bucket)
        print("Synchronizing {} with {}...".format(source_uri, destination_uri))
        proc = subprocess.Popen([args.gsutil_path, "-m", "rsync", source_uri, destination_uri],
                                stdout=sys.stdout,
                                stderr=sys.stderr)
        proc.communicate()
        print("Buckets synchronization done")
    
  3. Run the main.py script to start migrating data from Google BigQuery to the Google Storage bucket and then to the Yandex Object Storage bucket:

    python main.py \
        --bq_project=<Google_Cloud_project_ID> \
        --bq_location=US \
        --gs_bucket=<Google_Cloud_Storage_bucket_name> \
        --yc_bucket=<Object_Storage_bucket_name>
    

    Wait until the data migrates completely.
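
    To confirm that the export reached Object Storage, you can list the migrated objects. A minimal sketch, not part of the official tutorial; it assumes boto3 is installed, and because the bucket allows public object listing, unsigned (anonymous) requests are sufficient. The bucket name and object prefix are placeholders:

        import boto3
        from botocore import UNSIGNED
        from botocore.config import Config

        # Anonymous access works because the bucket allows public listing.
        s3 = boto3.client(
            "s3",
            endpoint_url="https://storage.yandexcloud.net",
            config=Config(signature_version=UNSIGNED),
        )
        resp = s3.list_objects_v2(
            Bucket="<Object_Storage_bucket_name>",
            Prefix="<table_name>-",
        )
        for obj in resp.get("Contents", []):
            print(obj["Key"], obj["Size"])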

Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster

  1. To create a view based on the imported data, connect to the Managed Service for ClickHouse® cluster database and run the following SQL query:

    CREATE VIEW db1.v$google_top_rising_terms ON CLUSTER '{cluster}' AS
    (SELECT
    term,
    score,
    rank,
    country_name,
    country_code,
    region_name,
    region_code,
    week,
    refresh_date
    FROM s3Cluster(
      '<cluster_ID>',
      'https://storage.yandexcloud.net/<Object_Storage_bucket_name>/top_terms-*',
      'Parquet',
      'rank Int32,
      country_name String,
      country_code String,
      region_name String,
      week Timestamp,
      score Nullable(Int32),
      region_code String,
      term String,
      refresh_date Timestamp')
    )
    

    Where:

    • db1: Name of the database in the Managed Service for ClickHouse® cluster where you want to create a view.
    • v$google_top_rising_terms: Name of the view for the imported data.
    • <cluster_ID>: Managed Service for ClickHouse® cluster ID. You can retrieve it with a list of clusters in the folder.
    • top_terms-*: Key part of the names of the objects in the Object Storage bucket. For example, if you migrate a table named top_terms from Google Cloud, the bucket will contain a set of objects named top_terms-000000000001, top_terms-000000000002, and so on. Specify top_terms-* in the SQL query so the view includes all the objects of that table.
  2. To output the first 100 rows from the view, run this SQL query (the example uses the v$google_top_rising_terms view in the db1 database):

    SELECT * FROM db1.v$google_top_rising_terms LIMIT 100
    
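
You can run the same check from Python. A minimal sketch, not part of the official tutorial; it assumes the clickhouse-driver package is installed, that the cluster accepts TLS connections on port 9440 (the usual setup for managed ClickHouse® clusters), and that the host, credentials, and CA certificate path below are placeholders:

    from clickhouse_driver import Client

    # Placeholders: fill in your cluster host FQDN, user, password, and CA path.
    client = Client(
        host="<ClickHouse_host_FQDN>",
        port=9440,
        user="<username>",
        password="<password>",
        secure=True,
        ca_certs="<path_to_CA_certificate>",
    )

    rows = client.execute("SELECT * FROM db1.v$google_top_rising_terms LIMIT 100")
    print(len(rows), "rows fetched")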

Use Yandex DataLens to analyze the data

  1. Connect the Managed Service for ClickHouse® cluster to DataLens.

  2. Create a dataset from the db1.v$google_top_rising_terms table. For the score field, select the average aggregation.

  3. Create a bar chart:

    1. Drag the country_name field to the X section.
    2. Drag the score field to the Y section.
    3. Drag the term field to the Filters section. In the form that opens, specify:
      • Operation: Belongs to a set.
      • Available: Enter a term from a list of available terms, then click Apply filter.
    4. Drag the term field to the Sorting section.

The system will analyze how the selected term was used in search queries and display the result as a bar chart broken down by country.

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for ClickHouse® cluster.
  2. Delete all objects from the Object Storage bucket, then delete the bucket.
  3. Delete the Google Storage bucket.

ClickHouse® is a registered trademark of ClickHouse, Inc.
