

Migrating databases from Google BigQuery to Yandex Managed Service for ClickHouse®

Written by
Yandex Cloud
Updated at April 25, 2025
  • Required paid resources
  • Getting started
    • Create Google Cloud resources
    • Create the Yandex Cloud resources
  • Migrate data from Google BigQuery to Yandex Object Storage
  • Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster
  • Use Yandex DataLens to analyze the data
  • Delete the resources you created

You can migrate a database from Google BigQuery to Yandex Managed Service for ClickHouse® and then use Yandex DataLens to analyze it.

The table is compressed, moved to a Google Storage bucket, and then to a Yandex Object Storage bucket. After that, the data is imported into a Managed Service for ClickHouse® cluster, where you can analyze it using Yandex DataLens.

This method of migration has the following benefits:

  • You can specify the export format and the compression ratio.
  • Less data is transferred, which speeds up the migration and reduces its cost.

However, with this method the data is migrated as is, without transformation or incremental copying of updates.

To migrate the database from Google BigQuery to Managed Service for ClickHouse®:

  1. Migrate data from Google BigQuery to Yandex Object Storage.
  2. Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster.
  3. Analyze the data with Yandex DataLens.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for ClickHouse® cluster fee: Using computing resources allocated to hosts (including ZooKeeper hosts) and disk space (see Managed Service for ClickHouse® pricing).
  • Fee for using public IP addresses if public access is enabled for cluster hosts (see Virtual Private Cloud pricing).
  • Object Storage bucket fee: Storing data and performing operations with it (see Object Storage pricing).
  • Fee for using Yandex DataLens (see DataLens pricing).

Getting started

To migrate your database, create Google Cloud and Yandex Cloud resources.

Create Google Cloud resources

  1. Create a Google Storage bucket.

  2. Create a Google Cloud service account with the BigQuery Data Editor and Storage Object Admin roles.

  3. Create an access key for the service account and save it as a .json file.

  4. Download and install the google-cloud-sdk CLI utilities.

  5. Authenticate in gcloud CLI.

  6. Install the Google BigQuery Python SDK utility. This package requires Python 3.7 or higher.

  7. Prepare a dataset for Google BigQuery. As an example, this tutorial uses the public google_trends dataset for Google BigQuery, which includes the international_top_terms table with the following columns:

    • rank
    • country_name
    • country_code
    • region_name
    • week
    • score
    • region_code
    • term
    • refresh_date
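Before exporting, you may want to confirm that the table you are working with exposes exactly these columns, for example after fetching its schema with client.get_table() from the BigQuery Python SDK. Below is a minimal stdlib-only sketch; the missing_columns helper is hypothetical, and the column set is taken from the list above:

```python
# Columns this tutorial expects in the international_top_terms table.
EXPECTED_COLUMNS = {
    "rank", "country_name", "country_code", "region_name",
    "week", "score", "region_code", "term", "refresh_date",
}

def missing_columns(actual_columns):
    """Return the expected columns absent from actual_columns, sorted by name."""
    return sorted(EXPECTED_COLUMNS - set(actual_columns))
```

For instance, missing_columns(f.name for f in table.schema) returns an empty list when the table matches the schema this tutorial assumes.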

Create the Yandex Cloud resources

  1. Create a service account with the storage.uploader role to access the Object Storage bucket.

  2. Create a static access key for the service account. Save the key ID and secret key; you will need them later.

  3. Create a Managed Service for ClickHouse® cluster with any suitable configuration. When creating the cluster:

    • Use the service account you created earlier.
    • Enable the DataLens access parameter.
    • Use ZooKeeper to enable fault tolerance.

  4. Create an Object Storage bucket. When creating the bucket, enable public access for reading objects and listing objects in the bucket.

Migrate data from Google BigQuery to Yandex Object Storage

  1. Create a file named credentials.boto with access credentials for Google Cloud and Yandex Cloud resources:

    [Credentials]
    gs_service_client_id = <Google_Cloud_service_account>
    gs_service_key_file = <absolute_path_to_JSON_file>
    aws_access_key_id = <service_account_key_ID>
    aws_secret_access_key = <service_account_secret_key>

    [GSUtil]
    default_project_id = <Google_Cloud_project_ID>

    [s3]
    calling_format = boto.s3.connection.OrdinaryCallingFormat
    host = storage.yandexcloud.net
    

    Where:

    • gs_service_client_id: Google Cloud service account name in service-account-name@project-id.iam.gserviceaccount.com format.
    • gs_service_key_file: Absolute path to the JSON file of the access key of the Google Cloud service account.
    • aws_access_key_id: Yandex Cloud service account key ID.
    • aws_secret_access_key: Yandex Cloud service account secret key.
    • default_project_id: Google Cloud project ID.
  2. Create a main.py script file for data compression and migration:

    main.py
    from google.cloud import bigquery
    import sys
    import argparse
    import time
    import subprocess
    import os
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="<absolute_path_to_JSON_file_of_Google_Cloud_service_account_access_key>"
    os.environ["BOTO_CONFIG"]="<absolute_path_to_credentials.boto_file>"
    
    def parse_args():
        parser = argparse.ArgumentParser(description='Export data from Google Big Query to Yandex Cloud object storage')
        parser.add_argument('--bq_project', type=str, help='GBQ project ID')
        parser.add_argument('--bq_location', type=str, help='GBQ table AND GS location')
        parser.add_argument('--gs_bucket', type=str, help='GS export destination bucket')
        parser.add_argument('--yc_bucket', type=str, help='YC copy destination bucket')
        parser.add_argument('--gsutil_path', type=str, help='GSutil exec path', default='gsutil')
        return parser.parse_args()
    
    def select_from_list(message, elements):
        print(message)
        print("\t{}. {}".format(0, "Export all"))
        for ind in range(len(elements)):
            if isinstance(elements[ind].reference, bigquery.DatasetReference):
                print("\t{}. {}".format(ind+1, elements[ind].reference.dataset_id))
            elif isinstance(elements[ind].reference, bigquery.TableReference):
                print("\t{}. {}".format(ind+1, elements[ind].reference.table_id))
        try:
            return int(input("(any letter for cancel) >> "))
        except ValueError:
            print("Exiting")
            sys.exit()
    
    if __name__ == '__main__':
        args = parse_args()
        client = bigquery.Client()
    
        datasets = list(client.list_datasets(args.bq_project))
        dataset_selector = select_from_list("Datasets in project {}".format(args.bq_project), datasets)
        export_list = []
        for i in range(len(datasets)):
            dataset_ref = datasets[i].reference
            if dataset_selector == 0:
                export_list += list(client.list_tables(dataset_ref))
            else:
                if i == dataset_selector - 1:
                    tables = list(client.list_tables(dataset_ref))
                    table_selector = select_from_list("Tables in dataset {}".format(dataset_ref.dataset_id),
                                                      tables)
                    for j in range(len(tables)):
                        if table_selector == 0 or j == table_selector - 1:
                            export_list.append(tables[j])
    
        print("Starting tables export")
        for n in range(len(export_list)):
            table_ref = export_list[n].reference
    
            # Creating Extract Job config. Selecting compression level and data format.
            job_config = bigquery.job.ExtractJobConfig()
            job_config.compression = bigquery.Compression.GZIP
            job_config.destination_format = bigquery.DestinationFormat.PARQUET
    
            print("Exporting {} table".format(table_ref.table_id))
            extract_job = client.extract_table(
                source=table_ref,
                destination_uris="gs://{}/{}".format(args.gs_bucket, "{}-*".format(table_ref.table_id)),
                job_id="export-job-{}-{}".format(table_ref.table_id, round(time.time() * 1000)),
                location=args.bq_location,
                job_config=job_config)
            extract_job.result()
        print("Tables export done")
    
        # Calling gsutil rsync to synchronize source and destination buckets.
        source_uri = "gs://{}/".format(args.gs_bucket)
        destination_uri = "s3://{}/".format(args.yc_bucket)
        print("Synchronizing {} with {}...".format(source_uri, destination_uri))
        proc = subprocess.Popen([args.gsutil_path, "-m", "rsync", source_uri, destination_uri],
                                stdout=sys.stdout,
                                stderr=sys.stderr)
        proc.communicate()
        print("Buckets synchronization done")
    
  3. Run the main.py script to start migrating data from Google BigQuery to the Google Storage bucket and then to the Yandex Object Storage bucket:

    python main.py \
        --bq_project=<Google_Cloud_project_ID> \
        --bq_location=US \
        --gs_bucket=<Google_Cloud_Storage_bucket_name> \
        --yc_bucket=<Object_Storage_bucket_name>
    

    Wait until the data migrates completely.
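If gsutil fails to authenticate, a mistyped key in credentials.boto is a frequent cause. As a quick local sanity check (not part of the official procedure; the section and key names are the ones from the file created in step 1), the file can be parsed with Python's configparser:

```python
import configparser

# Sections and keys that the credentials.boto file from step 1 must define.
REQUIRED = {
    "Credentials": ("gs_service_client_id", "gs_service_key_file",
                    "aws_access_key_id", "aws_secret_access_key"),
    "GSUtil": ("default_project_id",),
    "s3": ("calling_format", "host"),
}

def missing_keys(text):
    """Return '[section] key' entries absent from the given config text."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    missing = []
    for section, keys in REQUIRED.items():
        for key in keys:
            # has_option() raises NoSectionError for unknown sections,
            # so check has_section() first.
            if not cfg.has_section(section) or not cfg.has_option(section, key):
                missing.append(f"[{section}] {key}")
    return missing
```

Calling missing_keys(open("credentials.boto").read()) returns an empty list for a complete file, and names every missing entry otherwise.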

Set up the mapping of data from Yandex Object Storage to the Managed Service for ClickHouse® cluster

  1. To create a view based on the imported data, connect to the Managed Service for ClickHouse® cluster database and run the following SQL query:

    CREATE VIEW db1.v$google_top_rising_terms ON CLUSTER '{cluster}' AS
    (SELECT
    term,
    score,
    rank,
    country_name,
    country_code,
    region_name,
    region_code,
    week,
    refresh_date
    FROM s3Cluster(
      '<cluster_ID>',
      'https://storage.yandexcloud.net/<Object_Storage_bucket_name>/top_terms-*',
      'Parquet',
      'rank Int32,
      country_name String,
      country_code String,
      region_name String,
      week Timestamp,
      score Nullable(Int32),
      region_code String,
      term String,
      refresh_date Timestamp')
    )
    

    Where:

    • db1: Name of the database in the Managed Service for ClickHouse® cluster where you want to create a view.
    • v$google_top_rising_terms: Name of the view for the imported data.
    • <cluster_ID>: Managed Service for ClickHouse® cluster ID. You can retrieve it with a list of clusters in the folder.
    • top_terms-*: Key part of the names of the objects in the Object Storage bucket. For example, if you migrate a table named top_terms from Google Cloud, the bucket will contain a set of objects with names like top_terms-000000000001, top_terms-000000000002, and so on. Specify top_terms-* in the SQL query so that the view includes all the objects of that table.
  2. To output the first 100 entries from the view, run the following SQL query (the example uses the v$google_top_rising_terms view and the db1 database):

    SELECT * FROM db1.v$google_top_rising_terms LIMIT 100
    
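The structure string passed to s3Cluster() must list the columns in the same order and with the same types as the exported Parquet files. A small sketch that assembles that string from the column list used in this tutorial; the schema_string helper is hypothetical:

```python
# (name, ClickHouse type) pairs matching the tutorial's s3Cluster() schema.
COLUMNS = [
    ("rank", "Int32"),
    ("country_name", "String"),
    ("country_code", "String"),
    ("region_name", "String"),
    ("week", "Timestamp"),
    ("score", "Nullable(Int32)"),
    ("region_code", "String"),
    ("term", "String"),
    ("refresh_date", "Timestamp"),
]

def schema_string(columns):
    """Join (name, type) pairs into the structure argument of s3Cluster()."""
    return ", ".join(f"{name} {ctype}" for name, ctype in columns)
```

schema_string(COLUMNS) yields "rank Int32, country_name String, ...", which can be interpolated into the query if the column set ever changes.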

Use Yandex DataLens to analyze the data

  1. Connect the Managed Service for ClickHouse® cluster to DataLens.

  2. Create a dataset from the db1.v$google_top_rising_terms table. For the score field, select the average aggregation.

  3. Create a bar chart:

    1. Drag the country_name field to the X section.
    2. Drag the score field to the Y section.
    3. Drag the term field to the Filters section. In the resulting form, enter the settings:
      • Operation: Belongs to a set.
      • Available: Enter a term from a list of available terms, then click Apply filter.
    4. Drag the term field to the Sorting section.

The system will analyze how often this term is used in search queries, and the result will be displayed as a bar chart broken down by country.

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for ClickHouse® cluster.
  2. Delete all objects from the Object Storage bucket, then delete the bucket.
  3. Delete the Google Storage bucket.

ClickHouse® is a registered trademark of ClickHouse, Inc.

© 2025 Direct Cursus Technology L.L.C.