Managed Service for ClickHouse API, gRPC: ClusterService.Create
- gRPC request
- CreateClusterRequest
- ConfigSpec
- Clickhouse
- ClickhouseConfig
- AccessControlImprovements
- MergeTree
- Compression
- ExternalDictionary
- Structure
- Id
- Key
- Attribute
- Layout
- Range
- HttpSource
- Header
- MysqlSource
- Replica
- ClickhouseSource
- MongodbSource
- PostgresqlSource
- GraphiteRollup
- Pattern
- Retention
- Kafka
- KafkaTopic
- Rabbitmq
- QueryMaskingRule
- QueryCache
- JdbcBridge
- Macro
- Resources
- DiskSizeAutoscaling
- Zookeeper
- Access
- CloudStorage
- DatabaseSpec
- UserSpec
- Permission
- UserSettings
- UserQuota
- HostSpec
- MaintenanceWindow
- AnytimeMaintenanceWindow
- WeeklyMaintenanceWindow
- ShardSpec
- ShardConfigSpec
- Clickhouse
- operation.Operation
- CreateClusterMetadata
- Cluster
- Monitoring
- ClusterConfig
- Clickhouse
- ClickhouseConfigSet
- ClickhouseConfig
- AccessControlImprovements
- MergeTree
- Compression
- ExternalDictionary
- Structure
- Id
- Key
- Attribute
- Layout
- Range
- HttpSource
- Header
- MysqlSource
- Replica
- ClickhouseSource
- MongodbSource
- PostgresqlSource
- GraphiteRollup
- Pattern
- Retention
- Kafka
- KafkaTopic
- Rabbitmq
- QueryMaskingRule
- QueryCache
- JdbcBridge
- Macro
- Resources
- DiskSizeAutoscaling
- Zookeeper
- Access
- CloudStorage
- MaintenanceWindow
- AnytimeMaintenanceWindow
- WeeklyMaintenanceWindow
- MaintenanceOperation
Creates a ClickHouse cluster in the specified folder.
gRPC request
rpc Create (CreateClusterRequest) returns (operation.Operation)
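For orientation, the sketch below shows one way to invoke this RPC from Python with the yandexcloud SDK. It is a minimal sketch, not the canonical client: the module paths under yandex.cloud.mdb.clickhouse.v1, the sdk.client() helper, and passing the Environment enum by name are assumptions based on the publicly generated bindings, so verify them against the SDK version you actually use.

```python
# Minimal sketch of calling ClusterService.Create (assumptions: the yandexcloud
# Python SDK is installed and the generated stubs live under
# yandex.cloud.mdb.clickhouse.v1 as cluster_service_pb2 / cluster_service_pb2_grpc).
import yandexcloud
from yandex.cloud.mdb.clickhouse.v1 import cluster_service_pb2, cluster_service_pb2_grpc

sdk = yandexcloud.SDK(token="<OAuth or IAM token>")               # authenticated gRPC channel
client = sdk.client(cluster_service_pb2_grpc.ClusterServiceStub)

# Only a few top-level fields are shown here; config_spec, host_specs,
# database_specs and user_specs are also required (see the sketches below
# the field tables).
request = cluster_service_pb2.CreateClusterRequest(
    folder_id="<folder ID>",
    name="my-clickhouse",
    environment="PRODUCTION",      # enum Environment, passed by name; use the
                                   # generated constant if your runtime rejects strings
    network_id="<network ID>",
)

operation = client.Create(request)   # returns a long-running operation.Operation
print(operation.id)
```

The call returns immediately with an Operation; poll it (or use the SDK's operation-waiting helper, if your version provides one) until the cluster is created.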
CreateClusterRequest
{
"folder_id": "string",
"name": "string",
"description": "string",
"labels": "map<string, string>",
"environment": "Environment",
"config_spec": {
"version": "string",
"clickhouse": {
"config": {
"background_pool_size": "google.protobuf.Int64Value",
"background_merges_mutations_concurrency_ratio": "google.protobuf.Int64Value",
"background_schedule_pool_size": "google.protobuf.Int64Value",
"background_fetches_pool_size": "google.protobuf.Int64Value",
"background_move_pool_size": "google.protobuf.Int64Value",
"background_distributed_schedule_pool_size": "google.protobuf.Int64Value",
"background_buffer_flush_schedule_pool_size": "google.protobuf.Int64Value",
"background_message_broker_schedule_pool_size": "google.protobuf.Int64Value",
"background_common_pool_size": "google.protobuf.Int64Value",
"dictionaries_lazy_load": "google.protobuf.BoolValue",
"log_level": "LogLevel",
"query_log_retention_size": "google.protobuf.Int64Value",
"query_log_retention_time": "google.protobuf.Int64Value",
"query_thread_log_enabled": "google.protobuf.BoolValue",
"query_thread_log_retention_size": "google.protobuf.Int64Value",
"query_thread_log_retention_time": "google.protobuf.Int64Value",
"part_log_retention_size": "google.protobuf.Int64Value",
"part_log_retention_time": "google.protobuf.Int64Value",
"metric_log_enabled": "google.protobuf.BoolValue",
"metric_log_retention_size": "google.protobuf.Int64Value",
"metric_log_retention_time": "google.protobuf.Int64Value",
"trace_log_enabled": "google.protobuf.BoolValue",
"trace_log_retention_size": "google.protobuf.Int64Value",
"trace_log_retention_time": "google.protobuf.Int64Value",
"text_log_enabled": "google.protobuf.BoolValue",
"text_log_retention_size": "google.protobuf.Int64Value",
"text_log_retention_time": "google.protobuf.Int64Value",
"text_log_level": "LogLevel",
"opentelemetry_span_log_enabled": "google.protobuf.BoolValue",
"opentelemetry_span_log_retention_size": "google.protobuf.Int64Value",
"opentelemetry_span_log_retention_time": "google.protobuf.Int64Value",
"query_views_log_enabled": "google.protobuf.BoolValue",
"query_views_log_retention_size": "google.protobuf.Int64Value",
"query_views_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_metric_log_enabled": "google.protobuf.BoolValue",
"asynchronous_metric_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_metric_log_retention_time": "google.protobuf.Int64Value",
"session_log_enabled": "google.protobuf.BoolValue",
"session_log_retention_size": "google.protobuf.Int64Value",
"session_log_retention_time": "google.protobuf.Int64Value",
"zookeeper_log_enabled": "google.protobuf.BoolValue",
"zookeeper_log_retention_size": "google.protobuf.Int64Value",
"zookeeper_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_insert_log_enabled": "google.protobuf.BoolValue",
"asynchronous_insert_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_insert_log_retention_time": "google.protobuf.Int64Value",
"processors_profile_log_enabled": "google.protobuf.BoolValue",
"processors_profile_log_retention_size": "google.protobuf.Int64Value",
"processors_profile_log_retention_time": "google.protobuf.Int64Value",
"error_log_enabled": "google.protobuf.BoolValue",
"error_log_retention_size": "google.protobuf.Int64Value",
"error_log_retention_time": "google.protobuf.Int64Value",
"access_control_improvements": {
"select_from_system_db_requires_grant": "google.protobuf.BoolValue",
"select_from_information_schema_requires_grant": "google.protobuf.BoolValue"
},
"max_connections": "google.protobuf.Int64Value",
"max_concurrent_queries": "google.protobuf.Int64Value",
"max_table_size_to_drop": "google.protobuf.Int64Value",
"max_partition_size_to_drop": "google.protobuf.Int64Value",
"keep_alive_timeout": "google.protobuf.Int64Value",
"uncompressed_cache_size": "google.protobuf.Int64Value",
"mark_cache_size": "google.protobuf.Int64Value",
"timezone": "string",
"geobase_enabled": "google.protobuf.BoolValue",
"geobase_uri": "string",
"default_database": "google.protobuf.StringValue",
"total_memory_profiler_step": "google.protobuf.Int64Value",
"total_memory_tracker_sample_probability": "google.protobuf.DoubleValue",
"async_insert_threads": "google.protobuf.Int64Value",
"backup_threads": "google.protobuf.Int64Value",
"restore_threads": "google.protobuf.Int64Value",
"merge_tree": {
"parts_to_delay_insert": "google.protobuf.Int64Value",
"parts_to_throw_insert": "google.protobuf.Int64Value",
"inactive_parts_to_delay_insert": "google.protobuf.Int64Value",
"inactive_parts_to_throw_insert": "google.protobuf.Int64Value",
"max_avg_part_size_for_too_many_parts": "google.protobuf.Int64Value",
"max_parts_in_total": "google.protobuf.Int64Value",
"max_replicated_merges_in_queue": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_lower_max_size_of_merge": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_execute_mutation": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_min_space_in_pool": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_max_space_in_pool": "google.protobuf.Int64Value",
"min_bytes_for_wide_part": "google.protobuf.Int64Value",
"min_rows_for_wide_part": "google.protobuf.Int64Value",
"cleanup_delay_period": "google.protobuf.Int64Value",
"max_cleanup_delay_period": "google.protobuf.Int64Value",
"merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"max_merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"min_age_to_force_merge_seconds": "google.protobuf.Int64Value",
"min_age_to_force_merge_on_partition_only": "google.protobuf.BoolValue",
"merge_max_block_size": "google.protobuf.Int64Value",
"deduplicate_merge_projection_mode": "DeduplicateMergeProjectionMode",
"lightweight_mutation_projection_mode": "LightweightMutationProjectionMode",
"replicated_deduplication_window": "google.protobuf.Int64Value",
"replicated_deduplication_window_seconds": "google.protobuf.Int64Value",
"fsync_after_insert": "google.protobuf.BoolValue",
"fsync_part_directory": "google.protobuf.BoolValue",
"min_compressed_bytes_to_fsync_after_fetch": "google.protobuf.Int64Value",
"min_compressed_bytes_to_fsync_after_merge": "google.protobuf.Int64Value",
"min_rows_to_fsync_after_merge": "google.protobuf.Int64Value",
"ttl_only_drop_parts": "google.protobuf.BoolValue",
"merge_with_ttl_timeout": "google.protobuf.Int64Value",
"merge_with_recompression_ttl_timeout": "google.protobuf.Int64Value",
"max_number_of_merges_with_ttl_in_pool": "google.protobuf.Int64Value",
"materialize_ttl_recalculate_only": "google.protobuf.BoolValue",
"check_sample_column_is_correct": "google.protobuf.BoolValue",
"allow_remote_fs_zero_copy_replication": "google.protobuf.BoolValue"
},
"compression": [
{
"method": "Method",
"min_part_size": "int64",
"min_part_size_ratio": "double",
"level": "google.protobuf.Int64Value"
}
],
"dictionaries": [
{
"name": "string",
"structure": {
"id": {
"name": "string"
},
"key": {
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"range_min": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"range_max": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"layout": {
"type": "Type",
"size_in_cells": "int64",
"allow_read_expired_keys": "google.protobuf.BoolValue",
"max_update_queue_size": "int64",
"update_queue_push_timeout_milliseconds": "int64",
"query_wait_timeout_milliseconds": "int64",
"max_threads_for_updates": "int64",
"initial_array_size": "int64",
"max_array_size": "int64",
"access_to_key_from_attributes": "google.protobuf.BoolValue"
},
// Includes only one of the fields `fixed_lifetime`, `lifetime_range`
"fixed_lifetime": "int64",
"lifetime_range": {
"min": "int64",
"max": "int64"
},
// end of the list of possible fields
// Includes only one of the fields `http_source`, `mysql_source`, `clickhouse_source`, `mongodb_source`, `postgresql_source`
"http_source": {
"url": "string",
"format": "string",
"headers": [
{
"name": "string",
"value": "string"
}
]
},
"mysql_source": {
"db": "string",
"table": "string",
"port": "int64",
"user": "string",
"password": "string",
"replicas": [
{
"host": "string",
"priority": "int64",
"port": "int64",
"user": "string",
"password": "string"
}
],
"where": "string",
"invalidate_query": "string",
"close_connection": "google.protobuf.BoolValue",
"share_connection": "google.protobuf.BoolValue"
},
"clickhouse_source": {
"db": "string",
"table": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"where": "string",
"secure": "google.protobuf.BoolValue"
},
"mongodb_source": {
"db": "string",
"collection": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"options": "string"
},
"postgresql_source": {
"db": "string",
"table": "string",
"hosts": [
"string"
],
"port": "int64",
"user": "string",
"password": "string",
"invalidate_query": "string",
"ssl_mode": "SslMode"
}
// end of the list of possible fields
}
],
"graphite_rollup": [
{
"name": "string",
"patterns": [
{
"regexp": "string",
"function": "string",
"retention": [
{
"age": "int64",
"precision": "int64"
}
]
}
],
"path_column_name": "string",
"time_column_name": "string",
"value_column_name": "string",
"version_column_name": "string"
}
],
"kafka": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
},
"kafka_topics": [
{
"name": "string",
"settings": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
}
}
],
"rabbitmq": {
"username": "string",
"password": "string",
"vhost": "string"
},
"query_masking_rules": [
{
"name": "string",
"regexp": "string",
"replace": "string"
}
],
"query_cache": {
"max_size_in_bytes": "google.protobuf.Int64Value",
"max_entries": "google.protobuf.Int64Value",
"max_entry_size_in_bytes": "google.protobuf.Int64Value",
"max_entry_size_in_rows": "google.protobuf.Int64Value"
},
"jdbc_bridge": {
"host": "string",
"port": "google.protobuf.Int64Value"
},
"mysql_protocol": "google.protobuf.BoolValue",
"custom_macros": [
{
"name": "string",
"value": "string"
}
],
"builtin_dictionaries_reload_interval": "google.protobuf.Int64Value"
},
"resources": {
"resource_preset_id": "string",
"disk_size": "int64",
"disk_type_id": "string"
},
"disk_size_autoscaling": {
"planned_usage_threshold": "google.protobuf.Int64Value",
"emergency_usage_threshold": "google.protobuf.Int64Value",
"disk_size_limit": "google.protobuf.Int64Value"
}
},
"zookeeper": {
"resources": {
"resource_preset_id": "string",
"disk_size": "int64",
"disk_type_id": "string"
},
"disk_size_autoscaling": {
"planned_usage_threshold": "google.protobuf.Int64Value",
"emergency_usage_threshold": "google.protobuf.Int64Value",
"disk_size_limit": "google.protobuf.Int64Value"
}
},
"backup_window_start": "google.type.TimeOfDay",
"access": {
"data_lens": "bool",
"web_sql": "bool",
"metrika": "bool",
"serverless": "bool",
"data_transfer": "bool",
"yandex_query": "bool"
},
"cloud_storage": {
"enabled": "bool",
"move_factor": "google.protobuf.DoubleValue",
"data_cache_enabled": "google.protobuf.BoolValue",
"data_cache_max_size": "google.protobuf.Int64Value",
"prefer_not_to_merge": "google.protobuf.BoolValue"
},
"sql_database_management": "google.protobuf.BoolValue",
"sql_user_management": "google.protobuf.BoolValue",
"admin_password": "string",
"embedded_keeper": "google.protobuf.BoolValue",
"backup_retain_period_days": "google.protobuf.Int64Value"
},
"database_specs": [
{
"name": "string",
"engine": "DatabaseEngine"
}
],
"user_specs": [
{
"name": "string",
"password": "string",
"generate_password": "google.protobuf.BoolValue",
"permissions": [
{
"database_name": "string"
}
],
"settings": {
"readonly": "google.protobuf.Int64Value",
"allow_ddl": "google.protobuf.BoolValue",
"allow_introspection_functions": "google.protobuf.BoolValue",
"connect_timeout": "google.protobuf.Int64Value",
"connect_timeout_with_failover": "google.protobuf.Int64Value",
"receive_timeout": "google.protobuf.Int64Value",
"send_timeout": "google.protobuf.Int64Value",
"idle_connection_timeout": "google.protobuf.Int64Value",
"timeout_before_checking_execution_speed": "google.protobuf.Int64Value",
"insert_quorum": "google.protobuf.Int64Value",
"insert_quorum_timeout": "google.protobuf.Int64Value",
"insert_quorum_parallel": "google.protobuf.BoolValue",
"select_sequential_consistency": "google.protobuf.BoolValue",
"replication_alter_partitions_sync": "google.protobuf.Int64Value",
"max_replica_delay_for_distributed_queries": "google.protobuf.Int64Value",
"fallback_to_stale_replicas_for_distributed_queries": "google.protobuf.BoolValue",
"distributed_product_mode": "DistributedProductMode",
"distributed_aggregation_memory_efficient": "google.protobuf.BoolValue",
"distributed_ddl_task_timeout": "google.protobuf.Int64Value",
"distributed_ddl_output_mode": "DistributedDdlOutputMode",
"skip_unavailable_shards": "google.protobuf.BoolValue",
"use_hedged_requests": "google.protobuf.BoolValue",
"hedged_connection_timeout_ms": "google.protobuf.Int64Value",
"load_balancing": "LoadBalancing",
"prefer_localhost_replica": "google.protobuf.BoolValue",
"compile_expressions": "google.protobuf.BoolValue",
"min_count_to_compile_expression": "google.protobuf.Int64Value",
"max_block_size": "google.protobuf.Int64Value",
"min_insert_block_size_rows": "google.protobuf.Int64Value",
"min_insert_block_size_bytes": "google.protobuf.Int64Value",
"max_insert_block_size": "google.protobuf.Int64Value",
"max_partitions_per_insert_block": "google.protobuf.Int64Value",
"min_bytes_to_use_direct_io": "google.protobuf.Int64Value",
"use_uncompressed_cache": "google.protobuf.BoolValue",
"merge_tree_max_rows_to_use_cache": "google.protobuf.Int64Value",
"merge_tree_max_bytes_to_use_cache": "google.protobuf.Int64Value",
"merge_tree_min_rows_for_concurrent_read": "google.protobuf.Int64Value",
"merge_tree_min_bytes_for_concurrent_read": "google.protobuf.Int64Value",
"max_bytes_before_external_group_by": "google.protobuf.Int64Value",
"max_bytes_before_external_sort": "google.protobuf.Int64Value",
"group_by_two_level_threshold": "google.protobuf.Int64Value",
"group_by_two_level_threshold_bytes": "google.protobuf.Int64Value",
"deduplicate_blocks_in_dependent_materialized_views": "google.protobuf.BoolValue",
"local_filesystem_read_method": "LocalFilesystemReadMethod",
"remote_filesystem_read_method": "RemoteFilesystemReadMethod",
"priority": "google.protobuf.Int64Value",
"max_threads": "google.protobuf.Int64Value",
"max_insert_threads": "google.protobuf.Int64Value",
"max_memory_usage": "google.protobuf.Int64Value",
"max_memory_usage_for_user": "google.protobuf.Int64Value",
"memory_overcommit_ratio_denominator": "google.protobuf.Int64Value",
"memory_overcommit_ratio_denominator_for_user": "google.protobuf.Int64Value",
"memory_usage_overcommit_max_wait_microseconds": "google.protobuf.Int64Value",
"max_network_bandwidth": "google.protobuf.Int64Value",
"max_network_bandwidth_for_user": "google.protobuf.Int64Value",
"max_temporary_data_on_disk_size_for_query": "google.protobuf.Int64Value",
"max_temporary_data_on_disk_size_for_user": "google.protobuf.Int64Value",
"max_concurrent_queries_for_user": "google.protobuf.Int64Value",
"force_index_by_date": "google.protobuf.BoolValue",
"force_primary_key": "google.protobuf.BoolValue",
"max_rows_to_read": "google.protobuf.Int64Value",
"max_bytes_to_read": "google.protobuf.Int64Value",
"read_overflow_mode": "OverflowMode",
"max_rows_to_group_by": "google.protobuf.Int64Value",
"group_by_overflow_mode": "GroupByOverflowMode",
"max_rows_to_sort": "google.protobuf.Int64Value",
"max_bytes_to_sort": "google.protobuf.Int64Value",
"sort_overflow_mode": "OverflowMode",
"max_result_rows": "google.protobuf.Int64Value",
"max_result_bytes": "google.protobuf.Int64Value",
"result_overflow_mode": "OverflowMode",
"max_rows_in_distinct": "google.protobuf.Int64Value",
"max_bytes_in_distinct": "google.protobuf.Int64Value",
"distinct_overflow_mode": "OverflowMode",
"max_rows_to_transfer": "google.protobuf.Int64Value",
"max_bytes_to_transfer": "google.protobuf.Int64Value",
"transfer_overflow_mode": "OverflowMode",
"max_execution_time": "google.protobuf.Int64Value",
"timeout_overflow_mode": "OverflowMode",
"max_rows_in_set": "google.protobuf.Int64Value",
"max_bytes_in_set": "google.protobuf.Int64Value",
"set_overflow_mode": "OverflowMode",
"max_rows_in_join": "google.protobuf.Int64Value",
"max_bytes_in_join": "google.protobuf.Int64Value",
"join_overflow_mode": "OverflowMode",
"max_columns_to_read": "google.protobuf.Int64Value",
"max_temporary_columns": "google.protobuf.Int64Value",
"max_temporary_non_const_columns": "google.protobuf.Int64Value",
"max_query_size": "google.protobuf.Int64Value",
"max_ast_depth": "google.protobuf.Int64Value",
"max_ast_elements": "google.protobuf.Int64Value",
"max_expanded_ast_elements": "google.protobuf.Int64Value",
"max_parser_depth": "google.protobuf.Int64Value",
"min_execution_speed": "google.protobuf.Int64Value",
"min_execution_speed_bytes": "google.protobuf.Int64Value",
"input_format_values_interpret_expressions": "google.protobuf.BoolValue",
"input_format_defaults_for_omitted_fields": "google.protobuf.BoolValue",
"input_format_null_as_default": "google.protobuf.BoolValue",
"input_format_with_names_use_header": "google.protobuf.BoolValue",
"output_format_json_quote_64bit_integers": "google.protobuf.BoolValue",
"output_format_json_quote_denormals": "google.protobuf.BoolValue",
"date_time_input_format": "DateTimeInputFormat",
"date_time_output_format": "DateTimeOutputFormat",
"low_cardinality_allow_in_native_format": "google.protobuf.BoolValue",
"empty_result_for_aggregation_by_empty_set": "google.protobuf.BoolValue",
"format_regexp": "string",
"format_regexp_escaping_rule": "FormatRegexpEscapingRule",
"format_regexp_skip_unmatched": "google.protobuf.BoolValue",
"input_format_parallel_parsing": "google.protobuf.BoolValue",
"input_format_import_nested_json": "google.protobuf.BoolValue",
"format_avro_schema_registry_url": "string",
"data_type_default_nullable": "google.protobuf.BoolValue",
"http_connection_timeout": "google.protobuf.Int64Value",
"http_receive_timeout": "google.protobuf.Int64Value",
"http_send_timeout": "google.protobuf.Int64Value",
"enable_http_compression": "google.protobuf.BoolValue",
"send_progress_in_http_headers": "google.protobuf.BoolValue",
"http_headers_progress_interval": "google.protobuf.Int64Value",
"add_http_cors_header": "google.protobuf.BoolValue",
"cancel_http_readonly_queries_on_client_close": "google.protobuf.BoolValue",
"max_http_get_redirects": "google.protobuf.Int64Value",
"http_max_field_name_size": "google.protobuf.Int64Value",
"http_max_field_value_size": "google.protobuf.Int64Value",
"quota_mode": "QuotaMode",
"async_insert": "google.protobuf.BoolValue",
"wait_for_async_insert": "google.protobuf.BoolValue",
"wait_for_async_insert_timeout": "google.protobuf.Int64Value",
"async_insert_max_data_size": "google.protobuf.Int64Value",
"async_insert_busy_timeout": "google.protobuf.Int64Value",
"async_insert_use_adaptive_busy_timeout": "google.protobuf.BoolValue",
"log_query_threads": "google.protobuf.BoolValue",
"log_query_views": "google.protobuf.BoolValue",
"log_queries_probability": "google.protobuf.DoubleValue",
"log_processors_profiles": "google.protobuf.BoolValue",
"use_query_cache": "google.protobuf.BoolValue",
"enable_reads_from_query_cache": "google.protobuf.BoolValue",
"enable_writes_to_query_cache": "google.protobuf.BoolValue",
"query_cache_min_query_runs": "google.protobuf.Int64Value",
"query_cache_min_query_duration": "google.protobuf.Int64Value",
"query_cache_ttl": "google.protobuf.Int64Value",
"query_cache_max_entries": "google.protobuf.Int64Value",
"query_cache_max_size_in_bytes": "google.protobuf.Int64Value",
"query_cache_tag": "string",
"query_cache_share_between_users": "google.protobuf.BoolValue",
"query_cache_nondeterministic_function_handling": "QueryCacheNondeterministicFunctionHandling",
"query_cache_system_table_handling": "QueryCacheSystemTableHandling",
"count_distinct_implementation": "CountDistinctImplementation",
"joined_subquery_requires_alias": "google.protobuf.BoolValue",
"join_use_nulls": "google.protobuf.BoolValue",
"transform_null_in": "google.protobuf.BoolValue",
"insert_null_as_default": "google.protobuf.BoolValue",
"join_algorithm": [
"JoinAlgorithm"
],
"any_join_distinct_right_table_keys": "google.protobuf.BoolValue",
"allow_suspicious_low_cardinality_types": "google.protobuf.BoolValue",
"flatten_nested": "google.protobuf.BoolValue",
"memory_profiler_step": "google.protobuf.Int64Value",
"memory_profiler_sample_probability": "google.protobuf.DoubleValue",
"max_final_threads": "google.protobuf.Int64Value",
"max_read_buffer_size": "google.protobuf.Int64Value",
"insert_keeper_max_retries": "google.protobuf.Int64Value",
"do_not_merge_across_partitions_select_final": "google.protobuf.BoolValue",
"ignore_materialized_views_with_dropped_target_table": "google.protobuf.BoolValue",
"enable_analyzer": "google.protobuf.BoolValue",
"s3_use_adaptive_timeouts": "google.protobuf.BoolValue",
"final": "google.protobuf.BoolValue",
"compile": "google.protobuf.BoolValue",
"min_count_to_compile": "google.protobuf.Int64Value",
"async_insert_threads": "google.protobuf.Int64Value",
"async_insert_stale_timeout": "google.protobuf.Int64Value"
},
"quotas": [
{
"interval_duration": "google.protobuf.Int64Value",
"queries": "google.protobuf.Int64Value",
"errors": "google.protobuf.Int64Value",
"result_rows": "google.protobuf.Int64Value",
"read_rows": "google.protobuf.Int64Value",
"execution_time": "google.protobuf.Int64Value"
}
]
}
],
"host_specs": [
{
"zone_id": "string",
"type": "Type",
"subnet_id": "string",
"assign_public_ip": "bool",
"shard_name": "string"
}
],
"network_id": "string",
"shard_name": "string",
"service_account_id": "string",
"security_group_ids": [
"string"
],
"deletion_protection": "bool",
"maintenance_window": {
// Includes only one of the fields `anytime`, `weekly_maintenance_window`
"anytime": "AnytimeMaintenanceWindow",
"weekly_maintenance_window": {
"day": "WeekDay",
"hour": "int64"
}
// end of the list of possible fields
},
"shard_specs": [
{
"name": "string",
"config_spec": {
"clickhouse": {
"config": {
"background_pool_size": "google.protobuf.Int64Value",
"background_merges_mutations_concurrency_ratio": "google.protobuf.Int64Value",
"background_schedule_pool_size": "google.protobuf.Int64Value",
"background_fetches_pool_size": "google.protobuf.Int64Value",
"background_move_pool_size": "google.protobuf.Int64Value",
"background_distributed_schedule_pool_size": "google.protobuf.Int64Value",
"background_buffer_flush_schedule_pool_size": "google.protobuf.Int64Value",
"background_message_broker_schedule_pool_size": "google.protobuf.Int64Value",
"background_common_pool_size": "google.protobuf.Int64Value",
"dictionaries_lazy_load": "google.protobuf.BoolValue",
"log_level": "LogLevel",
"query_log_retention_size": "google.protobuf.Int64Value",
"query_log_retention_time": "google.protobuf.Int64Value",
"query_thread_log_enabled": "google.protobuf.BoolValue",
"query_thread_log_retention_size": "google.protobuf.Int64Value",
"query_thread_log_retention_time": "google.protobuf.Int64Value",
"part_log_retention_size": "google.protobuf.Int64Value",
"part_log_retention_time": "google.protobuf.Int64Value",
"metric_log_enabled": "google.protobuf.BoolValue",
"metric_log_retention_size": "google.protobuf.Int64Value",
"metric_log_retention_time": "google.protobuf.Int64Value",
"trace_log_enabled": "google.protobuf.BoolValue",
"trace_log_retention_size": "google.protobuf.Int64Value",
"trace_log_retention_time": "google.protobuf.Int64Value",
"text_log_enabled": "google.protobuf.BoolValue",
"text_log_retention_size": "google.protobuf.Int64Value",
"text_log_retention_time": "google.protobuf.Int64Value",
"text_log_level": "LogLevel",
"opentelemetry_span_log_enabled": "google.protobuf.BoolValue",
"opentelemetry_span_log_retention_size": "google.protobuf.Int64Value",
"opentelemetry_span_log_retention_time": "google.protobuf.Int64Value",
"query_views_log_enabled": "google.protobuf.BoolValue",
"query_views_log_retention_size": "google.protobuf.Int64Value",
"query_views_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_metric_log_enabled": "google.protobuf.BoolValue",
"asynchronous_metric_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_metric_log_retention_time": "google.protobuf.Int64Value",
"session_log_enabled": "google.protobuf.BoolValue",
"session_log_retention_size": "google.protobuf.Int64Value",
"session_log_retention_time": "google.protobuf.Int64Value",
"zookeeper_log_enabled": "google.protobuf.BoolValue",
"zookeeper_log_retention_size": "google.protobuf.Int64Value",
"zookeeper_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_insert_log_enabled": "google.protobuf.BoolValue",
"asynchronous_insert_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_insert_log_retention_time": "google.protobuf.Int64Value",
"processors_profile_log_enabled": "google.protobuf.BoolValue",
"processors_profile_log_retention_size": "google.protobuf.Int64Value",
"processors_profile_log_retention_time": "google.protobuf.Int64Value",
"error_log_enabled": "google.protobuf.BoolValue",
"error_log_retention_size": "google.protobuf.Int64Value",
"error_log_retention_time": "google.protobuf.Int64Value",
"access_control_improvements": {
"select_from_system_db_requires_grant": "google.protobuf.BoolValue",
"select_from_information_schema_requires_grant": "google.protobuf.BoolValue"
},
"max_connections": "google.protobuf.Int64Value",
"max_concurrent_queries": "google.protobuf.Int64Value",
"max_table_size_to_drop": "google.protobuf.Int64Value",
"max_partition_size_to_drop": "google.protobuf.Int64Value",
"keep_alive_timeout": "google.protobuf.Int64Value",
"uncompressed_cache_size": "google.protobuf.Int64Value",
"mark_cache_size": "google.protobuf.Int64Value",
"timezone": "string",
"geobase_enabled": "google.protobuf.BoolValue",
"geobase_uri": "string",
"default_database": "google.protobuf.StringValue",
"total_memory_profiler_step": "google.protobuf.Int64Value",
"total_memory_tracker_sample_probability": "google.protobuf.DoubleValue",
"async_insert_threads": "google.protobuf.Int64Value",
"backup_threads": "google.protobuf.Int64Value",
"restore_threads": "google.protobuf.Int64Value",
"merge_tree": {
"parts_to_delay_insert": "google.protobuf.Int64Value",
"parts_to_throw_insert": "google.protobuf.Int64Value",
"inactive_parts_to_delay_insert": "google.protobuf.Int64Value",
"inactive_parts_to_throw_insert": "google.protobuf.Int64Value",
"max_avg_part_size_for_too_many_parts": "google.protobuf.Int64Value",
"max_parts_in_total": "google.protobuf.Int64Value",
"max_replicated_merges_in_queue": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_lower_max_size_of_merge": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_execute_mutation": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_min_space_in_pool": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_max_space_in_pool": "google.protobuf.Int64Value",
"min_bytes_for_wide_part": "google.protobuf.Int64Value",
"min_rows_for_wide_part": "google.protobuf.Int64Value",
"cleanup_delay_period": "google.protobuf.Int64Value",
"max_cleanup_delay_period": "google.protobuf.Int64Value",
"merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"max_merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"min_age_to_force_merge_seconds": "google.protobuf.Int64Value",
"min_age_to_force_merge_on_partition_only": "google.protobuf.BoolValue",
"merge_max_block_size": "google.protobuf.Int64Value",
"deduplicate_merge_projection_mode": "DeduplicateMergeProjectionMode",
"lightweight_mutation_projection_mode": "LightweightMutationProjectionMode",
"replicated_deduplication_window": "google.protobuf.Int64Value",
"replicated_deduplication_window_seconds": "google.protobuf.Int64Value",
"fsync_after_insert": "google.protobuf.BoolValue",
"fsync_part_directory": "google.protobuf.BoolValue",
"min_compressed_bytes_to_fsync_after_fetch": "google.protobuf.Int64Value",
"min_compressed_bytes_to_fsync_after_merge": "google.protobuf.Int64Value",
"min_rows_to_fsync_after_merge": "google.protobuf.Int64Value",
"ttl_only_drop_parts": "google.protobuf.BoolValue",
"merge_with_ttl_timeout": "google.protobuf.Int64Value",
"merge_with_recompression_ttl_timeout": "google.protobuf.Int64Value",
"max_number_of_merges_with_ttl_in_pool": "google.protobuf.Int64Value",
"materialize_ttl_recalculate_only": "google.protobuf.BoolValue",
"check_sample_column_is_correct": "google.protobuf.BoolValue",
"allow_remote_fs_zero_copy_replication": "google.protobuf.BoolValue"
},
"compression": [
{
"method": "Method",
"min_part_size": "int64",
"min_part_size_ratio": "double",
"level": "google.protobuf.Int64Value"
}
],
"dictionaries": [
{
"name": "string",
"structure": {
"id": {
"name": "string"
},
"key": {
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"range_min": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"range_max": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"layout": {
"type": "Type",
"size_in_cells": "int64",
"allow_read_expired_keys": "google.protobuf.BoolValue",
"max_update_queue_size": "int64",
"update_queue_push_timeout_milliseconds": "int64",
"query_wait_timeout_milliseconds": "int64",
"max_threads_for_updates": "int64",
"initial_array_size": "int64",
"max_array_size": "int64",
"access_to_key_from_attributes": "google.protobuf.BoolValue"
},
// Includes only one of the fields `fixed_lifetime`, `lifetime_range`
"fixed_lifetime": "int64",
"lifetime_range": {
"min": "int64",
"max": "int64"
},
// end of the list of possible fields
// Includes only one of the fields `http_source`, `mysql_source`, `clickhouse_source`, `mongodb_source`, `postgresql_source`
"http_source": {
"url": "string",
"format": "string",
"headers": [
{
"name": "string",
"value": "string"
}
]
},
"mysql_source": {
"db": "string",
"table": "string",
"port": "int64",
"user": "string",
"password": "string",
"replicas": [
{
"host": "string",
"priority": "int64",
"port": "int64",
"user": "string",
"password": "string"
}
],
"where": "string",
"invalidate_query": "string",
"close_connection": "google.protobuf.BoolValue",
"share_connection": "google.protobuf.BoolValue"
},
"clickhouse_source": {
"db": "string",
"table": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"where": "string",
"secure": "google.protobuf.BoolValue"
},
"mongodb_source": {
"db": "string",
"collection": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"options": "string"
},
"postgresql_source": {
"db": "string",
"table": "string",
"hosts": [
"string"
],
"port": "int64",
"user": "string",
"password": "string",
"invalidate_query": "string",
"ssl_mode": "SslMode"
}
// end of the list of possible fields
}
],
"graphite_rollup": [
{
"name": "string",
"patterns": [
{
"regexp": "string",
"function": "string",
"retention": [
{
"age": "int64",
"precision": "int64"
}
]
}
],
"path_column_name": "string",
"time_column_name": "string",
"value_column_name": "string",
"version_column_name": "string"
}
],
"kafka": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
},
"kafka_topics": [
{
"name": "string",
"settings": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
}
}
],
"rabbitmq": {
"username": "string",
"password": "string",
"vhost": "string"
},
"query_masking_rules": [
{
"name": "string",
"regexp": "string",
"replace": "string"
}
],
"query_cache": {
"max_size_in_bytes": "google.protobuf.Int64Value",
"max_entries": "google.protobuf.Int64Value",
"max_entry_size_in_bytes": "google.protobuf.Int64Value",
"max_entry_size_in_rows": "google.protobuf.Int64Value"
},
"jdbc_bridge": {
"host": "string",
"port": "google.protobuf.Int64Value"
},
"mysql_protocol": "google.protobuf.BoolValue",
"custom_macros": [
{
"name": "string",
"value": "string"
}
],
"builtin_dictionaries_reload_interval": "google.protobuf.Int64Value"
},
"resources": {
"resource_preset_id": "string",
"disk_size": "int64",
"disk_type_id": "string"
},
"weight": "google.protobuf.Int64Value",
"disk_size_autoscaling": {
"planned_usage_threshold": "google.protobuf.Int64Value",
"emergency_usage_threshold": "google.protobuf.Int64Value",
"disk_size_limit": "google.protobuf.Int64Value"
}
}
},
"shard_group_names": [
"string"
]
}
],
"disk_encryption_key_id": "google.protobuf.StringValue"
}
| Field | Description |
|---|---|
| folder_id | string. Required field. ID of the folder to create the ClickHouse cluster in. |
| name | string. Required field. Name of the ClickHouse cluster. The name must be unique within the folder. |
| description | string. Description of the ClickHouse cluster. |
| labels | object (map<string, string>). Custom labels for the ClickHouse cluster as key:value pairs. |
| environment | enum Environment. Required field. Deployment environment of the ClickHouse cluster. |
| config_spec | Required field. Configuration and resources for hosts that should be created for the ClickHouse cluster. |
| database_specs[] | Descriptions of databases to be created in the ClickHouse cluster. |
| user_specs[] | Descriptions of database users to be created in the ClickHouse cluster. |
| host_specs[] | Individual configurations for hosts that should be created for the ClickHouse cluster. |
| network_id | string. Required field. ID of the network to create the cluster in. |
| shard_name | string. Name of the first shard in the cluster. If not set, defaults to the value 'shard1'. |
| service_account_id | string. ID of the service account used for access to Object Storage. |
| security_group_ids[] | string. User security groups. |
| deletion_protection | bool. Deletion protection inhibits deletion of the cluster. |
| maintenance_window | Window of maintenance operations. |
| shard_specs[] | Configurations of the shards to be created. |
| disk_encryption_key_id | ID of the key to encrypt cluster disks. |
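To make the required fields above concrete, here is a hedged sketch of a fuller CreateClusterRequest. Nested messages are passed as plain dicts, which the protobuf Python runtime converts through the nested constructors; every field name comes from the schema above, all IDs and the version are placeholders, and the string enum values (PRODUCTION, CLICKHOUSE) are assumptions to check against the generated enums.

```python
from yandex.cloud.mdb.clickhouse.v1 import cluster_service_pb2

# Sketch only: field names are taken from the CreateClusterRequest schema above;
# IDs, zone, and version are placeholders, enum values are passed by name.
request = cluster_service_pb2.CreateClusterRequest(
    folder_id="<folder ID>",
    name="my-clickhouse",
    description="test cluster",
    labels={"env": "dev"},
    environment="PRODUCTION",
    network_id="<network ID>",
    security_group_ids=["<security group ID>"],
    config_spec={
        "version": "<ClickHouse version>",
        "clickhouse": {
            "resources": {
                "resource_preset_id": "<preset ID>",
                "disk_size": 34359738368,        # bytes (32 GiB here)
                "disk_type_id": "<disk type ID>",
            },
        },
    },
    database_specs=[{"name": "db1"}],
    user_specs=[{"name": "user1", "password": "<password>"}],
    host_specs=[{"zone_id": "<zone ID>", "type": "CLICKHOUSE"}],
)
```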
ConfigSpec
| Field | Description |
|---|---|
| version | string. Version of the ClickHouse server software. |
| clickhouse | Configuration and resources for a ClickHouse server. |
| zookeeper | Configuration and resources for a ZooKeeper server. |
| backup_window_start | Time to start the daily backup, in the UTC timezone. |
| access | Access policy for external services. If you want a specific service to access the ClickHouse cluster, then set the necessary values in this policy. |
| cloud_storage | Cloud storage settings. |
| sql_database_management | Whether database management through SQL commands is enabled. |
| sql_user_management | Whether user management through SQL commands is enabled. |
| admin_password | string. Password for the user 'admin' that has SQL user management access. |
| embedded_keeper | Whether the cluster should use embedded Keeper instead of ZooKeeper. |
| backup_retain_period_days | Retention period of automatically created backups, in days. |
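Many ConfigSpec fields use protobuf wrapper types (google.protobuf.BoolValue, Int64Value) or google.type.TimeOfDay rather than plain scalars, so they have to be set as messages. A minimal sketch, assuming ConfigSpec is generated as a top-level message in cluster_service_pb2; the values are illustrative:

```python
from google.protobuf.wrappers_pb2 import BoolValue, Int64Value
from google.type.timeofday_pb2 import TimeOfDay
from yandex.cloud.mdb.clickhouse.v1 import cluster_service_pb2

# Sketch: wrapper-typed fields take messages, not bare scalars; the Access
# fields are plain bools and can be set directly.
config_spec = cluster_service_pb2.ConfigSpec(
    version="<ClickHouse version>",
    backup_window_start=TimeOfDay(hours=1, minutes=30),   # daily backup at 01:30 UTC
    access={"data_lens": True, "web_sql": True},          # plain bool fields of Access
    embedded_keeper=BoolValue(value=True),                # Keeper instead of ZooKeeper
    backup_retain_period_days=Int64Value(value=7),
)
```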
Clickhouse
| Field | Description |
|---|---|
| config | Configuration for a ClickHouse server. |
| resources | Resources allocated to ClickHouse hosts. |
| disk_size_autoscaling | Disk size autoscaling settings. |
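The Clickhouse message bundles the server config with host resources and optional disk size autoscaling. A short sketch, assuming the nested class is generated as ConfigSpec.Clickhouse; the numbers are placeholders, not recommendations (see the DiskSizeAutoscaling message for field semantics):

```python
from yandex.cloud.mdb.clickhouse.v1 import cluster_service_pb2

# Sketch: Resources fields are plain scalars, while DiskSizeAutoscaling fields
# are Int64Value wrappers, set here via nested {"value": ...} dicts.
clickhouse_spec = cluster_service_pb2.ConfigSpec.Clickhouse(
    resources={
        "resource_preset_id": "<preset ID>",
        "disk_size": 34359738368,                    # bytes
        "disk_type_id": "<disk type ID>",
    },
    disk_size_autoscaling={
        "planned_usage_threshold": {"value": 70},    # placeholder threshold
        "emergency_usage_threshold": {"value": 85},  # placeholder threshold
        "disk_size_limit": {"value": 68719476736},   # bytes
    },
)
```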
ClickhouseConfig
ClickHouse configuration settings. Supported settings are a subset of the settings described in the ClickHouse documentation.
| Field | Description |
|---|---|
| background_pool_size | Sets the number of threads performing background merges and mutations for MergeTree-engine tables. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_merges_mutations_concurrency_ratio | Sets a ratio between the number of threads and the number of background merges and mutations that can be executed concurrently. For example, if the ratio equals 2 and background_pool_size is set to 16, then ClickHouse can execute 32 background merges concurrently. Default value: 2. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_schedule_pool_size | The maximum number of threads that will be used for constantly executing some lightweight periodic operations. Default value: 512. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_fetches_pool_size | The maximum number of threads that will be used for fetching data parts from another replica for MergeTree-engine tables in the background. Default value: 32 for versions 25.1 and higher, 16 for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| background_move_pool_size | The maximum number of threads that will be used for moving data parts to another disk or volume for MergeTree-engine tables in the background. Default value: 8. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_distributed_schedule_pool_size | The maximum number of threads that will be used for executing distributed sends. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_buffer_flush_schedule_pool_size | The maximum number of threads that will be used for performing flush operations for Buffer-engine tables in the background. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_message_broker_schedule_pool_size | The maximum number of threads that will be used for executing background operations for message streaming. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| background_common_pool_size | The maximum number of threads that will be used for performing a variety of operations (mostly garbage collection) for MergeTree-engine tables in the background. Default value: 8. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation. |
| dictionaries_lazy_load | Lazy loading of dictionaries. If enabled, each dictionary is loaded on first use. Otherwise, the server loads all dictionaries at startup. Default value: true for versions 25.1 and higher, false for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| log_level | enum LogLevel. Logging level. |
| query_log_retention_size | The maximum size that query_log can grow to before old data will be removed. If set to 0, automatic removal of query_log data based on size is disabled. Default value: 1073741824 (1 GiB). |
| query_log_retention_time | The maximum time that query_log records will be retained before removal. If set to 0, automatic removal of query_log data based on time is disabled. Default value: 2592000000 (30 days). |
| query_thread_log_enabled | Enables or disables the query_thread_log system table. Default value: true. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| query_thread_log_retention_size | The maximum size that query_thread_log can grow to before old data will be removed. If set to 0, automatic removal of query_thread_log data based on size is disabled. Default value: 536870912 (512 MiB). |
| query_thread_log_retention_time | The maximum time that query_thread_log records will be retained before removal. If set to 0, automatic removal of query_thread_log data based on time is disabled. Default value: 2592000000 (30 days). |
| part_log_retention_size | The maximum size that part_log can grow to before old data will be removed. If set to 0, automatic removal of part_log data based on size is disabled. Default value: 536870912 (512 MiB). |
| part_log_retention_time | The maximum time that part_log records will be retained before removal. If set to 0, automatic removal of part_log data based on time is disabled. Default value: 2592000000 (30 days). |
| metric_log_enabled | Enables or disables the metric_log system table. Default value: false for versions 25.1 and higher, true for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| metric_log_retention_size | The maximum size that metric_log can grow to before old data will be removed. If set to 0, automatic removal of metric_log data based on size is disabled. Default value: 536870912 (512 MiB). |
| metric_log_retention_time | The maximum time that metric_log records will be retained before removal. If set to 0, automatic removal of metric_log data based on time is disabled. Default value: 2592000000 (30 days). |
| trace_log_enabled | Enables or disables the trace_log system table. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| trace_log_retention_size | The maximum size that trace_log can grow to before old data will be removed. If set to 0, automatic removal of trace_log data based on size is disabled. Default value: 536870912 (512 MiB). |
| trace_log_retention_time | The maximum time that trace_log records will be retained before removal. If set to 0, automatic removal of trace_log data based on time is disabled. Default value: 2592000000 (30 days). |
| text_log_enabled | Enables or disables the text_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| text_log_retention_size | The maximum size that text_log can grow to before old data will be removed. If set to 0, automatic removal of text_log data based on size is disabled. Default value: 536870912 (512 MiB). |
| text_log_retention_time | The maximum time that text_log records will be retained before removal. If set to 0, automatic removal of text_log data based on time is disabled. Default value: 2592000000 (30 days). |
| text_log_level | enum LogLevel. Logging level for the text_log system table. Default value: TRACE. Change of the setting is applied with restart. |
| opentelemetry_span_log_enabled | Enables or disables the opentelemetry_span_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| opentelemetry_span_log_retention_size | The maximum size that opentelemetry_span_log can grow to before old data will be removed. If set to 0, automatic removal of opentelemetry_span_log data based on size is disabled. Default value: 0. |
| opentelemetry_span_log_retention_time | The maximum time that opentelemetry_span_log records will be retained before removal. If set to 0, automatic removal of opentelemetry_span_log data based on time is disabled. Default value: 2592000000 (30 days). |
| query_views_log_enabled | Enables or disables the query_views_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| query_views_log_retention_size | The maximum size that query_views_log can grow to before old data will be removed. If set to 0, automatic removal of query_views_log data based on size is disabled. Default value: 0. |
| query_views_log_retention_time | The maximum time that query_views_log records will be retained before removal. If set to 0, automatic removal of query_views_log data based on time is disabled. Default value: 2592000000 (30 days). |
| asynchronous_metric_log_enabled | Enables or disables the asynchronous_metric_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| asynchronous_metric_log_retention_size | The maximum size that asynchronous_metric_log can grow to before old data will be removed. If set to 0, automatic removal of asynchronous_metric_log data based on size is disabled. Default value: 0. |
| asynchronous_metric_log_retention_time | The maximum time that asynchronous_metric_log records will be retained before removal. If set to 0, automatic removal of asynchronous_metric_log data based on time is disabled. Default value: 2592000000 (30 days). |
| session_log_enabled | Enables or disables the session_log system table. Default value: true for versions 25.3 and higher, false for versions 25.2 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| session_log_retention_size | The maximum size that session_log can grow to before old data will be removed. If set to 0, automatic removal of session_log data based on size is disabled. Default value: 536870912 (512 MiB) for versions 25.3 and higher, 0 for versions 25.2 and lower. |
| session_log_retention_time | The maximum time that session_log records will be retained before removal. If set to 0, automatic removal of session_log data based on time is disabled. Default value: 2592000000 (30 days). |
| zookeeper_log_enabled | Enables or disables the zookeeper_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| zookeeper_log_retention_size | The maximum size that zookeeper_log can grow to before old data will be removed. If set to 0, automatic removal of zookeeper_log data based on size is disabled. Default value: 0. |
| zookeeper_log_retention_time | The maximum time that zookeeper_log records will be retained before removal. If set to 0, automatic removal of zookeeper_log data based on time is disabled. Default value: 2592000000 (30 days). |
| asynchronous_insert_log_enabled | Enables or disables the asynchronous_insert_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| asynchronous_insert_log_retention_size | The maximum size that asynchronous_insert_log can grow to before old data will be removed. If set to 0, automatic removal of asynchronous_insert_log data based on size is disabled. Default value: 0. |
| asynchronous_insert_log_retention_time | The maximum time that asynchronous_insert_log records will be retained before removal. If set to 0, automatic removal of asynchronous_insert_log data based on time is disabled. Default value: 2592000000 (30 days). |
| processors_profile_log_enabled | Enables or disables the processors_profile_log system table. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| processors_profile_log_retention_size | The maximum size that processors_profile_log can grow to before old data will be removed. If set to 0, automatic removal of processors_profile_log data based on size is disabled. Default value: 0. |
| processors_profile_log_retention_time | The maximum time that processors_profile_log records will be retained before removal. If set to 0, automatic removal of processors_profile_log data based on time is disabled. Default value: 2592000000 (30 days). |
| error_log_enabled | Enables or disables the error_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| error_log_retention_size | The maximum size that error_log can grow to before old data will be removed. If set to 0, automatic removal of error_log data based on size is disabled. Default value: 0. |
| error_log_retention_time | The maximum time that error_log records will be retained before removal. If set to 0, automatic removal of error_log data based on time is disabled. Default value: 2592000000 (30 days). |
| access_control_improvements | Access control settings. |
| max_connections | Maximum number of inbound connections. Default value: 4096. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| max_concurrent_queries | Maximum number of concurrently executed queries. Default value: 500. For details, see ClickHouse documentation. |
| max_table_size_to_drop | Maximum size of a table that can be deleted using a DROP or TRUNCATE query. Default value: 50000000000 (48828125 KiB). For details, see ClickHouse documentation. |
| max_partition_size_to_drop | Maximum size of a partition that can be deleted using a DROP or TRUNCATE query. Default value: 50000000000 (48828125 KiB). For details, see ClickHouse documentation. |
| keep_alive_timeout | The number of seconds that ClickHouse waits for incoming requests over the HTTP protocol before closing the connection. Default value: 30. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| uncompressed_cache_size | Cache size (in bytes) for uncompressed data used by table engines from the MergeTree family. 0 means disabled. For details, see ClickHouse documentation. |
| mark_cache_size | Maximum size (in bytes) of the cache of "marks" used by MergeTree tables. For details, see ClickHouse documentation. |
| timezone | string. The server's time zone to be used in DateTime field conversions. Specified as an IANA identifier. Default value: Europe/Moscow. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| geobase_enabled | Enables or disables the geobase. Default value: false for versions 25.8 and higher, true for versions 25.7 and lower. Change of the setting is applied with restart. |
| geobase_uri | string. Address of the archive with the user geobase in Object Storage. Change of the setting is applied with restart. |
| default_database | The default database. Default value: default. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| total_memory_profiler_step | Whenever server memory usage becomes larger than every next step in number of bytes, the memory profiler collects the allocating stack trace. Default value: 0. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| total_memory_tracker_sample_probability | Allows collecting random allocations and deallocations and writing them to the system.trace_log system table. Default value: 0. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| async_insert_threads | Maximum number of threads to parse and insert data in the background. If set to 0, asynchronous mode is disabled. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| backup_threads | The maximum number of threads to execute BACKUP requests. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| restore_threads | The maximum number of threads to execute RESTORE requests. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation. |
| merge_tree | Settings for the MergeTree table engine family. Change of the settings of merge_tree is applied with restart. |
| compression[] | Data compression settings for MergeTree engine tables. Change of the settings of compression is applied with restart. For details, see ClickHouse documentation. |
| dictionaries[] | Configuration of external dictionaries. Change of the settings of dictionaries is applied with restart. For details, see ClickHouse documentation. |
| graphite_rollup[] | Rollup settings for GraphiteMergeTree engine tables. Change of the settings of graphite_rollup is applied with restart. For details, see ClickHouse documentation. |
| kafka | Kafka integration settings. Change of the settings of kafka is applied with restart. |
| kafka_topics[] | Per-topic Kafka integration settings. Change of the settings of kafka_topics is applied with restart. |
| rabbitmq | RabbitMQ integration settings. Change of the settings of rabbitmq is applied with restart. |
| query_masking_rules[] | Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs. Change of the settings of query_masking_rules is applied with restart. For details, see ClickHouse documentation. |
| query_cache | Query cache configuration. Change of the settings of query_cache is applied with restart. |
| jdbc_bridge | JDBC bridge configuration for queries to external databases. Change of the settings of jdbc_bridge is applied with restart. For details, see ClickHouse documentation. |
| mysql_protocol | Enables or disables the MySQL interface on the ClickHouse server. Default value: false. For details, see ClickHouse documentation. |
| custom_macros[] | Custom ClickHouse macros. |
| builtin_dictionaries_reload_interval | The interval in seconds before reloading built-in dictionaries. Default value: 3600. For details, see ClickHouse documentation. |
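Server-level settings from this table are nested under ConfigSpec.Clickhouse.config. A hedged sketch follows; the field names come from the table above, the wrapper-typed settings are expressed as {"value": ...} dicts, and the enum value names (INFORMATION for LogLevel) are assumptions to verify against the generated enums, as are the illustrative values.

```python
from yandex.cloud.mdb.clickhouse.v1 import cluster_service_pb2

# Sketch: a few ClickhouseConfig settings passed as a dict under
# ConfigSpec.Clickhouse.config. Wrapper types (Int64Value, BoolValue) are set
# via {"value": ...}; enum settings are passed by name.
clickhouse_spec = cluster_service_pb2.ConfigSpec.Clickhouse(
    config={
        "log_level": "INFORMATION",              # enum LogLevel, by name (assumed value name)
        "max_connections": {"value": 4096},      # google.protobuf.Int64Value
        "metric_log_enabled": {"value": False},  # google.protobuf.BoolValue
        "timezone": "UTC",                       # plain string
        "merge_tree": {                          # MergeTree sub-message (see its table below)
            "parts_to_delay_insert": {"value": 1000},
            "ttl_only_drop_parts": {"value": True},
        },
    },
)
```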
AccessControlImprovements
Access control settings.
For details, see ClickHouse documentation.

| Field | Description |
|---|---|
| select_from_system_db_requires_grant | Sets whether SELECT * FROM system.<table> requires any grants and can be executed by any user. Default value: false. |
| select_from_information_schema_requires_grant | Sets whether SELECT * FROM information_schema.<table> requires any grants and can be executed by any user. Default value: false. |
MergeTree
Settings for the MergeTree table engine family.
|
Field |
Description |
|
parts_to_delay_insert |
If the number of active parts in a single partition exceeds the parts_to_delay_insert value, an INSERT artificially slows down. Default value: 1000 for versions 25.1 and higher, 150 for versions 24.12 and lower. For details, see ClickHouse documentation |
|
parts_to_throw_insert |
If the number of active parts in a single partition exceeds the parts_to_throw_insert value, an INSERT Default value: 3000 for versions 25.1 and higher, 300 for versions 24.12 and lower. For details, see ClickHouse documentation |
|
inactive_parts_to_delay_insert |
If the number of inactive parts in a single partition in the table exceeds the inactive_parts_to_delay_insert value, Default value: 0. For details, see ClickHouse documentation |
|
inactive_parts_to_throw_insert |
If the number of inactive parts in a single partition more than the inactive_parts_to_throw_insert value, Default value: 0. For details, see ClickHouse documentation |
|
max_avg_part_size_for_too_many_parts |
The "Too many parts" check according to parts_to_delay_insert and parts_to_throw_insert will be active only if the average Default value: 1073741824 (1 GiB). For details, see ClickHouse documentation |
|
max_parts_in_total |
If the total number of active parts in all partitions of a table exceeds the max_parts_in_total value, Default value: 20000 for versions 25.2 and higher, 100000 for versions 25.1 and lower. For details, see ClickHouse documentation |
|
max_replicated_merges_in_queue |
How many tasks of merging and mutating parts are allowed simultaneously in ReplicatedMergeTree queue. Default value: 32 for versions 25.8 and higher, 16 for versions 25.7 and lower. For details, see ClickHouse documentation |
|
number_of_free_entries_in_pool_to_lower_max_size_of_merge |
When there is less than the specified number of free entries in pool (or replicated queue), start to lower maximum size of merge to process (or to put in queue). Default value: 8. For details, see ClickHouse documentation |
|
number_of_free_entries_in_pool_to_execute_mutation |
When there is less than specified number of free entries in pool, do not execute part mutations. Default value: 20. For details, see ClickHouse documentation |
|
max_bytes_to_merge_at_min_space_in_pool |
The maximum total part size (in bytes) to be merged into one part, with the minimum available resources in the background pool. Default value: 1048576 (1 MiB). For details, see ClickHouse documentation |
|
max_bytes_to_merge_at_max_space_in_pool |
The maximum total parts size (in bytes) to be merged into one part, if there are enough resources available. Default value: 161061273600 (150 GiB). For details, see ClickHouse documentation |
|
min_bytes_for_wide_part |
Minimum number of bytes in a data part that can be stored in Wide format. Default value: 10485760 (10 MiB). For details, see ClickHouse documentation |
|
min_rows_for_wide_part |
Minimum number of rows in a data part that can be stored in Wide format. Default value: 0. For details, see ClickHouse documentation |
|
cleanup_delay_period |
Minimum period to clean old queue logs, blocks hashes and parts. Default value: 30. For details, see ClickHouse documentation |
|
max_cleanup_delay_period |
Maximum period to clean old queue logs, blocks hashes and parts. Default value: 300 (5 minutes). For details, see ClickHouse documentation |
|
merge_selecting_sleep_ms |
Minimum time to wait before trying to select parts to merge again after no parts were selected. A lower setting value will trigger selecting tasks more frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters. Default value: 5000 (5 seconds). For details, see ClickHouse documentation |
|
max_merge_selecting_sleep_ms |
Maximum time to wait before trying to select parts to merge again after no parts were selected. A lower setting value will trigger selecting tasks more frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters. Default value: 60000 (1 minute). For details, see ClickHouse documentation |
|
min_age_to_force_merge_seconds |
Merge parts if every part in the range is older than the specified value. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_age_to_force_merge_on_partition_only |
Whether min_age_to_force_merge_seconds should be applied only on the entire partition and not on subset. Default value: false. For details, see ClickHouse documentation |
|
merge_max_block_size |
The number of rows that are read from the merged parts into memory. Default value: 8192. For details, see ClickHouse documentation |
|
deduplicate_merge_projection_mode |
enum DeduplicateMergeProjectionMode Determines the behavior of background merges for MergeTree tables with projections. Default value: DEDUPLICATE_MERGE_PROJECTION_MODE_THROW. For details, see ClickHouse documentation
|
|
lightweight_mutation_projection_mode |
enum LightweightMutationProjectionMode Determines the behavior of lightweight deletes for MergeTree tables with projections. Default value: LIGHTWEIGHT_MUTATION_PROJECTION_MODE_THROW. For details, see ClickHouse documentation
|
|
replicated_deduplication_window |
The number of most recently inserted blocks for which ClickHouse Keeper stores hash sums to check for duplicates. Default value: 10000 for versions 25.9 and higher, 1000 for versions from 23.11 to 25.8, 100 for versions 23.10 and lower. For details, see ClickHouse documentation |
|
replicated_deduplication_window_seconds |
The number of seconds after which the hash sums of the inserted blocks are removed from ClickHouse Keeper. Default value: 604800 (7 days). For details, see ClickHouse documentation |
|
fsync_after_insert |
Do fsync for every inserted part. Significantly decreases performance of inserts, not recommended to use with wide parts. Default value: false. For details, see ClickHouse documentation |
|
fsync_part_directory |
Do fsync for part directory after all part operations (writes, renames, etc.). Default value: false. For details, see ClickHouse documentation |
|
min_compressed_bytes_to_fsync_after_fetch |
Minimal number of compressed bytes to do fsync for part after fetch. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_compressed_bytes_to_fsync_after_merge |
Minimal number of compressed bytes to do fsync for part after merge. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_rows_to_fsync_after_merge |
Minimal number of rows to do fsync for part after merge. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
ttl_only_drop_parts |
Controls whether data parts are fully dropped in MergeTree tables when all rows in that part have expired according to their TTL settings.
Default value: false. For details, see ClickHouse documentation |
|
merge_with_ttl_timeout |
Minimum delay in seconds before repeating a merge with delete TTL. Default value: 14400 (4 hours). For details, see ClickHouse documentation |
|
merge_with_recompression_ttl_timeout |
Minimum delay in seconds before repeating a merge with recompression TTL. Default value: 14400 (4 hours). For details, see ClickHouse documentation |
|
max_number_of_merges_with_ttl_in_pool |
When there is more than specified number of merges with TTL entries in pool, do not assign new merge with TTL. Default value: 2. For details, see ClickHouse documentation |
|
materialize_ttl_recalculate_only |
Only recalculate ttl info when MATERIALIZE TTL. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. For details, see ClickHouse documentation |
|
check_sample_column_is_correct |
Enables the check at table creation, that the data type of a column for sampling or sampling expression is correct. Default value: true. For details, see ClickHouse documentation |
|
allow_remote_fs_zero_copy_replication |
Setting is automatically enabled if cloud storage is enabled, disabled otherwise. Default value: true. |
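As an illustration only, a merge_tree fragment of the ClickHouse config overriding a few of the settings above might look as follows (assuming the parent field of this message is named merge_tree; the values are placeholders, not recommendations):
{
  "merge_tree": {
    "parts_to_delay_insert": 300,
    "parts_to_throw_insert": 600,
    "replicated_deduplication_window": 1000,
    "ttl_only_drop_parts": true
  }
}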
Compression
Compression settings.
For details, see ClickHouse documentation
|
Field |
Description |
|
method |
enum Method Required field. Compression method to use for the specified combination of min_part_size and min_part_size_ratio.
|
|
min_part_size |
int64 The minimum size of a data part. |
|
min_part_size_ratio |
double The ratio of the data part size to the table size. |
|
level |
Compression level. |
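For illustration, a single compression entry might be filled in like this (assuming the parent list field is named compression; the method value is shown only as an example of a Method enum name, and the numbers are placeholders):
{
  "compression": [
    {
      "method": "ZSTD",
      "min_part_size": 10485760,
      "min_part_size_ratio": 0.01,
      "level": 3
    }
  ]
}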
ExternalDictionary
External dictionary configuration.
|
Field |
Description |
|
name |
string Required field. Name of the external dictionary. |
|
structure |
Required field. Structure of the external dictionary. |
|
layout |
Required field. Layout determining how to store the dictionary in memory. For details, see https://clickhouse.com/docs/sql-reference/dictionaries#ways-to-store-dictionaries-in-memory. |
|
fixed_lifetime |
int64 Fixed interval between dictionary updates. Includes only one of the fields fixed_lifetime, lifetime_range. |
|
lifetime_range |
Range of intervals between dictionary updates for ClickHouse to choose from. Includes only one of the fields fixed_lifetime, lifetime_range. |
|
http_source |
HTTP source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
mysql_source |
MySQL source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
clickhouse_source |
ClickHouse source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
mongodb_source |
MongoDB source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
postgresql_source |
PostgreSQL source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
Structure
Configuration of external dictionary structure.
|
Field |
Description |
|
id |
Single numeric key column for the dictionary. |
|
key |
Composite key for the dictionary, consisting of one or more key columns. For details, see ClickHouse documentation |
|
range_min |
Field holding the beginning of the range for dictionaries with RANGE_HASHED layout. For details, see ClickHouse documentation |
|
range_max |
Field holding the end of the range for dictionaries with RANGE_HASHED layout. For details, see ClickHouse documentation |
|
attributes[] |
Description of the fields available for database queries. For details, see ClickHouse documentation |
Id
Numeric key.
|
Field |
Description |
|
name |
string Required field. Name of the numeric key. |
Key
Complex key.
|
Field |
Description |
|
attributes[] |
Attributes of a complex key. |
Attribute
|
Field |
Description |
|
name |
string Required field. Name of the column. |
|
type |
string Required field. Type of the column. |
|
null_value |
string Default value for an element without data (for example, an empty string). |
|
expression |
string Expression, describing the attribute, if applicable. |
|
hierarchical |
bool Indication of hierarchy support. Default value: false. |
|
injective |
bool Indication of injective mapping "id -> attribute". Default value: false. |
Layout
|
Field |
Description |
|
type |
enum Type Required field. Layout type. For details, see ClickHouse documentation
|
|
size_in_cells |
int64 Number of cells in the cache. Rounded up to a power of two. Default value: 1000000000. For details, see ClickHouse documentation |
|
allow_read_expired_keys |
Allows to read expired keys. Default value: false. For details, see ClickHouse documentation |
|
max_update_queue_size |
int64 Max size of update queue. Default value: 100000. For details, see ClickHouse documentation |
|
update_queue_push_timeout_milliseconds |
int64 Max timeout in milliseconds for push update task into queue. Default value: 10. For details, see ClickHouse documentation |
|
query_wait_timeout_milliseconds |
int64 Max wait timeout in milliseconds for update task to complete. Default value: 60000 (1 minute). For details, see ClickHouse documentation |
|
max_threads_for_updates |
int64 Max threads for cache dictionary update. Default value: 4. For details, see ClickHouse documentation |
|
initial_array_size |
int64 Initial dictionary key size. Default value: 1024. For details, see ClickHouse documentation |
|
max_array_size |
int64 Maximum dictionary key size. Default value: 500000. For details, see ClickHouse documentation |
|
access_to_key_from_attributes |
Allows to retrieve key attribute using dictGetString function. For details, see ClickHouse documentation |
Range
|
Field |
Description |
|
min |
int64 Minimum dictionary lifetime. |
|
max |
int64 Maximum dictionary lifetime. |
HttpSource
|
Field |
Description |
|
url |
string Required field. URL of the source dictionary available over HTTP. |
|
format |
string Required field. The data format. Valid values are all formats supported by the ClickHouse SQL dialect. |
|
headers[] |
HTTP headers. |
Header
|
Field |
Description |
|
name |
string Required field. Header name. |
|
value |
string Required field. Header value. |
MysqlSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
port |
int64 Port to use when connecting to a replica of the dictionary source. |
|
user |
string Required field. Name of the user for replicas of the dictionary source. |
|
password |
string Password of the user for replicas of the dictionary source. |
|
replicas[] |
List of MySQL replicas of the database used as dictionary source. |
|
where |
string Selection criteria for the data in the specified MySQL table. |
|
invalidate_query |
string Query for checking the dictionary status, to pull only updated data. |
|
close_connection |
Should a connection be closed after each request. |
|
share_connection |
Should a connection be shared for some requests. |
Replica
|
Field |
Description |
|
host |
string Required field. MySQL host of the replica. |
|
priority |
int64 The priority of the replica that ClickHouse takes into account when connecting. |
|
port |
int64 Port to use when connecting to the replica. |
|
user |
string Name of the MySQL database user. |
|
password |
string Password of the MySQL database user. |
ClickhouseSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
host |
string ClickHouse host. |
|
port |
int64 Port to use when connecting to the host. |
|
user |
string Required field. Name of the ClickHouse database user. |
|
password |
string Password of the ClickHouse database user. |
|
where |
string Selection criteria for the data in the specified ClickHouse table. |
|
secure |
Determines whether to use TLS for connection. |
MongodbSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
collection |
string Required field. Collection name. |
|
host |
string Required field. MongoDB host. |
|
port |
int64 Port to use when connecting to the host. |
|
user |
string Required field. Name of the MongoDB database user. |
|
password |
string Password of the MongoDB database user. |
|
options |
string Dictionary source options. |
PostgresqlSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
hosts[] |
string PostgreSQL hosts. |
|
port |
int64 Port to use when connecting to the PostgreSQL hosts. |
|
user |
string Required field. Name of the PostgreSQL database user. |
|
password |
string Password of the PostgreSQL database user. |
|
invalidate_query |
string Query for checking the dictionary status, to pull only updated data. |
|
ssl_mode |
enum SslMode Mode of SSL TCP/IP connection to the PostgreSQL host.
|
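Putting the messages above together, a minimal external dictionary with a numeric id key, one attribute, a flat in-memory layout, a fixed update interval and an HTTP source could be sketched as follows (names, the URL, the header and the layout type value are illustrative placeholders):
{
  "name": "regions",
  "structure": {
    "id": { "name": "region_id" },
    "attributes": [
      { "name": "region_name", "type": "String", "null_value": "" }
    ]
  },
  "layout": { "type": "FLAT" },
  "fixed_lifetime": 300,
  "http_source": {
    "url": "https://example.com/regions.csv",
    "format": "CSV",
    "headers": [
      { "name": "Authorization", "value": "Bearer <token>" }
    ]
  }
}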
GraphiteRollup
Rollup settings for the GraphiteMergeTree table engine.
For details, see ClickHouse documentation
|
Field |
Description |
|
name |
string Required field. Name for the specified combination of settings for Graphite rollup. |
|
patterns[] |
Pattern to use for the rollup. |
|
path_column_name |
string The name of the column storing the metric name (Graphite sensor). Default value: Path. |
|
time_column_name |
string The name of the column storing the time of measuring the metric. Default value: Time. |
|
value_column_name |
string The name of the column storing the value of the metric at the time set in time_column_name. Default value: Value. |
|
version_column_name |
string The name of the column storing the version of the metric. Default value: Timestamp. |
Pattern
|
Field |
Description |
|
regexp |
string A pattern for the metric name (a regular expression or DSL). |
|
function |
string The name of the aggregating function to apply to data whose age falls within the range [age, age + precision]. |
|
retention[] |
Retention rules. |
Retention
|
Field |
Description |
|
age |
int64 The minimum age of the data in seconds. |
|
precision |
int64 Precision of determining the age of the data, in seconds. Should be a divisor for 86400 (seconds in a day). |
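As a sketch, one graphite_rollup entry combining a pattern with two retention rules might look like this (the name, regular expression and numbers are placeholders):
{
  "name": "default_rollup",
  "patterns": [
    {
      "regexp": "^servers\\.",
      "function": "avg",
      "retention": [
        { "age": 0, "precision": 60 },
        { "age": 2592000, "precision": 3600 }
      ]
    }
  ],
  "path_column_name": "Path",
  "time_column_name": "Time",
  "value_column_name": "Value",
  "version_column_name": "Timestamp"
}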
Kafka
Kafka configuration settings.
For details, see librdkafka documentation
|
Field |
Description |
|
security_protocol |
enum SecurityProtocol Protocol used to communicate with brokers. Default value: SECURITY_PROTOCOL_PLAINTEXT.
|
|
sasl_mechanism |
enum SaslMechanism SASL mechanism to use for authentication. Default value: SASL_MECHANISM_GSSAPI.
|
|
sasl_username |
string SASL username for use with the PLAIN and SASL-SCRAM mechanisms. |
|
sasl_password |
string SASL password for use with the PLAIN and SASL-SCRAM mechanisms. |
|
enable_ssl_certificate_verification |
Enable OpenSSL's builtin broker (server) certificate verification. Default value: true. |
|
max_poll_interval_ms |
Maximum allowed time between calls to consume messages for high-level consumers. Default value: 300000 (5 minutes). |
|
session_timeout_ms |
Client group session and failure detection timeout. The consumer sends periodic heartbeats (heartbeat.interval.ms) to indicate its liveness to the broker. Default value: 45000 (45 seconds). |
|
debug |
enum Debug Debug context to enable.
|
|
auto_offset_reset |
enum AutoOffsetReset Action to take when there is no initial offset in offset store or the desired offset is out of range. Default value: AUTO_OFFSET_RESET_LARGEST.
|
KafkaTopic
|
Field |
Description |
|
name |
string Required field. Kafka topic name. |
|
settings |
Required field. Kafka topic settings. |
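For illustration, cluster-wide kafka settings plus a per-topic override might be sketched as follows (credentials are placeholders; take the exact enum value names from the SecurityProtocol, SaslMechanism and AutoOffsetReset enums):
{
  "kafka": {
    "security_protocol": "SECURITY_PROTOCOL_SASL_SSL",
    "sasl_mechanism": "SASL_MECHANISM_SCRAM_SHA_512",
    "sasl_username": "consumer",
    "sasl_password": "<password>"
  },
  "kafka_topics": [
    {
      "name": "events",
      "settings": {
        "auto_offset_reset": "AUTO_OFFSET_RESET_SMALLEST"
      }
    }
  ]
}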
Rabbitmq
RabbitMQ integration settings.
For details, see ClickHouse documentation
|
Field |
Description |
|
username |
string RabbitMQ username. |
|
password |
string RabbitMQ password. |
|
vhost |
string RabbitMQ virtual host. |
QueryMaskingRule
|
Field |
Description |
|
name |
string Name for the rule. |
|
regexp |
string Required field. RE2 compatible regular expression. |
|
replace |
string Substitution string for sensitive data. Default value: six asterisks. |
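A sketch of a single masking rule that hides email-like substrings in server logs (the regular expression is illustrative):
{
  "query_masking_rules": [
    {
      "name": "hide_emails",
      "regexp": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+",
      "replace": "******"
    }
  ]
}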
QueryCache
Query cache configuration.
|
Field |
Description |
|
max_size_in_bytes |
The maximum cache size in bytes. Default value: 1073741824 (1 GiB). |
|
max_entries |
The maximum number of SELECT query results stored in the cache. Default value: 1024. |
|
max_entry_size_in_bytes |
The maximum size in bytes SELECT query results may have to be saved in the cache. Default value: 1048576 (1 MiB). |
|
max_entry_size_in_rows |
The maximum number of rows SELECT query results may have to be saved in the cache. Default value: 30000000. |
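For illustration, a query_cache block that halves the default cache size and caps the number of entries (values are placeholders, not recommendations):
{
  "query_cache": {
    "max_size_in_bytes": 536870912,
    "max_entries": 512,
    "max_entry_size_in_bytes": 1048576,
    "max_entry_size_in_rows": 10000000
  }
}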
JdbcBridge
JDBC bridge configuration for queries to external databases.
|
Field |
Description |
|
host |
string Host of jdbc bridge. |
|
port |
Port of jdbc bridge. Default value: 9019. |
Macro
ClickHouse macro.
|
Field |
Description |
|
name |
string Required field. Name of the macro. |
|
value |
string Required field. Value of the macro. |
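A sketch of the custom_macros list with two macros (names and values are placeholders):
{
  "custom_macros": [
    { "name": "layer", "value": "01" },
    { "name": "env", "value": "prod" }
  ]
}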
Resources
|
Field |
Description |
|
resource_preset_id |
string ID of the preset for computational resources available to a host (CPU, memory etc.). |
|
disk_size |
int64 Volume of the storage available to a host, in bytes. |
|
disk_type_id |
string Type of the storage environment for the host.
|
DiskSizeAutoscaling
|
Field |
Description |
|
planned_usage_threshold |
Percentage of used storage that triggers automatic disk scaling during the maintenance window. 0 means disabled. |
|
emergency_usage_threshold |
Percentage of used storage that triggers immediate automatic disk scaling. 0 means disabled. |
|
disk_size_limit |
Limit on how large the storage for database instances can automatically grow, in bytes. |
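As an illustration, resources together with disk size autoscaling thresholds might be sketched like this (the preset and disk type IDs are placeholders; sizes are in bytes):
{
  "resources": {
    "resource_preset_id": "<resource preset ID>",
    "disk_size": 34359738368,
    "disk_type_id": "<disk type ID>"
  },
  "disk_size_autoscaling": {
    "planned_usage_threshold": 70,
    "emergency_usage_threshold": 90,
    "disk_size_limit": 68719476736
  }
}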
Zookeeper
|
Field |
Description |
|
resources |
Resources allocated to ZooKeeper hosts. If not set, minimal available resources will be used. |
|
disk_size_autoscaling |
Disk size autoscaling settings. |
Access
|
Field |
Description |
|
data_lens |
bool Allow to export data from the cluster to DataLens. |
|
web_sql |
bool Allow SQL queries to the cluster databases from the management console. See SQL queries in the management console for more details. |
|
metrika |
bool Allow to import data from Yandex Metrica and AppMetrica to the cluster. See AppMetrica documentation |
|
serverless |
bool Allow access to cluster for Serverless. |
|
data_transfer |
bool Allow access for DataTransfer |
|
yandex_query |
bool Allow access for Query |
CloudStorage
|
Field |
Description |
|
enabled |
bool Whether to use Object Storage for storing ClickHouse data. |
|
move_factor |
|
|
data_cache_enabled |
|
|
data_cache_max_size |
|
|
prefer_not_to_merge |
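A sketch of cloud_storage settings; since descriptions for the last four fields are not given above, treat everything except enabled as illustrative placeholders:
{
  "cloud_storage": {
    "enabled": true,
    "move_factor": 0.1,
    "data_cache_enabled": true,
    "data_cache_max_size": 21474836480,
    "prefer_not_to_merge": false
  }
}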
DatabaseSpec
|
Field |
Description |
|
name |
string Required field. Name of the ClickHouse database. 1-63 characters long. |
|
engine |
enum DatabaseEngine Database engine. For details, see ClickHouse documentation
|
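For illustration, two database specs in the request (assuming the request field is database_specs; the engine value is shown only as an example of a DatabaseEngine enum name):
{
  "database_specs": [
    { "name": "db1" },
    { "name": "analytics", "engine": "DATABASE_ENGINE_REPLICATED" }
  ]
}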
UserSpec
|
Field |
Description |
|
name |
string Required field. User name. |
|
password |
string User password. |
|
generate_password |
Enable or disable password generation using Connection Manager. Default value: false. |
|
permissions[] |
Set of permissions to grant to the user. If not set, the user is granted permissions to access all databases. |
|
settings |
User settings |
|
quotas[] |
Quotas assigned to the user. |
Permission
|
Field |
Description |
|
database_name |
string Name of the database that the permission grants access to. |
UserSettings
ClickHouse user settings. Supported settings are a subset of settings described
in ClickHouse documentation
|
Field |
Description |
|
readonly |
Restricts permissions for non-DDL queries. To restrict permissions for DDL queries, use allow_ddl instead.
Default value: 0. For details, see ClickHouse documentation |
|
allow_ddl |
Allows or denies DDL queries (e.g., CREATE, ALTER, RENAME, etc). Default value: true. For details, see ClickHouse documentation |
|
allow_introspection_functions |
Enables or disables introspection functions for query profiling. Default value: false. For details, see ClickHouse documentation |
|
connect_timeout |
Connection timeout in milliseconds. Default value: 10000 (10 seconds). For details, see ClickHouse documentation |
|
connect_timeout_with_failover |
The timeout in milliseconds for connecting to a remote server for a Distributed table engine. Applies only if the cluster uses sharding and replication. If unsuccessful, several attempts are made to connect to various replicas. Default value: 1000 (1 second). For details, see ClickHouse documentation |
|
receive_timeout |
Receive timeout in milliseconds. Default value: 300000 (5 minutes). For details, see ClickHouse documentation |
|
send_timeout |
Send timeout in milliseconds. Default value: 300000 (5 minutes). For details, see ClickHouse documentation |
|
idle_connection_timeout |
Timeout to close idle TCP connections after specified time has elapsed, in milliseconds. Default value: 3600000 (1 hour). For details, see ClickHouse documentation |
|
timeout_before_checking_execution_speed |
Checks that the speed is not too low after the specified time has elapsed, in milliseconds. It is checked that the execution speed is not lower than min_execution_speed and min_execution_speed_bytes. Default value: 60000 (1 minute). For details, see ClickHouse documentation |
|
insert_quorum |
Enables or disables the quorum writes. If the value is less than 2, then quorum writes are disabled, otherwise they are enabled. When used, write quorum guarantees that ClickHouse has written data to the quorum of insert_quorum replicas with no errors. You can use the select_sequential_consistency setting to read the data written with write quorum. Default value: 0. For details, see ClickHouse documentation |
|
insert_quorum_timeout |
Quorum write timeout in milliseconds. If the write quorum is enabled in the cluster, this timeout expires and some data is not written to the insert_quorum replicas, then ClickHouse generates an exception and the client must repeat the query to write the same block of data. Default value: 600000 (10 minutes). For details, see ClickHouse documentation |
|
insert_quorum_parallel |
Enables or disables parallelism for quorum INSERT queries. Default value: true. For details, see ClickHouse documentation |
|
select_sequential_consistency |
Determines the behavior of SELECT queries from replicated tables. If enabled, ClickHouse will terminate a query with an error message in case the replica does not have a chunk written with the quorum and will not read the parts that have not yet been written with the quorum. Default value: true. For details, see ClickHouse documentation |
|
replication_alter_partitions_sync |
Wait mode for asynchronous actions in ALTER queries on replicated tables.
Default value: 1. For details, see ClickHouse documentation |
|
max_replica_delay_for_distributed_queries |
Max replica delay in milliseconds. If a replica lags more than the set value, this replica is not used and becomes a stale one. Default value: 300000 (5 minutes). For details, see ClickHouse documentation |
|
fallback_to_stale_replicas_for_distributed_queries |
Enables or disables query forcing to a stale replica in case the actual data is unavailable. Default value: true. For details, see ClickHouse documentation |
|
distributed_product_mode |
enum DistributedProductMode Determines the behavior of distributed subqueries. Default value: DISTRIBUTED_PRODUCT_MODE_DENY. For details, see ClickHouse documentation
|
|
distributed_aggregation_memory_efficient |
Enables or disables memory saving mode when doing distributed aggregation. When ClickHouse works with a distributed query, external aggregation is done on remote servers. Default value: true. For details, see ClickHouse documentation |
|
distributed_ddl_task_timeout |
Timeout for DDL queries, in milliseconds. Default value: 180000 (3 minutes). For details, see ClickHouse documentation |
|
distributed_ddl_output_mode |
enum DistributedDdlOutputMode Determines the format of distributed DDL query result. Default value: DISTRIBUTED_DDL_OUTPUT_MODE_THROW. For details, see ClickHouse documentation
|
|
skip_unavailable_shards |
Enables or disables silent skipping of unavailable shards. A shard is considered unavailable if all its replicas are also unavailable. Default value: false. For details, see ClickHouse documentation |
|
use_hedged_requests |
Enables or disables hedged requests logic for remote queries. It allows to establish many connections with different replicas for query. New connection is enabled in case existent connection(s) with replica(s) were not established within hedged_connection_timeout or no data was received within receive_data_timeout. Default value: true. For details, see ClickHouse documentation |
|
hedged_connection_timeout_ms |
Connection timeout for establishing connection with replica for Hedged requests. Default value: 50. For details, see ClickHouse documentation |
|
load_balancing |
enum LoadBalancing Algorithm of replicas selection that is used for distributed query processing. Default value: LOAD_BALANCING_RANDOM. For details, see ClickHouse documentation
|
|
prefer_localhost_replica |
Enable or disable preferable using the localhost replica when processing distributed queries. Default value: true. For details, see ClickHouse documentation |
|
compile_expressions |
Enable or disable expression compilation to native code. If you execute a lot of queries that contain identical expressions, then enable this setting. Use this setting in combination with min_count_to_compile_expression setting. Default value: true for versions 25.5 and higher, false for versions 25.4 and lower. For details, see ClickHouse documentation |
|
min_count_to_compile_expression |
How many identical expressions ClickHouse has to encounter before they are compiled. For the 0 value compilation is synchronous: a query waits for expression compilation process to complete prior to continuing execution. For all other values, compilation is asynchronous: the compilation process executes in a separate thread. Default value: 3. For details, see ClickHouse documentation |
|
max_block_size |
Sets the recommended maximum number of rows to include in a single block when loading data from tables. Blocks the size of max_block_size are not always loaded from the table: if ClickHouse determines that less data needs to be retrieved, a smaller block is processed. The block size should not be too small to avoid noticeable costs when processing each block. It should also not be too large, so that queries with a LIMIT clause that complete after the first block are processed quickly. Default value: 65409. For details, see ClickHouse documentation |
|
min_insert_block_size_rows |
Limits the minimum number of rows in a block to be inserted in a table by INSERT query. Blocks that are smaller than the specified value are squashed together into bigger blocks. Default value: 1048449. For details, see ClickHouse documentation |
|
min_insert_block_size_bytes |
Limits the minimum number of bytes in a block to be inserted in a table by INSERT query. Blocks that are smaller than the specified value are squashed together into bigger blocks. Default value: 268402944. For details, see ClickHouse documentation |
|
max_insert_block_size |
The size of blocks (in a count of rows) to form for insertion into a table. This setting only applies in cases when the server forms the blocks. For example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size. Default value: 1048449. For details, see ClickHouse documentation |
|
max_partitions_per_insert_block |
When inserting data, ClickHouse calculates the number of partitions in the inserted block. Default value: 100. For details, see ClickHouse documentation |
|
min_bytes_to_use_direct_io |
Limits the minimum number of bytes to enable unbuffered direct reads from disk (Direct I/O). If set to 0, Direct I/O is disabled. By default, ClickHouse does not read data directly from disk, but relies on the filesystem and its cache instead. Such reading strategy is effective when the data volume is small; for large volumes it can be more effective to read directly from disk, bypassing the filesystem cache. Default value: 0. For details, see ClickHouse documentation |
|
use_uncompressed_cache |
Determines whether to use the cache of uncompressed blocks, or not. Using this cache can significantly reduce latency and increase the throughput when a huge amount of small queries is to be processed. This setting has effect only for tables of the MergeTree family. Default value: false. For details, see ClickHouse documentation |
|
merge_tree_max_rows_to_use_cache |
Limits the maximum size in rows of the request that can use the cache of uncompressed data. The cache is not used for requests larger than the specified value. Use this setting in combination with use_uncompressed_cache setting. Default value: 1048576. For details, see ClickHouse documentation |
|
merge_tree_max_bytes_to_use_cache |
Limits the maximum size in bytes of the request that can use the cache of uncompressed data. The cache is not used for requests larger than the specified value. Use this setting in combination with use_uncompressed_cache setting. Default value: 2013265920 (1920 MiB). For details, see ClickHouse documentation |
|
merge_tree_min_rows_for_concurrent_read |
Limits the minimum number of rows to be read from a file to enable concurrent read. This setting has effect only for tables of the MergeTree family. Default value: 163840. For details, see ClickHouse documentation |
|
merge_tree_min_bytes_for_concurrent_read |
Limits the number of bytes to be read from a file to enable concurrent read. This setting has effect only for tables of the MergeTree family. Default value: 251658240 (240 MiB). For details, see ClickHouse documentation |
|
max_bytes_before_external_group_by |
Sets the threshold of RAM consumption (in bytes) after which the temporary data, collected during the GROUP BY operation, should be flushed to disk to limit RAM usage. By default, aggregation is done by employing a hash table that resides in RAM. A query can result in aggregation of huge data volumes that do not fit in RAM; in this case, flushing temporary data to disk keeps memory usage bounded. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_before_external_sort |
Sets the threshold of RAM consumption (in bytes) after which the temporary data, collected during the ORDER BY operation, should be flushed to disk. Default value: 0. For details, see ClickHouse documentation |
|
group_by_two_level_threshold |
Sets the threshold of the number of keys, after that the two-level aggregation should be used. 0 means threshold is not set. Default value: 100000. For details, see ClickHouse documentation |
|
group_by_two_level_threshold_bytes |
Sets the threshold of the number of bytes, after that the two-level aggregation should be used. 0 means threshold is not set. Default value: 50000000. For details, see ClickHouse documentation |
|
deduplicate_blocks_in_dependent_materialized_views |
Enables or disables the deduplication check for materialized views that receive data from replicated tables. Default value: false. For details, see ClickHouse documentation |
|
local_filesystem_read_method |
enum LocalFilesystemReadMethod Method of reading data from local filesystem. The LOCAL_FILESYSTEM_READ_METHOD_IO_URING is experimental and does not work for Log, TinyLog, StripeLog, File, Set and Join, and other tables with appendable files in presence of concurrent reads and writes. For details, see ClickHouse documentation
|
|
remote_filesystem_read_method |
enum RemoteFilesystemReadMethod Method of reading data from remote filesystem. Default value: REMOTE_FILESYSTEM_READ_METHOD_THREADPOOL. For details, see ClickHouse documentation
|
|
priority |
Sets the priority of a query.
If ClickHouse is working with high-priority queries, and a low-priority query enters, then the low-priority query is paused until the higher-priority queries complete. Default value: 0. For details, see ClickHouse documentation |
|
max_threads |
Limits the maximum number of threads to process the request. If set to 0, the number of threads is calculated automatically based on the number of available CPU cores. The setting applies to threads that perform the same stages of the query processing pipeline in parallel. It does not take threads that read data from remote servers into account. For details, see ClickHouse documentation |
|
max_insert_threads |
The maximum number of threads to execute the INSERT SELECT query. Default value: 0. For details, see ClickHouse documentation |
|
max_memory_usage |
Limits the maximum memory usage (in bytes) for processing of a single user's query on a single server. 0 means unlimited. This limitation is enforced for any user's single query on a single server. If you use max_bytes_before_external_group_by or max_bytes_before_external_sort setting, then it is recommended to set their values twice as low as max_memory_usage. Default value: 0. For details, see ClickHouse documentation |
|
max_memory_usage_for_user |
Limits the maximum memory usage (in bytes) for processing of user's queries on a single server. 0 means unlimited. This limitation is enforced for all queries that belong to one user and run simultaneously on a single server. Default value: 0. For details, see ClickHouse documentation |
|
memory_overcommit_ratio_denominator |
It represents the soft memory limit when the hard limit is reached on the global level. Default value: 1073741824 (1 GiB). For details, see ClickHouse documentation |
|
memory_overcommit_ratio_denominator_for_user |
It represents the soft memory limit when the hard limit is reached on the user level. Default value: 1073741824 (1 GiB). For details, see ClickHouse documentation |
|
memory_usage_overcommit_max_wait_microseconds |
Maximum time thread will wait for memory to be freed in the case of memory overcommit. If the timeout is reached and memory is not freed, an exception is thrown. Default value: 5000000 (5 seconds). For details, see ClickHouse documentation |
|
max_network_bandwidth |
The maximum speed of data exchange over the network in bytes per second for a query. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_network_bandwidth_for_user |
The maximum speed of data exchange over the network in bytes per second for all concurrently running user queries. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_temporary_data_on_disk_size_for_query |
The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running queries. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_temporary_data_on_disk_size_for_user |
The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running user queries. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_concurrent_queries_for_user |
The maximum number of simultaneously processed queries per user. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
force_index_by_date |
Disables query execution if the index cannot be used by date. This setting has effect only for tables of the MergeTree family. Default value: false. For details, see ClickHouse documentation |
|
force_primary_key |
Disables query execution if indexing by the primary key cannot be used. This setting has effect only for tables of the MergeTree family. Default value: false. For details, see ClickHouse documentation |
|
max_rows_to_read |
Limits the maximum number of rows that can be read from a table when running a query. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_to_read |
Limits the maximum number of bytes (uncompressed data) that can be read from a table when running a query. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
read_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits while reading the data. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_to_group_by |
Limits the maximum number of unique keys received from aggregation. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
group_by_overflow_mode |
enum GroupByOverflowMode Determines the behavior on exceeding limits while doing aggregation. Default value: GROUP_BY_OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_to_sort |
Limits the maximum number of rows that can be read from a table for sorting. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_to_sort |
Limits the maximum number of bytes (uncompressed data) that can be read from a table for sorting. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
sort_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits while sorting. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_result_rows |
Limits the number of rows in the result. 0 means unlimited. This limitation is also checked for subqueries and parts of distributed queries that run on remote servers. Default value: 0. For details, see ClickHouse documentation |
|
max_result_bytes |
Limits the result size in bytes (uncompressed data). 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
result_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits while forming result. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_in_distinct |
Limits the maximum number of different rows in the state, which is used for performing DISTINCT. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_in_distinct |
Limits the maximum number of bytes (uncompressed data) in the state, which is used for performing DISTINCT. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
distinct_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits while performing DISTINCT. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_to_transfer |
Limits the maximum number of rows that can be passed to a remote server or saved in a temporary table when using GLOBAL IN|JOIN. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_to_transfer |
Limits the maximum number of bytes (uncompressed data) that can be passed to a remote server or saved in a temporary table when using GLOBAL IN|JOIN. Default value: 0. For details, see ClickHouse documentation |
|
transfer_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits while transfering data. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_execution_time |
Limits the maximum query execution time in milliseconds. 0 means unlimited. The timeout is checked and the query can stop only in designated places during data processing. Default value: 0. For details, see ClickHouse documentation |
|
timeout_overflow_mode |
enum OverflowMode Determines the behavior on exceeding limits of execution time. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_in_set |
Limits on the maximum number of rows in the set resulting from the execution of the IN section. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_in_set |
Limits on the maximum number of bytes (uncompressed data) in the set resulting from the execution of the IN section. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
set_overflow_mode |
enum OverflowMode Determines the behavior on exceeding max_rows_in_set or max_bytes_in_set limit. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_rows_in_join |
Limits the maximum number of rows in the hash table that is used when joining tables. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_bytes_in_join |
Limits the maximum number of bytes in the hash table that is used when joining tables. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
join_overflow_mode |
enum OverflowMode Determines the behavior on exceeding max_rows_in_join or max_bytes_in_join limit. Default value: OVERFLOW_MODE_THROW. For details, see ClickHouse documentation
|
|
max_columns_to_read |
Limits the maximum number of columns that can be read from a table in a single query. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_temporary_columns |
Limits the maximum number of temporary columns that must be kept in RAM simultaneously when running a query, including constant columns. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_temporary_non_const_columns |
Limits the maximum number of temporary columns that must be kept in RAM simultaneously when running a query, not including constant columns. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
max_query_size |
Limits the size of the part of a query that can be transferred to RAM for parsing with the SQL parser, in bytes. Data in the VALUES clause of INSERT queries is processed by a separate stream parser (that consumes O(1) RAM) and not affected by this restriction. Default value: 262144 (256 KiB). For details, see ClickHouse documentation |
|
max_ast_depth |
Limits the maximum depth of query syntax tree. Executing a big and complex query may result in building a syntax tree of enormous depth. Default value: 1000. For details, see ClickHouse documentation |
|
max_ast_elements |
Limits the maximum size of query syntax tree in number of nodes. Executing a big and complex query may result in building a syntax tree of enormous size. Default value: 50000. For details, see ClickHouse documentation |
|
max_expanded_ast_elements |
Limits the maximum size of query syntax tree in number of nodes after expansion of aliases and the asterisk values. Executing a big and complex query may result in building a syntax tree of enormous size. Default value: 500000. For details, see ClickHouse documentation |
|
max_parser_depth |
Limits maximum recursion depth in the recursive descent parser. Allows controlling the stack size. If set to 0, recursion depth is unlimited. Default value: 1000. For details, see ClickHouse documentation |
|
min_execution_speed |
Minimal execution speed in rows per second. Checked on every data block when timeout_before_checking_execution_speed expires. Default value: 0. For details, see ClickHouse documentation |
|
min_execution_speed_bytes |
Minimal execution speed in bytes per second. Checked on every data block when timeout_before_checking_execution_speed expires. Default value: 0. For details, see ClickHouse documentation |
|
input_format_values_interpret_expressions |
Enables or disables SQL parser if the fast stream parser cannot parse the data. Enable this setting, if the data that you want to insert into a table contains SQL expressions. For example, the stream parser is unable to parse a value that contains now() expression; therefore an INSERT query for this value will fail unless the data is interpreted by the SQL parser. This setting has effect only if you use Values format when inserting data. Default value: true. For details, see ClickHouse documentation |
|
input_format_defaults_for_omitted_fields |
Enables or disables replacing omitted input values with default values of the respective columns when performing INSERT queries. Default value: true. For details, see ClickHouse documentation |
|
input_format_null_as_default |
Enables or disables the initialization of NULL fields with default values, if data type of these fields is not nullable. Default value: true. For details, see ClickHouse documentation |
|
input_format_with_names_use_header |
Enables or disables checking the column order when inserting data. Default value: true. For details, see ClickHouse documentation |
|
output_format_json_quote_64bit_integers |
Enables or disables quoting of 64-bit integers in JSON output format. If this setting is enabled, then 64-bit integers (UInt64 and Int64) will be quoted when written to JSON output in order to preserve compatibility with most JavaScript implementations. Default value: false for versions 25.8 and higher, true for versions 25.7 and lower. For details, see ClickHouse documentation |
|
output_format_json_quote_denormals |
Enables special floating-point values (+nan, -nan, +inf and -inf) in JSON output format. Default value: false. For details, see ClickHouse documentation |
|
date_time_input_format |
enum DateTimeInputFormat Specifies which of date time parsers to use. Default value: DATE_TIME_INPUT_FORMAT_BASIC. For details, see ClickHouse documentation
|
|
date_time_output_format |
enum DateTimeOutputFormat Specifies which of date time output formats to use. Default value: DATE_TIME_OUTPUT_FORMAT_SIMPLE. For details, see ClickHouse documentation
|
|
low_cardinality_allow_in_native_format |
Allows or restricts using the LowCardinality data type with the Native format. LowCardinality columns (aka sparse columns) store data in more effective way, compared to regular columns, by using hash tables. If you use a third-party ClickHouse client that can't work with LowCardinality columns, then this client will not be able to correctly interpret the query result when this setting is enabled. Official ClickHouse client works with LowCardinality columns out-of-the-box. Default value: true. For details, see ClickHouse documentation |
|
empty_result_for_aggregation_by_empty_set |
Enables or disables returning of empty result when aggregating without keys (with GROUP BY operation absent) on empty set (e.g., SELECT count(*) FROM table WHERE 0).
Default value: false. For details, see ClickHouse documentation |
|
format_regexp |
string Regular expression (for Regexp format). For details, see ClickHouse documentation |
|
format_regexp_escaping_rule |
enum FormatRegexpEscapingRule Field escaping rule (for Regexp format). Default value: FORMAT_REGEXP_ESCAPING_RULE_RAW. For details, see ClickHouse documentation
|
|
format_regexp_skip_unmatched |
Skip lines unmatched by regular expression (for Regexp format) Default value: false. For details, see ClickHouse documentation |
|
input_format_parallel_parsing |
Enables or disables order-preserving parallel parsing of data formats. Supported only for TSV, TSKV, CSV and JSONEachRow formats. Default value: true. For details, see ClickHouse documentation |
|
input_format_import_nested_json |
Enables or disables the insertion of JSON data with nested objects. Default value: false. For details, see ClickHouse documentation |
|
format_avro_schema_registry_url |
string Avro schema registry URL. For details, see ClickHouse documentation |
|
data_type_default_nullable |
Allows data types without explicit modifiers NULL or NOT NULL in column definition will be Nullable. Default value: false. For details, see ClickHouse documentation |
|
http_connection_timeout |
HTTP connection timeout, in milliseconds. Default value: 1000 (1 second). For details, see ClickHouse documentation |
|
http_receive_timeout |
HTTP receive timeout, in milliseconds. Default value: 30000 (30 seconds). For details, see ClickHouse documentation |
|
http_send_timeout |
HTTP send timeout, in milliseconds. Default value: 30000 (30 seconds). For details, see ClickHouse documentation |
|
enable_http_compression |
Enables or disables data compression in HTTP responses. By default, ClickHouse stores data compressed. When executing a query, its result is uncompressed. Enable this setting and add the Accept-Encoding: <compression method> HTTP header in a HTTP request to force compression of HTTP response from ClickHouse. ClickHouse support the following compression methods: gzip, br and deflate. Default value: false. For details, see ClickHouse documentation |
|
send_progress_in_http_headers |
Enables or disables progress notifications using X-ClickHouse-Progress HTTP header. Default value: false. For details, see ClickHouse documentation |
|
http_headers_progress_interval |
Minimum interval between progress notifications with X-ClickHouse-Progress HTTP header, in milliseconds. Default value: 100. For details, see ClickHouse documentation |
|
add_http_cors_header |
Adds CORS header in HTTP responses. Default value: false. For details, see ClickHouse documentation |
|
cancel_http_readonly_queries_on_client_close |
Cancels HTTP read-only queries (e.g. SELECT) when a client closes the connection without waiting for the response. Default value: false. For details, see ClickHouse documentation |
|
max_http_get_redirects |
Limits the maximum number of HTTP GET redirect hops. If set to 0, no hops is allowed. Default value: 0. For details, see ClickHouse documentation |
|
http_max_field_name_size |
Maximum length of field name in HTTP header. Default value: 131072. For details, see ClickHouse documentation |
|
http_max_field_value_size |
Maximum length of field value in HTTP header. Default value: 131072. For details, see ClickHouse documentation |
|
quota_mode |
enum QuotaMode Quota accounting mode. Default value: QUOTA_MODE_DEFAULT.
|
|
async_insert |
If enabled, data from INSERT query is stored in queue and later flushed to table in background. Default value: false. For details, see ClickHouse documentation |
|
wait_for_async_insert |
Enables or disables waiting for processing of asynchronous insertion. If enabled, server returns OK only after the data is inserted. Default value: true. For details, see ClickHouse documentation |
|
wait_for_async_insert_timeout |
Timeout for waiting for processing asynchronous inserts, in seconds. Default value: 120 (2 minutes). For details, see ClickHouse documentation |
|
async_insert_max_data_size |
The maximum size of the unparsed data in bytes collected per query before being inserted. Default value: 10485760 (10 MiB). For details, see ClickHouse documentation |
|
async_insert_busy_timeout |
Maximum time to wait before dumping collected data per query since the first data appeared. Default value: 200. For details, see ClickHouse documentation |
|
async_insert_use_adaptive_busy_timeout |
Enables or disables adaptive busy timeout for asynchronous inserts. Default value: true. For details, see ClickHouse documentation |
|
log_query_threads |
Enables or disables query threads logging to the system.query_thread_log table. Default value: false. For details, see ClickHouse documentation |
|
log_query_views |
Enables or disables query views logging to the system.query_views_log table. Default value: true. For details, see ClickHouse documentation |
|
log_queries_probability |
Log queries with the specified probability. Default value: 1. For details, see ClickHouse documentation |
|
log_processors_profiles |
Enables or disables logging of processors level profiling data to the system.processors_profile_log table. Default value: false. For details, see ClickHouse documentation |
|
use_query_cache |
If turned on, SELECT queries may utilize the query cache. Default value: false. For details, see ClickHouse documentation |
|
enable_reads_from_query_cache |
If turned on, results of SELECT queries are retrieved from the query cache. Default value: true. For details, see ClickHouse documentation |
|
enable_writes_to_query_cache |
If turned on, results of SELECT queries are stored in the query cache. Default value: true. For details, see ClickHouse documentation |
|
query_cache_min_query_runs |
Minimum number of times a SELECT query must run before its result is stored in the query cache. Default value: 0. For details, see ClickHouse documentation |
|
query_cache_min_query_duration |
Minimum duration in milliseconds a query needs to run for its result to be stored in the query cache. Default value: 0. For details, see ClickHouse documentation |
|
query_cache_ttl |
After this time in seconds entries in the query cache become stale. Default value: 60 (1 minute). For details, see ClickHouse documentation |
|
query_cache_max_entries |
The maximum number of query results the current user may store in the query cache. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
query_cache_max_size_in_bytes |
The maximum amount of memory (in bytes) the current user may allocate in the query cache. 0 means unlimited. Default value: 0. For details, see ClickHouse documentation |
|
query_cache_tag |
string A string which acts as a label for query cache entries. The same queries with different tags are considered different by the query cache. For details, see ClickHouse documentation |
|
query_cache_share_between_users |
If turned on, the result of SELECT queries cached in the query cache can be read by other users. It is not recommended to enable this setting due to security reasons. Default value: false. For details, see ClickHouse documentation |
|
query_cache_nondeterministic_function_handling |
enum QueryCacheNondeterministicFunctionHandling Controls how the query cache handles SELECT queries with non-deterministic functions like rand() or now(). Default value: QUERY_CACHE_NONDETERMINISTIC_FUNCTION_HANDLING_THROW. For details, see ClickHouse documentation
|
|
query_cache_system_table_handling |
enum QueryCacheSystemTableHandling Controls how the query cache handles SELECT queries against system tables. Default value: QUERY_CACHE_SYSTEM_TABLE_HANDLING_THROW. For details, see ClickHouse documentation
|
|
count_distinct_implementation |
enum CountDistinctImplementation Specifies which of the uniq* functions should be used to perform the COUNT(DISTINCT ...) construction. Default value: COUNT_DISTINCT_IMPLEMENTATION_UNIQ_EXACT. For details, see ClickHouse documentation
|
|
joined_subquery_requires_alias |
Force joined subqueries and table functions to have aliases for correct name qualification. Default value: true. For details, see ClickHouse documentation |
|
join_use_nulls |
Determines JOIN behavior on filling empty cells when merging tables. If enabled, the empty cells are filled with NULL. Default value: false. For details, see ClickHouse documentation |
|
transform_null_in |
Enables equality of NULL values for IN operator. By default, NULL values can't be compared because NULL means undefined value. Thus, comparison expr = NULL must always return false. Default value: false. For details, see ClickHouse documentation |
|
insert_null_as_default |
Enables or disables the insertion of default values instead of NULL into columns with not nullable data type. If column type is not nullable and this setting is disabled, then inserting NULL causes an exception. Default value: true. For details, see ClickHouse documentation |
|
join_algorithm[] |
enum JoinAlgorithm Specifies which JOIN algorithm to use. Default value: JOIN_ALGORITHM_DIRECT,JOIN_ALGORITHM_PARALLEL_HASH,JOIN_ALGORITHM_HASH for versions 24.12 and higher, JOIN_ALGORITHM_DIRECT,JOIN_ALGORITHM_AUTO for versions from 23.8 to 24.11, JOIN_ALGORITHM_AUTO for versions 23.7 and lower. For details, see ClickHouse documentation
|
|
any_join_distinct_right_table_keys |
Enables legacy ClickHouse server behaviour in ANY INNER|LEFT JOIN operations. Default value: false. For details, see ClickHouse documentation |
|
allow_suspicious_low_cardinality_types |
Allows or restricts using LowCardinality with data types with fixed size of 8 bytes or less. Default value: false. For details, see ClickHouse documentation |
|
flatten_nested |
Sets the data format of nested columns. Default value: true. For details, see ClickHouse documentation |
|
memory_profiler_step |
Sets the step of memory profiler. Whenever query memory usage becomes larger than every next step in number of bytes, the memory profiler collects the allocating stack trace. Default value: 4194304. For details, see ClickHouse documentation |
|
memory_profiler_sample_probability |
Collect random allocations and deallocations and write them into system.trace_log with MemorySample trace_type. Default value: 0. For details, see ClickHouse documentation |
|
max_final_threads |
Sets the maximum number of parallel threads for the SELECT query data read phase with the FINAL modifier. For details, see ClickHouse documentation |
|
max_read_buffer_size |
The maximum size of the buffer to read from the filesystem. Default value: 1048576 (1 MiB). For details, see ClickHouse documentation |
|
insert_keeper_max_retries |
The setting sets the maximum number of retries for ClickHouse Keeper (or ZooKeeper) requests during insert into replicated MergeTree tables. Default value: 20. For details, see ClickHouse documentation |
|
do_not_merge_across_partitions_select_final |
Enable or disable independent processing of partitions for SELECT queries with FINAL. Default value: false. For details, see ClickHouse documentation |
|
ignore_materialized_views_with_dropped_target_table |
Ignore materialized views with dropped target table during pushing to views. Default value: false. For details, see ClickHouse documentation |
|
enable_analyzer |
Enables or disables new query analyzer. Default value: true for versions 25.9 and higher, false for version 25.8, true for versions from 25.5 to 25.7, false for versions 25.4 and lower. For details, see ClickHouse documentation |
|
s3_use_adaptive_timeouts |
Enables or disables adaptive timeouts for S3 requests. Default value: true. For details, see ClickHouse documentation |
|
final |
If enabled, automatically applies the FINAL modifier to all tables in a query where FINAL is applicable, including joined tables and tables in subqueries. Default value: false. For details, see ClickHouse documentation |
|
compile |
The setting is deprecated and has no effect. |
|
min_count_to_compile |
The setting is deprecated and has no effect. |
|
async_insert_threads |
The setting is deprecated and has no effect. |
|
async_insert_stale_timeout |
The setting is deprecated and has no effect. |
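These per-user settings are passed in the settings field of each UserSpec in the request. A minimal sketch of such a fragment, written as a Python dict that mirrors the documented field names (the user name, password, and chosen values are illustrative, not defaults):

# Illustrative UserSpec fragment with a few of the settings described above.
user_spec = {
    "name": "analytics",                 # hypothetical user name
    "password": "<password>",
    "permissions": [{"database_name": "db1"}],
    "settings": {
        "join_use_nulls": True,
        "insert_null_as_default": True,
        "join_algorithm": ["JOIN_ALGORITHM_HASH", "JOIN_ALGORITHM_PARALLEL_HASH"],
        "max_final_threads": 8,
    },
}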
UserQuota
ClickHouse quota representation. Each quota is associated with a user and limits that user's resource usage over an interval.
For details, see ClickHouse documentation
|
Field |
Description |
|
interval_duration |
Duration of interval for quota in milliseconds. |
|
queries |
The total number of queries. 0 means unlimited. |
|
errors |
The number of queries that threw an exception. 0 means unlimited. |
|
result_rows |
The total number of rows given as the result. 0 means unlimited. |
|
read_rows |
The total number of source rows read from tables for running the query, on all remote servers. 0 means unlimited. |
|
execution_time |
The total query execution time, in milliseconds (wall time). 0 means unlimited. |
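For example, a quota that caps a user at 1000 queries and one hour of total execution time per day could be expressed as follows (a Python dict mirroring the field names above; all values are illustrative):

# Hypothetical quota: counters reset every 24-hour interval.
user_quota = {
    "interval_duration": 24 * 60 * 60 * 1000,  # 86400000 ms = 24 hours
    "queries": 1000,                # at most 1000 queries per interval
    "errors": 0,                    # 0 means unlimited
    "result_rows": 0,               # 0 means unlimited
    "read_rows": 0,                 # 0 means unlimited
    "execution_time": 3600 * 1000,  # at most 1 hour of wall time, in milliseconds
}
# A UserSpec can carry several such quotas with different interval durations.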
HostSpec
|
Field |
Description |
|
zone_id |
string ID of the availability zone where the host resides. |
|
type |
enum Type Required field. Type of the host to be deployed.
|
|
subnet_id |
string ID of the subnet that the host should belong to. This subnet should be a part of the network that the cluster belongs to. |
|
assign_public_ip |
bool Whether the host should get a public IP address on creation. After a host has been created, this setting cannot be changed. To remove an assigned public IP address, or to assign one to a host without it, recreate the host with the appropriate assign_public_ip value. Possible values:
|
|
shard_name |
string Name of the shard that the host is assigned to. |
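An illustrative HostSpec, again as a Python dict with the documented field names (zone, subnet, and shard identifiers are placeholders; the host type enum literal is an assumption):

# One ClickHouse host in a specific availability zone and subnet.
host_spec = {
    "zone_id": "ru-central1-a",    # placeholder availability zone
    "type": "CLICKHOUSE",          # assumed Type enum value for a ClickHouse host
    "subnet_id": "<subnet-id>",    # subnet inside the cluster network
    "assign_public_ip": False,     # cannot be changed after the host is created
    "shard_name": "shard1",        # optional shard assignment
}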
MaintenanceWindow
Maintenance window settings.
|
Field |
Description |
|
anytime |
Maintenance operation can be scheduled anytime. Includes only one of the fields `anytime`, `weekly_maintenance_window`. The maintenance policy in effect. |
|
weekly_maintenance_window |
Maintenance operation can be scheduled on a weekly basis. Includes only one of the fields `anytime`, `weekly_maintenance_window`. The maintenance policy in effect. |
AnytimeMaintenanceWindow
|
Field |
Description |
|
Empty |
|
WeeklyMaintenanceWindow
Weekly maintenance window settings.
|
Field |
Description |
|
day |
enum WeekDay Day of the week (in DDD format).
|
|
hour |
int64 Hour of the day in UTC (in HH format). |
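The two policies are mutually exclusive: a request sets exactly one of anytime or weekly_maintenance_window. Illustrative values follow (the WeekDay enum literal is an assumption based on the DDD format noted above):

# Option A: maintenance can be scheduled at any time.
maintenance_anytime = {"anytime": {}}

# Option B: maintenance restricted to Mondays at 03:00 UTC.
maintenance_weekly = {
    "weekly_maintenance_window": {
        "day": "MON",   # assumed three-letter WeekDay value
        "hour": 3,      # hour of the day in UTC
    }
}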
ShardSpec
|
Field |
Description |
|
name |
string Required field. Name of the shard to be created. |
|
config_spec |
Configuration of the shard to be created. |
|
shard_group_names[] |
string Shard groups that contain the shard. |
ShardConfigSpec
|
Field |
Description |
|
clickhouse |
ClickHouse configuration for a shard. |
Clickhouse
|
Field |
Description |
|
config |
ClickHouse settings for the shard. |
|
resources |
Computational resources for the shard. |
|
weight |
Relative weight of the shard considered when writing data to the cluster. |
|
disk_size_autoscaling |
Disk size autoscaling settings. |
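Putting ShardSpec, ShardConfigSpec, and the shard-level Clickhouse message together, an extra shard with its own resources and a reduced relative write weight could look like this (resource preset and disk type IDs are placeholders):

# A shard with dedicated resources and a reduced relative write weight.
shard_spec = {
    "name": "shard2",
    "config_spec": {
        "clickhouse": {
            "resources": {
                "resource_preset_id": "<preset-id>",
                "disk_size": 34359738368,     # 32 GiB, in bytes
                "disk_type_id": "<disk-type-id>",
            },
            "weight": 50,   # relative to the weights of the other shards
        }
    },
    "shard_group_names": ["main_group"],
}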
operation.Operation
{
"id": "string",
"description": "string",
"created_at": "google.protobuf.Timestamp",
"created_by": "string",
"modified_at": "google.protobuf.Timestamp",
"done": "bool",
"metadata": {
"cluster_id": "string"
},
// Includes only one of the fields `error`, `response`
"error": "google.rpc.Status",
"response": {
"id": "string",
"folder_id": "string",
"created_at": "google.protobuf.Timestamp",
"name": "string",
"description": "string",
"labels": "map<string, string>",
"environment": "Environment",
"monitoring": [
{
"name": "string",
"description": "string",
"link": "string"
}
],
"config": {
"version": "string",
"clickhouse": {
"config": {
"effective_config": {
"background_pool_size": "google.protobuf.Int64Value",
"background_merges_mutations_concurrency_ratio": "google.protobuf.Int64Value",
"background_schedule_pool_size": "google.protobuf.Int64Value",
"background_fetches_pool_size": "google.protobuf.Int64Value",
"background_move_pool_size": "google.protobuf.Int64Value",
"background_distributed_schedule_pool_size": "google.protobuf.Int64Value",
"background_buffer_flush_schedule_pool_size": "google.protobuf.Int64Value",
"background_message_broker_schedule_pool_size": "google.protobuf.Int64Value",
"background_common_pool_size": "google.protobuf.Int64Value",
"dictionaries_lazy_load": "google.protobuf.BoolValue",
"log_level": "LogLevel",
"query_log_retention_size": "google.protobuf.Int64Value",
"query_log_retention_time": "google.protobuf.Int64Value",
"query_thread_log_enabled": "google.protobuf.BoolValue",
"query_thread_log_retention_size": "google.protobuf.Int64Value",
"query_thread_log_retention_time": "google.protobuf.Int64Value",
"part_log_retention_size": "google.protobuf.Int64Value",
"part_log_retention_time": "google.protobuf.Int64Value",
"metric_log_enabled": "google.protobuf.BoolValue",
"metric_log_retention_size": "google.protobuf.Int64Value",
"metric_log_retention_time": "google.protobuf.Int64Value",
"trace_log_enabled": "google.protobuf.BoolValue",
"trace_log_retention_size": "google.protobuf.Int64Value",
"trace_log_retention_time": "google.protobuf.Int64Value",
"text_log_enabled": "google.protobuf.BoolValue",
"text_log_retention_size": "google.protobuf.Int64Value",
"text_log_retention_time": "google.protobuf.Int64Value",
"text_log_level": "LogLevel",
"opentelemetry_span_log_enabled": "google.protobuf.BoolValue",
"opentelemetry_span_log_retention_size": "google.protobuf.Int64Value",
"opentelemetry_span_log_retention_time": "google.protobuf.Int64Value",
"query_views_log_enabled": "google.protobuf.BoolValue",
"query_views_log_retention_size": "google.protobuf.Int64Value",
"query_views_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_metric_log_enabled": "google.protobuf.BoolValue",
"asynchronous_metric_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_metric_log_retention_time": "google.protobuf.Int64Value",
"session_log_enabled": "google.protobuf.BoolValue",
"session_log_retention_size": "google.protobuf.Int64Value",
"session_log_retention_time": "google.protobuf.Int64Value",
"zookeeper_log_enabled": "google.protobuf.BoolValue",
"zookeeper_log_retention_size": "google.protobuf.Int64Value",
"zookeeper_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_insert_log_enabled": "google.protobuf.BoolValue",
"asynchronous_insert_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_insert_log_retention_time": "google.protobuf.Int64Value",
"processors_profile_log_enabled": "google.protobuf.BoolValue",
"processors_profile_log_retention_size": "google.protobuf.Int64Value",
"processors_profile_log_retention_time": "google.protobuf.Int64Value",
"error_log_enabled": "google.protobuf.BoolValue",
"error_log_retention_size": "google.protobuf.Int64Value",
"error_log_retention_time": "google.protobuf.Int64Value",
"access_control_improvements": {
"select_from_system_db_requires_grant": "google.protobuf.BoolValue",
"select_from_information_schema_requires_grant": "google.protobuf.BoolValue"
},
"max_connections": "google.protobuf.Int64Value",
"max_concurrent_queries": "google.protobuf.Int64Value",
"max_table_size_to_drop": "google.protobuf.Int64Value",
"max_partition_size_to_drop": "google.protobuf.Int64Value",
"keep_alive_timeout": "google.protobuf.Int64Value",
"uncompressed_cache_size": "google.protobuf.Int64Value",
"mark_cache_size": "google.protobuf.Int64Value",
"timezone": "string",
"geobase_enabled": "google.protobuf.BoolValue",
"geobase_uri": "string",
"default_database": "google.protobuf.StringValue",
"total_memory_profiler_step": "google.protobuf.Int64Value",
"total_memory_tracker_sample_probability": "google.protobuf.DoubleValue",
"async_insert_threads": "google.protobuf.Int64Value",
"backup_threads": "google.protobuf.Int64Value",
"restore_threads": "google.protobuf.Int64Value",
"merge_tree": {
"parts_to_delay_insert": "google.protobuf.Int64Value",
"parts_to_throw_insert": "google.protobuf.Int64Value",
"inactive_parts_to_delay_insert": "google.protobuf.Int64Value",
"inactive_parts_to_throw_insert": "google.protobuf.Int64Value",
"max_avg_part_size_for_too_many_parts": "google.protobuf.Int64Value",
"max_parts_in_total": "google.protobuf.Int64Value",
"max_replicated_merges_in_queue": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_lower_max_size_of_merge": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_execute_mutation": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_min_space_in_pool": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_max_space_in_pool": "google.protobuf.Int64Value",
"min_bytes_for_wide_part": "google.protobuf.Int64Value",
"min_rows_for_wide_part": "google.protobuf.Int64Value",
"cleanup_delay_period": "google.protobuf.Int64Value",
"max_cleanup_delay_period": "google.protobuf.Int64Value",
"merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"max_merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"min_age_to_force_merge_seconds": "google.protobuf.Int64Value",
"min_age_to_force_merge_on_partition_only": "google.protobuf.BoolValue",
"merge_max_block_size": "google.protobuf.Int64Value",
"deduplicate_merge_projection_mode": "DeduplicateMergeProjectionMode",
"lightweight_mutation_projection_mode": "LightweightMutationProjectionMode",
"replicated_deduplication_window": "google.protobuf.Int64Value",
"replicated_deduplication_window_seconds": "google.protobuf.Int64Value",
"fsync_after_insert": "google.protobuf.BoolValue",
"fsync_part_directory": "google.protobuf.BoolValue",
"min_compressed_bytes_to_fsync_after_fetch": "google.protobuf.Int64Value",
"min_compressed_bytes_to_fsync_after_merge": "google.protobuf.Int64Value",
"min_rows_to_fsync_after_merge": "google.protobuf.Int64Value",
"ttl_only_drop_parts": "google.protobuf.BoolValue",
"merge_with_ttl_timeout": "google.protobuf.Int64Value",
"merge_with_recompression_ttl_timeout": "google.protobuf.Int64Value",
"max_number_of_merges_with_ttl_in_pool": "google.protobuf.Int64Value",
"materialize_ttl_recalculate_only": "google.protobuf.BoolValue",
"check_sample_column_is_correct": "google.protobuf.BoolValue",
"allow_remote_fs_zero_copy_replication": "google.protobuf.BoolValue"
},
"compression": [
{
"method": "Method",
"min_part_size": "int64",
"min_part_size_ratio": "double",
"level": "google.protobuf.Int64Value"
}
],
"dictionaries": [
{
"name": "string",
"structure": {
"id": {
"name": "string"
},
"key": {
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"range_min": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"range_max": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"layout": {
"type": "Type",
"size_in_cells": "int64",
"allow_read_expired_keys": "google.protobuf.BoolValue",
"max_update_queue_size": "int64",
"update_queue_push_timeout_milliseconds": "int64",
"query_wait_timeout_milliseconds": "int64",
"max_threads_for_updates": "int64",
"initial_array_size": "int64",
"max_array_size": "int64",
"access_to_key_from_attributes": "google.protobuf.BoolValue"
},
// Includes only one of the fields `fixed_lifetime`, `lifetime_range`
"fixed_lifetime": "int64",
"lifetime_range": {
"min": "int64",
"max": "int64"
},
// end of the list of possible fields
// Includes only one of the fields `http_source`, `mysql_source`, `clickhouse_source`, `mongodb_source`, `postgresql_source`
"http_source": {
"url": "string",
"format": "string",
"headers": [
{
"name": "string",
"value": "string"
}
]
},
"mysql_source": {
"db": "string",
"table": "string",
"port": "int64",
"user": "string",
"password": "string",
"replicas": [
{
"host": "string",
"priority": "int64",
"port": "int64",
"user": "string",
"password": "string"
}
],
"where": "string",
"invalidate_query": "string",
"close_connection": "google.protobuf.BoolValue",
"share_connection": "google.protobuf.BoolValue"
},
"clickhouse_source": {
"db": "string",
"table": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"where": "string",
"secure": "google.protobuf.BoolValue"
},
"mongodb_source": {
"db": "string",
"collection": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"options": "string"
},
"postgresql_source": {
"db": "string",
"table": "string",
"hosts": [
"string"
],
"port": "int64",
"user": "string",
"password": "string",
"invalidate_query": "string",
"ssl_mode": "SslMode"
}
// end of the list of possible fields
}
],
"graphite_rollup": [
{
"name": "string",
"patterns": [
{
"regexp": "string",
"function": "string",
"retention": [
{
"age": "int64",
"precision": "int64"
}
]
}
],
"path_column_name": "string",
"time_column_name": "string",
"value_column_name": "string",
"version_column_name": "string"
}
],
"kafka": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
},
"kafka_topics": [
{
"name": "string",
"settings": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
}
}
],
"rabbitmq": {
"username": "string",
"password": "string",
"vhost": "string"
},
"query_masking_rules": [
{
"name": "string",
"regexp": "string",
"replace": "string"
}
],
"query_cache": {
"max_size_in_bytes": "google.protobuf.Int64Value",
"max_entries": "google.protobuf.Int64Value",
"max_entry_size_in_bytes": "google.protobuf.Int64Value",
"max_entry_size_in_rows": "google.protobuf.Int64Value"
},
"jdbc_bridge": {
"host": "string",
"port": "google.protobuf.Int64Value"
},
"mysql_protocol": "google.protobuf.BoolValue",
"custom_macros": [
{
"name": "string",
"value": "string"
}
],
"builtin_dictionaries_reload_interval": "google.protobuf.Int64Value"
},
"user_config": {
"background_pool_size": "google.protobuf.Int64Value",
"background_merges_mutations_concurrency_ratio": "google.protobuf.Int64Value",
"background_schedule_pool_size": "google.protobuf.Int64Value",
"background_fetches_pool_size": "google.protobuf.Int64Value",
"background_move_pool_size": "google.protobuf.Int64Value",
"background_distributed_schedule_pool_size": "google.protobuf.Int64Value",
"background_buffer_flush_schedule_pool_size": "google.protobuf.Int64Value",
"background_message_broker_schedule_pool_size": "google.protobuf.Int64Value",
"background_common_pool_size": "google.protobuf.Int64Value",
"dictionaries_lazy_load": "google.protobuf.BoolValue",
"log_level": "LogLevel",
"query_log_retention_size": "google.protobuf.Int64Value",
"query_log_retention_time": "google.protobuf.Int64Value",
"query_thread_log_enabled": "google.protobuf.BoolValue",
"query_thread_log_retention_size": "google.protobuf.Int64Value",
"query_thread_log_retention_time": "google.protobuf.Int64Value",
"part_log_retention_size": "google.protobuf.Int64Value",
"part_log_retention_time": "google.protobuf.Int64Value",
"metric_log_enabled": "google.protobuf.BoolValue",
"metric_log_retention_size": "google.protobuf.Int64Value",
"metric_log_retention_time": "google.protobuf.Int64Value",
"trace_log_enabled": "google.protobuf.BoolValue",
"trace_log_retention_size": "google.protobuf.Int64Value",
"trace_log_retention_time": "google.protobuf.Int64Value",
"text_log_enabled": "google.protobuf.BoolValue",
"text_log_retention_size": "google.protobuf.Int64Value",
"text_log_retention_time": "google.protobuf.Int64Value",
"text_log_level": "LogLevel",
"opentelemetry_span_log_enabled": "google.protobuf.BoolValue",
"opentelemetry_span_log_retention_size": "google.protobuf.Int64Value",
"opentelemetry_span_log_retention_time": "google.protobuf.Int64Value",
"query_views_log_enabled": "google.protobuf.BoolValue",
"query_views_log_retention_size": "google.protobuf.Int64Value",
"query_views_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_metric_log_enabled": "google.protobuf.BoolValue",
"asynchronous_metric_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_metric_log_retention_time": "google.protobuf.Int64Value",
"session_log_enabled": "google.protobuf.BoolValue",
"session_log_retention_size": "google.protobuf.Int64Value",
"session_log_retention_time": "google.protobuf.Int64Value",
"zookeeper_log_enabled": "google.protobuf.BoolValue",
"zookeeper_log_retention_size": "google.protobuf.Int64Value",
"zookeeper_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_insert_log_enabled": "google.protobuf.BoolValue",
"asynchronous_insert_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_insert_log_retention_time": "google.protobuf.Int64Value",
"processors_profile_log_enabled": "google.protobuf.BoolValue",
"processors_profile_log_retention_size": "google.protobuf.Int64Value",
"processors_profile_log_retention_time": "google.protobuf.Int64Value",
"error_log_enabled": "google.protobuf.BoolValue",
"error_log_retention_size": "google.protobuf.Int64Value",
"error_log_retention_time": "google.protobuf.Int64Value",
"access_control_improvements": {
"select_from_system_db_requires_grant": "google.protobuf.BoolValue",
"select_from_information_schema_requires_grant": "google.protobuf.BoolValue"
},
"max_connections": "google.protobuf.Int64Value",
"max_concurrent_queries": "google.protobuf.Int64Value",
"max_table_size_to_drop": "google.protobuf.Int64Value",
"max_partition_size_to_drop": "google.protobuf.Int64Value",
"keep_alive_timeout": "google.protobuf.Int64Value",
"uncompressed_cache_size": "google.protobuf.Int64Value",
"mark_cache_size": "google.protobuf.Int64Value",
"timezone": "string",
"geobase_enabled": "google.protobuf.BoolValue",
"geobase_uri": "string",
"default_database": "google.protobuf.StringValue",
"total_memory_profiler_step": "google.protobuf.Int64Value",
"total_memory_tracker_sample_probability": "google.protobuf.DoubleValue",
"async_insert_threads": "google.protobuf.Int64Value",
"backup_threads": "google.protobuf.Int64Value",
"restore_threads": "google.protobuf.Int64Value",
"merge_tree": {
"parts_to_delay_insert": "google.protobuf.Int64Value",
"parts_to_throw_insert": "google.protobuf.Int64Value",
"inactive_parts_to_delay_insert": "google.protobuf.Int64Value",
"inactive_parts_to_throw_insert": "google.protobuf.Int64Value",
"max_avg_part_size_for_too_many_parts": "google.protobuf.Int64Value",
"max_parts_in_total": "google.protobuf.Int64Value",
"max_replicated_merges_in_queue": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_lower_max_size_of_merge": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_execute_mutation": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_min_space_in_pool": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_max_space_in_pool": "google.protobuf.Int64Value",
"min_bytes_for_wide_part": "google.protobuf.Int64Value",
"min_rows_for_wide_part": "google.protobuf.Int64Value",
"cleanup_delay_period": "google.protobuf.Int64Value",
"max_cleanup_delay_period": "google.protobuf.Int64Value",
"merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"max_merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"min_age_to_force_merge_seconds": "google.protobuf.Int64Value",
"min_age_to_force_merge_on_partition_only": "google.protobuf.BoolValue",
"merge_max_block_size": "google.protobuf.Int64Value",
"deduplicate_merge_projection_mode": "DeduplicateMergeProjectionMode",
"lightweight_mutation_projection_mode": "LightweightMutationProjectionMode",
"replicated_deduplication_window": "google.protobuf.Int64Value",
"replicated_deduplication_window_seconds": "google.protobuf.Int64Value",
"fsync_after_insert": "google.protobuf.BoolValue",
"fsync_part_directory": "google.protobuf.BoolValue",
"min_compressed_bytes_to_fsync_after_fetch": "google.protobuf.Int64Value",
"min_compressed_bytes_to_fsync_after_merge": "google.protobuf.Int64Value",
"min_rows_to_fsync_after_merge": "google.protobuf.Int64Value",
"ttl_only_drop_parts": "google.protobuf.BoolValue",
"merge_with_ttl_timeout": "google.protobuf.Int64Value",
"merge_with_recompression_ttl_timeout": "google.protobuf.Int64Value",
"max_number_of_merges_with_ttl_in_pool": "google.protobuf.Int64Value",
"materialize_ttl_recalculate_only": "google.protobuf.BoolValue",
"check_sample_column_is_correct": "google.protobuf.BoolValue",
"allow_remote_fs_zero_copy_replication": "google.protobuf.BoolValue"
},
"compression": [
{
"method": "Method",
"min_part_size": "int64",
"min_part_size_ratio": "double",
"level": "google.protobuf.Int64Value"
}
],
"dictionaries": [
{
"name": "string",
"structure": {
"id": {
"name": "string"
},
"key": {
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"range_min": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"range_max": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"layout": {
"type": "Type",
"size_in_cells": "int64",
"allow_read_expired_keys": "google.protobuf.BoolValue",
"max_update_queue_size": "int64",
"update_queue_push_timeout_milliseconds": "int64",
"query_wait_timeout_milliseconds": "int64",
"max_threads_for_updates": "int64",
"initial_array_size": "int64",
"max_array_size": "int64",
"access_to_key_from_attributes": "google.protobuf.BoolValue"
},
// Includes only one of the fields `fixed_lifetime`, `lifetime_range`
"fixed_lifetime": "int64",
"lifetime_range": {
"min": "int64",
"max": "int64"
},
// end of the list of possible fields
// Includes only one of the fields `http_source`, `mysql_source`, `clickhouse_source`, `mongodb_source`, `postgresql_source`
"http_source": {
"url": "string",
"format": "string",
"headers": [
{
"name": "string",
"value": "string"
}
]
},
"mysql_source": {
"db": "string",
"table": "string",
"port": "int64",
"user": "string",
"password": "string",
"replicas": [
{
"host": "string",
"priority": "int64",
"port": "int64",
"user": "string",
"password": "string"
}
],
"where": "string",
"invalidate_query": "string",
"close_connection": "google.protobuf.BoolValue",
"share_connection": "google.protobuf.BoolValue"
},
"clickhouse_source": {
"db": "string",
"table": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"where": "string",
"secure": "google.protobuf.BoolValue"
},
"mongodb_source": {
"db": "string",
"collection": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"options": "string"
},
"postgresql_source": {
"db": "string",
"table": "string",
"hosts": [
"string"
],
"port": "int64",
"user": "string",
"password": "string",
"invalidate_query": "string",
"ssl_mode": "SslMode"
}
// end of the list of possible fields
}
],
"graphite_rollup": [
{
"name": "string",
"patterns": [
{
"regexp": "string",
"function": "string",
"retention": [
{
"age": "int64",
"precision": "int64"
}
]
}
],
"path_column_name": "string",
"time_column_name": "string",
"value_column_name": "string",
"version_column_name": "string"
}
],
"kafka": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
},
"kafka_topics": [
{
"name": "string",
"settings": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
}
}
],
"rabbitmq": {
"username": "string",
"password": "string",
"vhost": "string"
},
"query_masking_rules": [
{
"name": "string",
"regexp": "string",
"replace": "string"
}
],
"query_cache": {
"max_size_in_bytes": "google.protobuf.Int64Value",
"max_entries": "google.protobuf.Int64Value",
"max_entry_size_in_bytes": "google.protobuf.Int64Value",
"max_entry_size_in_rows": "google.protobuf.Int64Value"
},
"jdbc_bridge": {
"host": "string",
"port": "google.protobuf.Int64Value"
},
"mysql_protocol": "google.protobuf.BoolValue",
"custom_macros": [
{
"name": "string",
"value": "string"
}
],
"builtin_dictionaries_reload_interval": "google.protobuf.Int64Value"
},
"default_config": {
"background_pool_size": "google.protobuf.Int64Value",
"background_merges_mutations_concurrency_ratio": "google.protobuf.Int64Value",
"background_schedule_pool_size": "google.protobuf.Int64Value",
"background_fetches_pool_size": "google.protobuf.Int64Value",
"background_move_pool_size": "google.protobuf.Int64Value",
"background_distributed_schedule_pool_size": "google.protobuf.Int64Value",
"background_buffer_flush_schedule_pool_size": "google.protobuf.Int64Value",
"background_message_broker_schedule_pool_size": "google.protobuf.Int64Value",
"background_common_pool_size": "google.protobuf.Int64Value",
"dictionaries_lazy_load": "google.protobuf.BoolValue",
"log_level": "LogLevel",
"query_log_retention_size": "google.protobuf.Int64Value",
"query_log_retention_time": "google.protobuf.Int64Value",
"query_thread_log_enabled": "google.protobuf.BoolValue",
"query_thread_log_retention_size": "google.protobuf.Int64Value",
"query_thread_log_retention_time": "google.protobuf.Int64Value",
"part_log_retention_size": "google.protobuf.Int64Value",
"part_log_retention_time": "google.protobuf.Int64Value",
"metric_log_enabled": "google.protobuf.BoolValue",
"metric_log_retention_size": "google.protobuf.Int64Value",
"metric_log_retention_time": "google.protobuf.Int64Value",
"trace_log_enabled": "google.protobuf.BoolValue",
"trace_log_retention_size": "google.protobuf.Int64Value",
"trace_log_retention_time": "google.protobuf.Int64Value",
"text_log_enabled": "google.protobuf.BoolValue",
"text_log_retention_size": "google.protobuf.Int64Value",
"text_log_retention_time": "google.protobuf.Int64Value",
"text_log_level": "LogLevel",
"opentelemetry_span_log_enabled": "google.protobuf.BoolValue",
"opentelemetry_span_log_retention_size": "google.protobuf.Int64Value",
"opentelemetry_span_log_retention_time": "google.protobuf.Int64Value",
"query_views_log_enabled": "google.protobuf.BoolValue",
"query_views_log_retention_size": "google.protobuf.Int64Value",
"query_views_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_metric_log_enabled": "google.protobuf.BoolValue",
"asynchronous_metric_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_metric_log_retention_time": "google.protobuf.Int64Value",
"session_log_enabled": "google.protobuf.BoolValue",
"session_log_retention_size": "google.protobuf.Int64Value",
"session_log_retention_time": "google.protobuf.Int64Value",
"zookeeper_log_enabled": "google.protobuf.BoolValue",
"zookeeper_log_retention_size": "google.protobuf.Int64Value",
"zookeeper_log_retention_time": "google.protobuf.Int64Value",
"asynchronous_insert_log_enabled": "google.protobuf.BoolValue",
"asynchronous_insert_log_retention_size": "google.protobuf.Int64Value",
"asynchronous_insert_log_retention_time": "google.protobuf.Int64Value",
"processors_profile_log_enabled": "google.protobuf.BoolValue",
"processors_profile_log_retention_size": "google.protobuf.Int64Value",
"processors_profile_log_retention_time": "google.protobuf.Int64Value",
"error_log_enabled": "google.protobuf.BoolValue",
"error_log_retention_size": "google.protobuf.Int64Value",
"error_log_retention_time": "google.protobuf.Int64Value",
"access_control_improvements": {
"select_from_system_db_requires_grant": "google.protobuf.BoolValue",
"select_from_information_schema_requires_grant": "google.protobuf.BoolValue"
},
"max_connections": "google.protobuf.Int64Value",
"max_concurrent_queries": "google.protobuf.Int64Value",
"max_table_size_to_drop": "google.protobuf.Int64Value",
"max_partition_size_to_drop": "google.protobuf.Int64Value",
"keep_alive_timeout": "google.protobuf.Int64Value",
"uncompressed_cache_size": "google.protobuf.Int64Value",
"mark_cache_size": "google.protobuf.Int64Value",
"timezone": "string",
"geobase_enabled": "google.protobuf.BoolValue",
"geobase_uri": "string",
"default_database": "google.protobuf.StringValue",
"total_memory_profiler_step": "google.protobuf.Int64Value",
"total_memory_tracker_sample_probability": "google.protobuf.DoubleValue",
"async_insert_threads": "google.protobuf.Int64Value",
"backup_threads": "google.protobuf.Int64Value",
"restore_threads": "google.protobuf.Int64Value",
"merge_tree": {
"parts_to_delay_insert": "google.protobuf.Int64Value",
"parts_to_throw_insert": "google.protobuf.Int64Value",
"inactive_parts_to_delay_insert": "google.protobuf.Int64Value",
"inactive_parts_to_throw_insert": "google.protobuf.Int64Value",
"max_avg_part_size_for_too_many_parts": "google.protobuf.Int64Value",
"max_parts_in_total": "google.protobuf.Int64Value",
"max_replicated_merges_in_queue": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_lower_max_size_of_merge": "google.protobuf.Int64Value",
"number_of_free_entries_in_pool_to_execute_mutation": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_min_space_in_pool": "google.protobuf.Int64Value",
"max_bytes_to_merge_at_max_space_in_pool": "google.protobuf.Int64Value",
"min_bytes_for_wide_part": "google.protobuf.Int64Value",
"min_rows_for_wide_part": "google.protobuf.Int64Value",
"cleanup_delay_period": "google.protobuf.Int64Value",
"max_cleanup_delay_period": "google.protobuf.Int64Value",
"merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"max_merge_selecting_sleep_ms": "google.protobuf.Int64Value",
"min_age_to_force_merge_seconds": "google.protobuf.Int64Value",
"min_age_to_force_merge_on_partition_only": "google.protobuf.BoolValue",
"merge_max_block_size": "google.protobuf.Int64Value",
"deduplicate_merge_projection_mode": "DeduplicateMergeProjectionMode",
"lightweight_mutation_projection_mode": "LightweightMutationProjectionMode",
"replicated_deduplication_window": "google.protobuf.Int64Value",
"replicated_deduplication_window_seconds": "google.protobuf.Int64Value",
"fsync_after_insert": "google.protobuf.BoolValue",
"fsync_part_directory": "google.protobuf.BoolValue",
"min_compressed_bytes_to_fsync_after_fetch": "google.protobuf.Int64Value",
"min_compressed_bytes_to_fsync_after_merge": "google.protobuf.Int64Value",
"min_rows_to_fsync_after_merge": "google.protobuf.Int64Value",
"ttl_only_drop_parts": "google.protobuf.BoolValue",
"merge_with_ttl_timeout": "google.protobuf.Int64Value",
"merge_with_recompression_ttl_timeout": "google.protobuf.Int64Value",
"max_number_of_merges_with_ttl_in_pool": "google.protobuf.Int64Value",
"materialize_ttl_recalculate_only": "google.protobuf.BoolValue",
"check_sample_column_is_correct": "google.protobuf.BoolValue",
"allow_remote_fs_zero_copy_replication": "google.protobuf.BoolValue"
},
"compression": [
{
"method": "Method",
"min_part_size": "int64",
"min_part_size_ratio": "double",
"level": "google.protobuf.Int64Value"
}
],
"dictionaries": [
{
"name": "string",
"structure": {
"id": {
"name": "string"
},
"key": {
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"range_min": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"range_max": {
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
},
"attributes": [
{
"name": "string",
"type": "string",
"null_value": "string",
"expression": "string",
"hierarchical": "bool",
"injective": "bool"
}
]
},
"layout": {
"type": "Type",
"size_in_cells": "int64",
"allow_read_expired_keys": "google.protobuf.BoolValue",
"max_update_queue_size": "int64",
"update_queue_push_timeout_milliseconds": "int64",
"query_wait_timeout_milliseconds": "int64",
"max_threads_for_updates": "int64",
"initial_array_size": "int64",
"max_array_size": "int64",
"access_to_key_from_attributes": "google.protobuf.BoolValue"
},
// Includes only one of the fields `fixed_lifetime`, `lifetime_range`
"fixed_lifetime": "int64",
"lifetime_range": {
"min": "int64",
"max": "int64"
},
// end of the list of possible fields
// Includes only one of the fields `http_source`, `mysql_source`, `clickhouse_source`, `mongodb_source`, `postgresql_source`
"http_source": {
"url": "string",
"format": "string",
"headers": [
{
"name": "string",
"value": "string"
}
]
},
"mysql_source": {
"db": "string",
"table": "string",
"port": "int64",
"user": "string",
"password": "string",
"replicas": [
{
"host": "string",
"priority": "int64",
"port": "int64",
"user": "string",
"password": "string"
}
],
"where": "string",
"invalidate_query": "string",
"close_connection": "google.protobuf.BoolValue",
"share_connection": "google.protobuf.BoolValue"
},
"clickhouse_source": {
"db": "string",
"table": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"where": "string",
"secure": "google.protobuf.BoolValue"
},
"mongodb_source": {
"db": "string",
"collection": "string",
"host": "string",
"port": "int64",
"user": "string",
"password": "string",
"options": "string"
},
"postgresql_source": {
"db": "string",
"table": "string",
"hosts": [
"string"
],
"port": "int64",
"user": "string",
"password": "string",
"invalidate_query": "string",
"ssl_mode": "SslMode"
}
// end of the list of possible fields
}
],
"graphite_rollup": [
{
"name": "string",
"patterns": [
{
"regexp": "string",
"function": "string",
"retention": [
{
"age": "int64",
"precision": "int64"
}
]
}
],
"path_column_name": "string",
"time_column_name": "string",
"value_column_name": "string",
"version_column_name": "string"
}
],
"kafka": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
},
"kafka_topics": [
{
"name": "string",
"settings": {
"security_protocol": "SecurityProtocol",
"sasl_mechanism": "SaslMechanism",
"sasl_username": "string",
"sasl_password": "string",
"enable_ssl_certificate_verification": "google.protobuf.BoolValue",
"max_poll_interval_ms": "google.protobuf.Int64Value",
"session_timeout_ms": "google.protobuf.Int64Value",
"debug": "Debug",
"auto_offset_reset": "AutoOffsetReset"
}
}
],
"rabbitmq": {
"username": "string",
"password": "string",
"vhost": "string"
},
"query_masking_rules": [
{
"name": "string",
"regexp": "string",
"replace": "string"
}
],
"query_cache": {
"max_size_in_bytes": "google.protobuf.Int64Value",
"max_entries": "google.protobuf.Int64Value",
"max_entry_size_in_bytes": "google.protobuf.Int64Value",
"max_entry_size_in_rows": "google.protobuf.Int64Value"
},
"jdbc_bridge": {
"host": "string",
"port": "google.protobuf.Int64Value"
},
"mysql_protocol": "google.protobuf.BoolValue",
"custom_macros": [
{
"name": "string",
"value": "string"
}
],
"builtin_dictionaries_reload_interval": "google.protobuf.Int64Value"
}
},
"resources": {
"resource_preset_id": "string",
"disk_size": "int64",
"disk_type_id": "string"
},
"disk_size_autoscaling": {
"planned_usage_threshold": "google.protobuf.Int64Value",
"emergency_usage_threshold": "google.protobuf.Int64Value",
"disk_size_limit": "google.protobuf.Int64Value"
}
},
"zookeeper": {
"resources": {
"resource_preset_id": "string",
"disk_size": "int64",
"disk_type_id": "string"
},
"disk_size_autoscaling": {
"planned_usage_threshold": "google.protobuf.Int64Value",
"emergency_usage_threshold": "google.protobuf.Int64Value",
"disk_size_limit": "google.protobuf.Int64Value"
}
},
"backup_window_start": "google.type.TimeOfDay",
"access": {
"data_lens": "bool",
"web_sql": "bool",
"metrika": "bool",
"serverless": "bool",
"data_transfer": "bool",
"yandex_query": "bool"
},
"cloud_storage": {
"enabled": "bool",
"move_factor": "google.protobuf.DoubleValue",
"data_cache_enabled": "google.protobuf.BoolValue",
"data_cache_max_size": "google.protobuf.Int64Value",
"prefer_not_to_merge": "google.protobuf.BoolValue"
},
"sql_database_management": "google.protobuf.BoolValue",
"sql_user_management": "google.protobuf.BoolValue",
"embedded_keeper": "google.protobuf.BoolValue",
"backup_retain_period_days": "google.protobuf.Int64Value"
},
"network_id": "string",
"health": "Health",
"status": "Status",
"service_account_id": "string",
"maintenance_window": {
// Includes only one of the fields `anytime`, `weekly_maintenance_window`
"anytime": "AnytimeMaintenanceWindow",
"weekly_maintenance_window": {
"day": "WeekDay",
"hour": "int64"
}
// end of the list of possible fields
},
"planned_operation": {
"info": "string",
"delayed_until": "google.protobuf.Timestamp"
},
"security_group_ids": [
"string"
],
"deletion_protection": "bool",
"disk_encryption_key_id": "google.protobuf.StringValue"
}
// end of the list of possible fields
}
An Operation resource. For more information, see Operation.
|
Field |
Description |
|
id |
string ID of the operation. |
|
description |
string Description of the operation. 0-256 characters long. |
|
created_at |
Creation timestamp. |
|
created_by |
string ID of the user or service account who initiated the operation. |
|
modified_at |
The time when the Operation resource was last modified. |
|
done |
bool If the value is false, means the operation is still in progress. If true, the operation is completed, and either error or response is available. |
|
metadata |
Service-specific metadata associated with the operation. |
|
error |
The error result of the operation in case of failure or cancellation. Includes only one of the fields `error`, `response`. The operation result. |
|
response |
The normal response of the operation in case of success. Includes only one of the fields `error`, `response`. The operation result. |
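Cluster creation is a long-running operation: the call returns an Operation with done = false, and the Cluster resource appears in response only once the operation completes. A sketch of inspecting the finished operation, assuming it has been converted to a plain dict (for example via protobuf JSON serialization); the helper name is hypothetical:

# `operation` is assumed to be a dict-shaped view of the Operation resource above.
def extract_cluster_id(operation: dict) -> str:
    if not operation["done"]:
        # Still running; metadata already contains the ID of the future cluster.
        return operation["metadata"]["cluster_id"]
    if "error" in operation:
        # google.rpc.Status with a code and message describing the failure.
        raise RuntimeError(f"cluster creation failed: {operation['error']}")
    return operation["response"]["id"]   # the Cluster resource described below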
CreateClusterMetadata
|
Field |
Description |
|
cluster_id |
string ID of the ClickHouse cluster that is being created. |
Cluster
A ClickHouse Cluster resource. For more information, see the
Cluster section in the Developer's Guide.
|
Field |
Description |
|
id |
string ID of the ClickHouse cluster. |
|
folder_id |
string ID of the folder that the ClickHouse cluster belongs to. |
|
created_at |
Creation timestamp in RFC3339 text format. |
|
name |
string Name of the ClickHouse cluster. |
|
description |
string Description of the ClickHouse cluster. 0-256 characters long. |
|
labels |
object (map<string, string>) Custom labels for the ClickHouse cluster as key:value pairs. Maximum 64 per resource. |
|
environment |
enum Environment Deployment environment of the ClickHouse cluster.
|
|
monitoring[] |
Description of monitoring systems relevant to the ClickHouse cluster. |
|
config |
Configuration of the ClickHouse cluster. |
|
network_id |
string ID of the network that the cluster belongs to. |
|
health |
enum Health Aggregated cluster health.
|
|
status |
enum Status Current state of the cluster.
|
|
service_account_id |
string ID of the service account used for access to Object Storage. |
|
maintenance_window |
Maintenance window for the cluster. |
|
planned_operation |
Planned maintenance operation to be started for the cluster within the nearest maintenance_window, if any. |
|
security_group_ids[] |
string User security groups |
|
deletion_protection |
bool Deletion Protection inhibits deletion of the cluster |
|
disk_encryption_key_id |
ID of the key to encrypt cluster disks. |
Monitoring
Monitoring system metadata.
|
Field |
Description |
|
name |
string Name of the monitoring system. |
|
description |
string Description of the monitoring system. |
|
link |
string Link to the monitoring system charts for the ClickHouse cluster. |
ClusterConfig
|
Field |
Description |
|
version |
string Version of the ClickHouse server software. |
|
clickhouse |
Configuration and resource allocation for ClickHouse hosts. |
|
zookeeper |
Configuration and resource allocation for ZooKeeper hosts. |
|
backup_window_start |
Time to start the daily backup, in the UTC timezone. |
|
access |
Access policy for external services. |
|
cloud_storage |
|
|
sql_database_management |
Whether database management through SQL commands is enabled. |
|
sql_user_management |
Whether user management through SQL commands is enabled. |
|
embedded_keeper |
Whether the cluster should use embedded ClickHouse Keeper instead of ZooKeeper. |
|
backup_retain_period_days |
Retention period of automatically created backups, in days.
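Here backup_window_start is a google.type.TimeOfDay value in UTC, access toggles integrations with external services, and cloud_storage controls hybrid storage. An illustrative fragment of such a configuration (field names as documented; the version string and numeric values are placeholders):

# Illustrative cluster configuration fragment.
config_fragment = {
    "version": "25.3",                                   # placeholder ClickHouse version
    "backup_window_start": {"hours": 1, "minutes": 30},  # google.type.TimeOfDay, UTC
    "access": {
        "web_sql": True,          # allow SQL queries from the management console
        "data_lens": False,
        "data_transfer": True,
    },
    "cloud_storage": {
        "enabled": True,
        "move_factor": 0.1,       # move data to object storage when local free space falls below this share
        "data_cache_enabled": True,
    },
    "backup_retain_period_days": 7,
}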
Clickhouse
|
Field |
Description |
|
config |
Configuration settings of a ClickHouse server. |
|
resources |
Resources allocated to ClickHouse hosts. |
|
disk_size_autoscaling |
Disk size autoscaling settings. |
ClickhouseConfigSet
|
Field |
Description |
|
effective_config |
Required field. Effective configuration (a combination of user-defined configuration and default configuration). |
|
user_config |
Required field. User-defined configuration. |
|
default_config |
Required field. Default configuration. |
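Conceptually, effective_config is the default configuration with every explicitly set user value layered on top. A sketch of that relationship (illustration only, not the service's actual merge logic):

# Illustration only: the effective config is the defaults overridden by user-set values.
def merge_configs(default_config: dict, user_config: dict) -> dict:
    merged = dict(default_config)
    merged.update({k: v for k, v in user_config.items() if v is not None})
    return merged

defaults = {"max_connections": 4096, "keep_alive_timeout": 30}
user = {"max_connections": 1024}
assert merge_configs(defaults, user)["max_connections"] == 1024
assert merge_configs(defaults, user)["keep_alive_timeout"] == 30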
ClickhouseConfig
ClickHouse configuration settings. Supported settings are a subset of settings described
in ClickHouse documentation
|
Field |
Description |
|
background_pool_size |
Sets the number of threads performing background merges and mutations for MergeTree-engine tables. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_merges_mutations_concurrency_ratio |
Sets a ratio between the number of threads and the number of background merges and mutations that can be executed concurrently. For example, if the ratio equals 2 and background_pool_size is set to 16, then ClickHouse can execute 32 background merges concurrently. Default value: 2. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_schedule_pool_size |
The maximum number of threads that will be used for constantly executing some lightweight periodic operations for replicated tables, Kafka streaming, and DNS cache updates. Default value: 512. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_fetches_pool_size |
The maximum number of threads that will be used for fetching data parts from another replica for MergeTree-engine tables in a background. Default value: 32 for versions 25.1 and higher, 16 for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
background_move_pool_size |
The maximum number of threads that will be used for moving data parts to another disk or volume for MergeTree-engine tables in a background. Default value: 8. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_distributed_schedule_pool_size |
The maximum number of threads that will be used for executing distributed sends. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_buffer_flush_schedule_pool_size |
The maximum number of threads that will be used for performing flush operations for Buffer-engine tables in the background. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_message_broker_schedule_pool_size |
The maximum number of threads that will be used for executing background operations for message streaming. Default value: 16. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
background_common_pool_size |
The maximum number of threads that will be used for performing a variety of operations (mostly garbage collection) for MergeTree-engine tables in a background. Default value: 8. Change of the setting is applied with restart on value decrease and without restart on value increase. For details, see ClickHouse documentation |
|
dictionaries_lazy_load |
Lazy loading of dictionaries. If enabled, then each dictionary is loaded on the first use. Otherwise, the server loads all dictionaries at startup. Default value: true for versions 25.1 and higher, false for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
log_level |
enum LogLevel Logging level.
|
|
query_log_retention_size |
The maximum size that query_log can grow to before old data will be removed. If set to 0, automatic removal of query_log data based on size is disabled. Default value: 1073741824 (1 GiB). |
|
query_log_retention_time |
The maximum time that query_log records will be retained before removal. If set to 0, automatic removal of query_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
query_thread_log_enabled |
Enables or disables query_thread_log system table. Default value: true. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
query_thread_log_retention_size |
The maximum size that query_thread_log can grow to before old data will be removed. If set to 0, automatic removal of query_thread_log data based on size is disabled. Default value: 536870912 (512 MiB). |
|
query_thread_log_retention_time |
The maximum time that query_thread_log records will be retained before removal. If set to 0, automatic removal of query_thread_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
part_log_retention_size |
The maximum size that part_log can grow to before old data will be removed. If set to 0, automatic removal of part_log data based on size is disabled. Default value: 536870912 (512 MiB). |
|
part_log_retention_time |
The maximum time that part_log records will be retained before removal. If set to 0, automatic removal of part_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
metric_log_enabled |
Enables or disables metric_log system table. Default value: false for versions 25.1 and higher, true for versions 24.12 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
metric_log_retention_size |
The maximum size that metric_log can grow to before old data will be removed. If set to 0, automatic removal of metric_log data based on size is disabled. Default value: 536870912 (512 MiB). |
|
metric_log_retention_time |
The maximum time that metric_log records will be retained before removal. If set to 0, automatic removal of metric_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
trace_log_enabled |
Enables or disables trace_log system table. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
trace_log_retention_size |
The maximum size that trace_log can grow to before old data will be removed. If set to 0, automatic removal of trace_log data based on size is disabled. Default value: 536870912 (512 MiB). |
|
trace_log_retention_time |
The maximum time that trace_log records will be retained before removal. If set to 0, automatic removal of trace_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
text_log_enabled |
Enables or disables text_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
text_log_retention_size |
The maximum size that text_log can grow to before old data will be removed. If set to 0, automatic removal of text_log data based on size is disabled. Default value: 536870912 (512 MiB). |
|
text_log_retention_time |
The maximum time that text_log records will be retained before removal. If set to 0, automatic removal of text_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
text_log_level |
enum LogLevel Logging level for text_log system table. Default value: TRACE. Change of the setting is applied with restart.
|
|
opentelemetry_span_log_enabled |
Enables or disables opentelemetry_span_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
opentelemetry_span_log_retention_size |
The maximum size that opentelemetry_span_log can grow to before old data will be removed. If set to 0, automatic removal of opentelemetry_span_log data based on size is disabled. Default value: 0. |
|
opentelemetry_span_log_retention_time |
The maximum time that opentelemetry_span_log records will be retained before removal. If set to 0, automatic removal of opentelemetry_span_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
query_views_log_enabled |
Enables or disables query_views_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
query_views_log_retention_size |
The maximum size that query_views_log can grow to before old data will be removed. If set to 0, automatic removal of query_views_log data based on size is disabled. Default value: 0. |
|
query_views_log_retention_time |
The maximum time that query_views_log records will be retained before removal. If set to 0, automatic removal of query_views_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
asynchronous_metric_log_enabled |
Enables or disables asynchronous_metric_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
asynchronous_metric_log_retention_size |
The maximum size that asynchronous_metric_log can grow to before old data will be removed. If set to 0, automatic removal of asynchronous_metric_log data based on size is disabled. Default value: 0. |
|
asynchronous_metric_log_retention_time |
The maximum time that asynchronous_metric_log records will be retained before removal. If set to 0, automatic removal of asynchronous_metric_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
session_log_enabled |
Enables or disables session_log system table. Default value: true for versions 25.3 and higher, false for versions 25.2 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
session_log_retention_size |
The maximum size that session_log can grow to before old data will be removed. If set to 0, automatic removal of session_log data based on size is disabled. Default value: 536870912 (512 MiB) for versions 25.3 and higher, 0 for versions 25.2 and lower. |
|
session_log_retention_time |
The maximum time that session_log records will be retained before removal. If set to 0, automatic removal of session_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
zookeeper_log_enabled |
Enables or disables zookeeper_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
zookeeper_log_retention_size |
The maximum size that zookeeper_log can grow to before old data will be removed. If set to 0, automatic removal of zookeeper_log data based on size is disabled. Default value: 0. |
|
zookeeper_log_retention_time |
The maximum time that zookeeper_log records will be retained before removal. If set to 0, automatic removal of zookeeper_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
asynchronous_insert_log_enabled |
Enables or disables asynchronous_insert_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
asynchronous_insert_log_retention_size |
The maximum size that asynchronous_insert_log can grow to before old data will be removed. If set to 0, automatic removal of asynchronous_insert_log data based on size is disabled. Default value: 0. |
|
asynchronous_insert_log_retention_time |
The maximum time that asynchronous_insert_log records will be retained before removal. If set to 0, automatic removal of asynchronous_insert_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
processors_profile_log_enabled |
Enables or disables processors_profile_log system table. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
processors_profile_log_retention_size |
The maximum size that processors_profile_log can grow to before old data will be removed. If set to 0, automatic removal of processors_profile_log data based on size is disabled. Default value: 0. |
|
processors_profile_log_retention_time |
The maximum time that processors_profile_log records will be retained before removal. If set to 0, automatic removal of processors_profile_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
error_log_enabled |
Enables or disables error_log system table. Default value: false. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
error_log_retention_size |
The maximum size that error_log can grow to before old data will be removed. If set to 0, automatic removal of error_log data based on size is disabled. Default value: 0. |
|
error_log_retention_time |
The maximum time that error_log records will be retained before removal. If set to 0, automatic removal of error_log data based on time is disabled. Default value: 2592000000 (30 days). |
|
access_control_improvements |
Access control settings. |
|
max_connections |
Maximum number of inbound connections. Default value: 4096. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
max_concurrent_queries |
Maximum number of concurrently executed queries. Default value: 500. For details, see ClickHouse documentation |
|
max_table_size_to_drop |
Maximum size of the table that can be deleted using DROP or TRUNCATE query. Default value: 50000000000 (48828125 KiB). For details, see ClickHouse documentation |
|
max_partition_size_to_drop |
Maximum size of the partition that can be deleted using DROP or TRUNCATE query. Default value: 50000000000 (48828125 KiB). For details, see ClickHouse documentation |
|
keep_alive_timeout |
The number of seconds that ClickHouse waits for incoming requests for HTTP protocol before closing the connection. Default value: 30. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
uncompressed_cache_size |
Cache size (in bytes) for uncompressed data used by table engines from the MergeTree family. 0 means disabled. For details, see ClickHouse documentation |
|
mark_cache_size |
Maximum size (in bytes) of the cache of "marks" used by MergeTree tables. For details, see ClickHouse documentation |
|
timezone |
string The server's time zone to be used in DateTime fields conversions. Specified as an IANA identifier. Default value: Europe/Moscow. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
geobase_enabled |
Enables or disables geobase. Default value: false for versions 25.8 and higher, true for versions 25.7 and lower. Change of the setting is applied with restart. |
|
geobase_uri |
string Address of the archive with the user geobase in Object Storage. Change of the setting is applied with restart. |
|
default_database |
The default database. Default value: default. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
total_memory_profiler_step |
Whenever server memory usage becomes larger than every next step in number of bytes, the memory profiler will collect the allocating stack trace. Default value: 0. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
total_memory_tracker_sample_probability |
Allows to collect random allocations and de-allocations and write them to the system.trace_log system table. Default value: 0. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
async_insert_threads |
Maximum number of threads to parse and insert data in background. If set to 0, asynchronous mode is disabled. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
backup_threads |
The maximum number of threads to execute BACKUP requests. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
restore_threads |
The maximum number of threads to execute RESTORE requests. Default value: 16. Change of the setting is applied with restart. For details, see ClickHouse documentation |
|
merge_tree |
Settings for the MergeTree table engine family. Change of the settings of merge_tree is applied with restart. |
|
compression[] |
Data compression settings for MergeTree engine tables. Change of the settings of compression is applied with restart. For details, see ClickHouse documentation |
|
dictionaries[] |
Configuration of external dictionaries. Change of the settings of dictionaries is applied with restart. For details, see ClickHouse documentation |
|
graphite_rollup[] |
Rollup settings for the GraphiteMergeTree engine tables. Change of the settings of graphite_rollup is applied with restart. For details, see ClickHouse documentation |
|
kafka |
Kafka integration settings. Change of the settings of kafka is applied with restart. |
|
kafka_topics[] |
Per-topic Kafka integration settings. Change of the settings of kafka_topics is applied with restart. |
|
rabbitmq |
RabbitMQ integration settings. Change of the settings of rabbitmq is applied with restart. |
|
query_masking_rules[] |
Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs. Change of the settings of query_masking_rules is applied with restart. For details, see ClickHouse documentation |
|
query_cache |
Query cache configuration. Change of the settings of query_cache is applied with restart. |
|
jdbc_bridge |
JDBC bridge configuration for queries to external databases. Change of the settings of jdbc_bridge is applied with restart. For details, see ClickHouse documentation |
|
mysql_protocol |
Enables or disables the MySQL interface on the ClickHouse server. Default value: false. For details, see ClickHouse documentation |
|
custom_macros[] |
Custom ClickHouse macros. |
|
builtin_dictionaries_reload_interval |
The interval in seconds before reloading built-in dictionaries. Default value: 3600. For details, see ClickHouse documentation |
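For illustration only, a sketch of a server configuration fragment that sets a few of the scalar settings above. All values are hypothetical; field names come from the table in this section, and int64 wrapper values are written as quoted numbers, per the protobuf JSON mapping.
  {
    "keep_alive_timeout": "30",
    "uncompressed_cache_size": "8589934592",
    "mark_cache_size": "5368709120",
    "timezone": "UTC",
    "default_database": "default",
    "mysql_protocol": false,
    "builtin_dictionaries_reload_interval": "3600"
  }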
AccessControlImprovements
Access control settings.
For details, see ClickHouse documentation
|
Field |
Description |
|
select_from_system_db_requires_grant |
Sets whether SELECT * FROM system.<table> requires any grants and can be executed by any user. Default value: false. |
|
select_from_information_schema_requires_grant |
Sets whether SELECT * FROM information_schema.<table> requires any grants and can be executed by any user. Default value: false. |
MergeTree
Settings for the MergeTree table engine family.
|
Field |
Description |
|
parts_to_delay_insert |
If the number of active parts in a single partition exceeds the parts_to_delay_insert value, an INSERT artificially slows down. Default value: 1000 for versions 25.1 and higher, 150 for versions 24.12 and lower. For details, see ClickHouse documentation |
|
parts_to_throw_insert |
If the number of active parts in a single partition exceeds the parts_to_throw_insert value, an INSERT is interrupted with the Too many parts error. Default value: 3000 for versions 25.1 and higher, 300 for versions 24.12 and lower. For details, see ClickHouse documentation |
|
inactive_parts_to_delay_insert |
If the number of inactive parts in a single partition in the table exceeds the inactive_parts_to_delay_insert value, an INSERT artificially slows down. Default value: 0. For details, see ClickHouse documentation |
|
inactive_parts_to_throw_insert |
If the number of inactive parts in a single partition exceeds the inactive_parts_to_throw_insert value, an INSERT is interrupted with an error. Default value: 0. For details, see ClickHouse documentation |
|
max_avg_part_size_for_too_many_parts |
The "Too many parts" check according to parts_to_delay_insert and parts_to_throw_insert will be active only if the average Default value: 1073741824 (1 GiB). For details, see ClickHouse documentation |
|
max_parts_in_total |
If the total number of active parts in all partitions of a table exceeds the max_parts_in_total value, an INSERT is interrupted with the Too many parts error. Default value: 20000 for versions 25.2 and higher, 100000 for versions 25.1 and lower. For details, see ClickHouse documentation |
|
max_replicated_merges_in_queue |
How many tasks of merging and mutating parts are allowed simultaneously in ReplicatedMergeTree queue. Default value: 32 for versions 25.8 and higher, 16 for versions 25.7 and lower. For details, see ClickHouse documentation |
|
number_of_free_entries_in_pool_to_lower_max_size_of_merge |
When there is less than the specified number of free entries in the pool (or replicated queue), start to lower the maximum size of merges to process (or to put in the queue). Default value: 8. For details, see ClickHouse documentation |
|
number_of_free_entries_in_pool_to_execute_mutation |
When there is less than the specified number of free entries in the pool, do not execute part mutations. Default value: 20. For details, see ClickHouse documentation |
|
max_bytes_to_merge_at_min_space_in_pool |
The maximum total part size (in bytes) to be merged into one part, with the minimum available resources in the background pool. Default value: 1048576 (1 MiB). For details, see ClickHouse documentation |
|
max_bytes_to_merge_at_max_space_in_pool |
The maximum total parts size (in bytes) to be merged into one part, if there are enough resources available. Default value: 161061273600 (150 GiB). For details, see ClickHouse documentation |
|
min_bytes_for_wide_part |
Minimum number of bytes in a data part that can be stored in Wide format. Default value: 10485760 (10 MiB). For details, see ClickHouse documentation |
|
min_rows_for_wide_part |
Minimum number of rows in a data part that can be stored in Wide format. Default value: 0. For details, see ClickHouse documentation |
|
cleanup_delay_period |
Minimum period to clean old queue logs, blocks hashes and parts. Default value: 30. For details, see ClickHouse documentation |
|
max_cleanup_delay_period |
Maximum period to clean old queue logs, blocks hashes and parts. Default value: 300 (5 minutes). For details, see ClickHouse documentation |
|
merge_selecting_sleep_ms |
Minimum time to wait before trying to select parts to merge again after no parts were selected. A lower setting value triggers the selection of parts to merge more frequently, which results in a larger number of requests to ClickHouse Keeper in large-scale clusters. Default value: 5000 (5 seconds). For details, see ClickHouse documentation |
|
max_merge_selecting_sleep_ms |
Maximum time to wait before trying to select parts to merge again after no parts were selected. A lower setting value triggers the selection of parts to merge more frequently, which results in a larger number of requests to ClickHouse Keeper in large-scale clusters. Default value: 60000 (1 minute). For details, see ClickHouse documentation |
|
min_age_to_force_merge_seconds |
Merge parts if every part in the range is older than the specified value. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_age_to_force_merge_on_partition_only |
Whether min_age_to_force_merge_seconds should be applied only on the entire partition and not on subset. Default value: false. For details, see ClickHouse documentation |
|
merge_max_block_size |
The number of rows that are read from the merged parts into memory. Default value: 8192. For details, see ClickHouse documentation |
|
deduplicate_merge_projection_mode |
enum DeduplicateMergeProjectionMode Determines the behavior of background merges for MergeTree tables with projections. Default value: DEDUPLICATE_MERGE_PROJECTION_MODE_THROW. For details, see ClickHouse documentation
|
|
lightweight_mutation_projection_mode |
enum LightweightMutationProjectionMode Determines the behavior of lightweight deletes for MergeTree tables with projections. Default value: LIGHTWEIGHT_MUTATION_PROJECTION_MODE_THROW. For details, see ClickHouse documentation
|
|
replicated_deduplication_window |
The number of most recently inserted blocks for which ClickHouse Keeper stores hash sums to check for duplicates. Default value: 10000 for versions 25.9 and higher, 1000 for versions from 23.11 to 25.8, 100 for versions 23.10 and lower. For details, see ClickHouse documentation |
|
replicated_deduplication_window_seconds |
The number of seconds after which the hash sums of the inserted blocks are removed from ClickHouse Keeper. Default value: 604800 (7 days). For details, see ClickHouse documentation |
|
fsync_after_insert |
Do fsync for every inserted part. Significantly decreases performance of inserts, not recommended to use with wide parts. Default value: false. For details, see ClickHouse documentation |
|
fsync_part_directory |
Do fsync for part directory after all part operations (writes, renames, etc.). Default value: false. For details, see ClickHouse documentation |
|
min_compressed_bytes_to_fsync_after_fetch |
Minimal number of compressed bytes to do fsync for part after fetch. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_compressed_bytes_to_fsync_after_merge |
Minimal number of compressed bytes to do fsync for part after merge. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
min_rows_to_fsync_after_merge |
Minimal number of rows to do fsync for part after merge. 0 means disabled. Default value: 0. For details, see ClickHouse documentation |
|
ttl_only_drop_parts |
Controls whether data parts are fully dropped in MergeTree tables when all rows in that part have expired according to their TTL settings. Default value: false. For details, see ClickHouse documentation |
|
merge_with_ttl_timeout |
Minimum delay in seconds before repeating a merge with delete TTL. Default value: 14400 (4 hours). For details, see ClickHouse documentation |
|
merge_with_recompression_ttl_timeout |
Minimum delay in seconds before repeating a merge with recompression TTL. Default value: 14400 (4 hours). For details, see ClickHouse documentation |
|
max_number_of_merges_with_ttl_in_pool |
When there is more than specified number of merges with TTL entries in pool, do not assign new merge with TTL. Default value: 2. For details, see ClickHouse documentation |
|
materialize_ttl_recalculate_only |
Only recalculate ttl info when MATERIALIZE TTL. Default value: true for versions 25.2 and higher, false for versions 25.1 and lower. For details, see ClickHouse documentation |
|
check_sample_column_is_correct |
Enables the check at table creation, that the data type of a column for sampling or sampling expression is correct. Default value: true. For details, see ClickHouse documentation |
|
allow_remote_fs_zero_copy_replication |
Setting is automatically enabled if cloud storage is enabled, disabled otherwise. Default value: true. |
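As an illustration, a merge_tree fragment tuning a few of the settings above might look like the following. Values are hypothetical; int64 wrapper values are shown as quoted numbers.
  "merge_tree": {
    "parts_to_delay_insert": "1000",
    "parts_to_throw_insert": "3000",
    "max_parts_in_total": "20000",
    "ttl_only_drop_parts": true,
    "merge_with_ttl_timeout": "14400",
    "replicated_deduplication_window": "10000"
  }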
Compression
Compression settings.
For details, see ClickHouse documentation
|
Field |
Description |
|
method |
enum Method Required field. Compression method to use for the specified combination of min_part_size and min_part_size_ratio.
|
|
min_part_size |
int64 The minimum size of a data part. |
|
min_part_size_ratio |
double The ratio of the data part size to the table size. |
|
level |
Compression level. |
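For illustration, a single compression entry could be sketched as follows. The ZSTD method name is an assumption; check the Method enum for the exact value, and treat all numbers as hypothetical.
  "compression": [
    {
      "method": "ZSTD",
      "min_part_size": "10485760",
      "min_part_size_ratio": 0.01,
      "level": "3"
    }
  ]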
ExternalDictionary
External dictionary configuration.
|
Field |
Description |
|
name |
string Required field. Name of the external dictionary. |
|
structure |
Required field. Structure of the external dictionary. |
|
layout |
Required field. Layout determining how to store the dictionary in memory. For details, see https://clickhouse.com/docs/sql-reference/dictionaries#ways-to-store-dictionaries-in-memory. |
|
fixed_lifetime |
int64 Fixed interval between dictionary updates. Includes only one of the fields fixed_lifetime, lifetime_range. |
|
lifetime_range |
Range of intervals between dictionary updates for ClickHouse to choose from. Includes only one of the fields fixed_lifetime, lifetime_range. |
|
http_source |
HTTP source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
mysql_source |
MySQL source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
clickhouse_source |
ClickHouse source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
mongodb_source |
MongoDB source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
|
postgresql_source |
PostgreSQL source for the dictionary. Includes only one of the fields http_source, mysql_source, clickhouse_source, mongodb_source, postgresql_source. |
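A skeleton of one dictionaries[] entry, showing how the oneof groups fit together: one lifetime field (fixed_lifetime or lifetime_range) and one source field per dictionary. The FLAT layout type is an assumption (check the Type enum), the ellipses stand for the nested blocks sketched after their tables below, and all values are hypothetical.
  "dictionaries": [
    {
      "name": "regions_dict",
      "fixed_lifetime": "300",
      "structure": { ... },
      "layout": { "type": "FLAT" },
      "http_source": { ... }
    }
  ]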
Structure
Configuration of external dictionary structure.
|
Field |
Description |
|
id |
Single numeric key column for the dictionary. |
|
key |
Composite key for the dictionary, consisting of one or more key columns. For details, see ClickHouse documentation |
|
range_min |
Field holding the beginning of the range for dictionaries with RANGE_HASHED layout. For details, see ClickHouse documentation |
|
range_max |
Field holding the end of the range for dictionaries with RANGE_HASHED layout. For details, see ClickHouse documentation |
|
attributes[] |
Description of the fields available for database queries. For details, see ClickHouse documentation |
Id
Numeric key.
|
Field |
Description |
|
name |
string Required field. Name of the numeric key. |
Key
Complex key.
|
Field |
Description |
|
attributes[] |
Attributes of a complex key. |
Attribute
|
Field |
Description |
|
name |
string Required field. Name of the column. |
|
type |
string Required field. Type of the column. |
|
null_value |
string Default value for an element without data (for example, an empty string). |
|
expression |
string Expression, describing the attribute, if applicable. |
|
hierarchical |
bool Indication of hierarchy support. Default value: false. |
|
injective |
bool Indication of injective mapping "id -> attribute". Default value: false. |
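Putting Structure, Id, and Attribute together, a hypothetical structure block for a dictionary keyed by a numeric id could look like this (names and values are illustrative only):
  "structure": {
    "id": { "name": "region_id" },
    "attributes": [
      {
        "name": "region_name",
        "type": "String",
        "null_value": "",
        "hierarchical": false,
        "injective": false
      }
    ]
  }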
Layout
|
Field |
Description |
|
type |
enum Type Required field. Layout type. For details, see ClickHouse documentation
|
|
size_in_cells |
int64 Number of cells in the cache. Rounded up to a power of two. Default value: 1000000000. For details, see ClickHouse documentation |
|
allow_read_expired_keys |
Allows reading expired keys. Default value: false. For details, see ClickHouse documentation |
|
max_update_queue_size |
int64 Max size of update queue. Default value: 100000. For details, see ClickHouse documentation |
|
update_queue_push_timeout_milliseconds |
int64 Max timeout in milliseconds for push update task into queue. Default value: 10. For details, see ClickHouse documentation |
|
query_wait_timeout_milliseconds |
int64 Max wait timeout in milliseconds for update task to complete. Default value: 60000 (1 minute). For details, see ClickHouse documentation |
|
max_threads_for_updates |
int64 Max threads for cache dictionary update. Default value: 4. For details, see ClickHouse documentation |
|
initial_array_size |
int64 Initial dictionary key size. Default value: 1024. For details, see ClickHouse documentation |
|
max_array_size |
int64 Maximum dictionary key size. Default value: 500000. For details, see ClickHouse documentation |
|
access_to_key_from_attributes |
Allows retrieving the key attribute using the dictGetString function. For details, see ClickHouse documentation |
Range
|
Field |
Description |
|
min |
int64 Minimum dictionary lifetime. |
|
max |
int64 Maximum dictionary lifetime. |
HttpSource
|
Field |
Description |
|
url |
string Required field. URL of the source dictionary available over HTTP. |
|
format |
string Required field. The data format. Valid values are all formats supported by ClickHouse SQL dialect |
|
headers[] |
HTTP headers. |
Header
|
Field |
Description |
|
name |
string Required field. Header name. |
|
value |
string Required field. Header value. |
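A hypothetical http_source block with one custom header (URL, token, and header are placeholders; TabSeparated is one of the formats supported by the ClickHouse SQL dialect):
  "http_source": {
    "url": "https://example.com/regions.tsv",
    "format": "TabSeparated",
    "headers": [
      { "name": "Authorization", "value": "Bearer <token>" }
    ]
  }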
MysqlSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
port |
int64 Port to use when connecting to a replica of the dictionary source. |
|
user |
string Required field. Name of the user for replicas of the dictionary source. |
|
password |
string Password of the user for replicas of the dictionary source. |
|
replicas[] |
List of MySQL replicas of the database used as dictionary source. |
|
where |
string Selection criteria for the data in the specified MySQL table. |
|
invalidate_query |
string Query for checking the dictionary status, to pull only updated data. |
|
close_connection |
Determines whether the connection should be closed after each request. |
|
share_connection |
Determines whether the connection can be shared for some requests. |
Replica
|
Field |
Description |
|
host |
string Required field. MySQL host of the replica. |
|
priority |
int64 The priority of the replica that ClickHouse takes into account when connecting. |
|
port |
int64 Port to use when connecting to the replica. |
|
user |
string Name of the MySQL database user. |
|
password |
string Password of the MySQL database user. |
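A hypothetical mysql_source block with two replicas; hosts, credentials, and the where filter are placeholders, and int64 values are shown as quoted numbers:
  "mysql_source": {
    "db": "dictionaries",
    "table": "regions",
    "port": "3306",
    "user": "dict_reader",
    "password": "<password>",
    "replicas": [
      { "host": "mysql1.example.net", "priority": "1" },
      { "host": "mysql2.example.net", "priority": "2" }
    ],
    "where": "enabled = 1",
    "close_connection": true
  }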
ClickhouseSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
host |
string ClickHouse host. |
|
port |
int64 Port to use when connecting to the host. |
|
user |
string Required field. Name of the ClickHouse database user. |
|
password |
string Password of the ClickHouse database user. |
|
where |
string Selection criteria for the data in the specified ClickHouse table. |
|
secure |
Determines whether to use TLS for the connection. |
MongodbSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
collection |
string Required field. Collection name. |
|
host |
string Required field. MongoDB host. |
|
port |
int64 Port to use when connecting to the host. |
|
user |
string Required field. Name of the MongoDB database user. |
|
password |
string Password of the MongoDB database user. |
|
options |
string Dictionary source options. |
PostgresqlSource
|
Field |
Description |
|
db |
string Required field. Database name. |
|
table |
string Required field. Table name. |
|
hosts[] |
string PostgreSQL hosts. |
|
port |
int64 Port to use when connecting to the PostgreSQL hosts. |
|
user |
string Required field. Name of the PostgreSQL database user. |
|
password |
string Password of the PostgreSQL database user. |
|
invalidate_query |
string Query for checking the dictionary status, to pull only updated data. |
|
ssl_mode |
enum SslMode Mode of SSL TCP/IP connection to the PostgreSQL host.
|
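A hypothetical postgresql_source block. The VERIFY_FULL value is an assumption about the SslMode enum spelling; hosts, credentials, and the query are placeholders.
  "postgresql_source": {
    "db": "dictionaries",
    "table": "regions",
    "hosts": [ "pg1.example.net", "pg2.example.net" ],
    "port": "6432",
    "user": "dict_reader",
    "password": "<password>",
    "invalidate_query": "SELECT max(updated_at) FROM regions",
    "ssl_mode": "VERIFY_FULL"
  }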
GraphiteRollup
Rollup settings for the GraphiteMergeTree table engine.
For details, see ClickHouse documentation
|
Field |
Description |
|
name |
string Required field. Name for the specified combination of settings for Graphite rollup. |
|
patterns[] |
Pattern to use for the rollup. |
|
path_column_name |
string The name of the column storing the metric name (Graphite sensor). Default value: Path. |
|
time_column_name |
string The name of the column storing the time of measuring the metric. Default value: Time. |
|
value_column_name |
string The name of the column storing the value of the metric at the time set in time_column_name. Default value: Value. |
|
version_column_name |
string The name of the column storing the version of the metric. Default value: Timestamp. |
Pattern
|
Field |
Description |
|
regexp |
string A pattern for the metric name (a regular expression or DSL). |
|
function |
string The name of the aggregating function to apply to data whose age falls within the range [age, age + precision]. |
|
retention[] |
Retention rules. |
Retention
|
Field |
Description |
|
age |
int64 The minimum age of the data in seconds. |
|
precision |
int64 Precision of determining the age of the data, in seconds. Should be a divisor for 86400 (seconds in a day). |
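A hypothetical graphite_rollup entry that combines a Pattern with two Retention rules. Both precision values (60 and 3600) divide 86400, as required; everything else is illustrative.
  "graphite_rollup": [
    {
      "name": "default_rollup",
      "patterns": [
        {
          "regexp": "^servers\\.",
          "function": "avg",
          "retention": [
            { "age": "0", "precision": "60" },
            { "age": "86400", "precision": "3600" }
          ]
        }
      ],
      "path_column_name": "Path",
      "time_column_name": "Time",
      "value_column_name": "Value",
      "version_column_name": "Timestamp"
    }
  ]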
Kafka
Kafka configuration settings.
For details, see librdkafka documentation
|
Field |
Description |
|
security_protocol |
enum SecurityProtocol Protocol used to communicate with brokers. Default value: SECURITY_PROTOCOL_PLAINTEXT.
|
|
sasl_mechanism |
enum SaslMechanism SASL mechanism to use for authentication. Default value: SASL_MECHANISM_GSSAPI.
|
|
sasl_username |
string SASL username for use with the PLAIN and SASL-SCRAM mechanisms. |
|
sasl_password |
string SASL password for use with the PLAIN and SASL-SCRAM mechanisms. |
|
enable_ssl_certificate_verification |
Enable OpenSSL's builtin broker (server) certificate verification. Default value: true. |
|
max_poll_interval_ms |
Maximum allowed time between calls to consume messages for high-level consumers. Default value: 300000 (5 minutes). |
|
session_timeout_ms |
Client group session and failure detection timeout. The consumer sends periodic heartbeats (heartbeat.interval.ms) to indicate its liveness to the broker. Default value: 45000 (45 seconds). |
|
debug |
enum Debug Debug context to enable.
|
|
auto_offset_reset |
enum AutoOffsetReset Action to take when there is no initial offset in offset store or the desired offset is out of range. Default value: AUTO_OFFSET_RESET_LARGEST.
|
KafkaTopic
|
Field |
Description |
|
name |
string Required field. Kafka topic name. |
|
settings |
Required field. Kafka topic settings. |
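For illustration, a kafka block together with a per-topic override in kafka_topics. The enum value spellings are assumptions; check SecurityProtocol, SaslMechanism, and AutoOffsetReset for the exact names, and treat the usernames and passwords as placeholders.
  "kafka": {
    "security_protocol": "SECURITY_PROTOCOL_SASL_SSL",
    "sasl_mechanism": "SASL_MECHANISM_SCRAM_SHA_512",
    "sasl_username": "ch_consumer",
    "sasl_password": "<password>",
    "enable_ssl_certificate_verification": true,
    "auto_offset_reset": "AUTO_OFFSET_RESET_EARLIEST"
  },
  "kafka_topics": [
    {
      "name": "events",
      "settings": {
        "security_protocol": "SECURITY_PROTOCOL_SASL_SSL",
        "sasl_username": "events_consumer",
        "sasl_password": "<password>"
      }
    }
  ]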
Rabbitmq
RabbitMQ integration settings.
For details, see ClickHouse documentation
|
Field |
Description |
|
username |
string RabbitMQ username. |
|
password |
string RabbitMQ password. |
|
vhost |
string RabbitMQ virtual host. |
QueryMaskingRule
|
Field |
Description |
|
name |
string Name for the rule. |
|
regexp |
string Required field. RE2 compatible regular expression. |
|
replace |
string Substitution string for sensitive data. Default value: six asterisks. |
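A hypothetical query_masking_rules entry that hides card-number-like sequences in logs; the rule name and RE2-compatible regular expression are illustrative.
  "query_masking_rules": [
    {
      "name": "hide_card_numbers",
      "regexp": "[0-9]{4}([- ]?[0-9]{4}){3}",
      "replace": "******"
    }
  ]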
QueryCache
Query cache configuration.
|
Field |
Description |
|
max_size_in_bytes |
The maximum cache size in bytes. Default value: 1073741824 (1 GiB). |
|
max_entries |
The maximum number of SELECT query results stored in the cache. Default value: 1024. |
|
max_entry_size_in_bytes |
The maximum size in bytes SELECT query results may have to be saved in the cache. Default value: 1048576 (1 MiB). |
|
max_entry_size_in_rows |
The maximum number of rows SELECT query results may have to be saved in the cache. Default value: 30000000. |
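A query_cache block spelled out explicitly with the documented default values (int64 wrapper values shown as quoted numbers), purely as an illustration of the field layout:
  "query_cache": {
    "max_size_in_bytes": "1073741824",
    "max_entries": "1024",
    "max_entry_size_in_bytes": "1048576",
    "max_entry_size_in_rows": "30000000"
  }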
JdbcBridge
JDBC bridge configuration for queries to external databases.
|
Field |
Description |
|
host |
string Host of the JDBC bridge. |
|
port |
Port of the JDBC bridge. Default value: 9019. |
Macro
ClickHouse macro.
|
Field |
Description |
|
name |
string Required field. Name of the macro. |
|
value |
string Required field. Value of the macro. |
Resources
|
Field |
Description |
|
resource_preset_id |
string ID of the preset for computational resources available to a host (CPU, memory etc.). |
|
disk_size |
int64 Volume of the storage available to a host, in bytes. |
|
disk_type_id |
string Type of the storage environment for the host.
|
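A hypothetical resources block. The preset and disk type identifiers are placeholders; use the values available in your folder. The disk_size value (34359738368) corresponds to 32 GiB.
  "resources": {
    "resource_preset_id": "s2.micro",
    "disk_size": "34359738368",
    "disk_type_id": "network-ssd"
  }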
DiskSizeAutoscaling
|
Field |
Description |
|
planned_usage_threshold |
Amount of used storage, in percent, that triggers automatic disk scaling during the maintenance window. 0 means disabled. |
|
emergency_usage_threshold |
Amount of used storage, in percent, that triggers immediate automatic disk scaling. 0 means disabled. |
|
disk_size_limit |
Limit on how large the storage for database instances can automatically grow, in bytes. |
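A hypothetical disk_size_autoscaling block: the thresholds are percentages of used storage, the limit is in bytes (107374182400 corresponds to 100 GiB), and all numbers are illustrative.
  "disk_size_autoscaling": {
    "planned_usage_threshold": "70",
    "emergency_usage_threshold": "90",
    "disk_size_limit": "107374182400"
  }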
Zookeeper
|
Field |
Description |
|
resources |
Resources allocated to ZooKeeper hosts. |
|
disk_size_autoscaling |
Disk size autoscaling settings. |
Access
|
Field |
Description |
|
data_lens |
bool Allow exporting data from the cluster to DataLens. |
|
web_sql |
bool Allow SQL queries to the cluster databases from the management console. See SQL queries in the management console for more details. |
|
metrika |
bool Allow importing data from Yandex Metrica and AppMetrica to the cluster. See AppMetrica documentation |
|
serverless |
bool Allow access to the cluster for Serverless. |
|
data_transfer |
bool Allow access for DataTransfer. |
|
yandex_query |
bool Allow access for Query. |
CloudStorage
|
Field |
Description |
|
enabled |
bool Whether to use Object Storage for storing ClickHouse data. |
|
move_factor |
|
|
data_cache_enabled |
|
|
data_cache_max_size |
|
|
prefer_not_to_merge |
MaintenanceWindow
Maintenance window settings.
|
Field |
Description |
|
anytime |
Maintenance operation can be scheduled anytime. Includes only one of the fields anytime, weekly_maintenance_window. The maintenance policy in effect. |
|
weekly_maintenance_window |
Maintenance operation can be scheduled on a weekly basis. Includes only one of the fields anytime, weekly_maintenance_window. The maintenance policy in effect. |
AnytimeMaintenanceWindow
|
Field |
Description |
|
Empty |
|
WeeklyMaintenanceWindow
Weekly maintenance window settings.
|
Field |
Description |
|
day |
enum WeekDay Day of the week (in DDD format).
|
|
hour |
int64 Hour of the day in UTC (in HH format). |
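A hypothetical maintenance_window block that schedules maintenance weekly. The MON value is an assumption about the WeekDay enum spelling; the hour is given in UTC.
  "maintenance_window": {
    "weekly_maintenance_window": {
      "day": "MON",
      "hour": "1"
    }
  }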
MaintenanceOperation
A planned maintenance operation.
|
Field |
Description |
|
info |
string Information about this maintenance operation. |
|
delayed_until |
Time until which this maintenance operation is delayed. |