Yandex Managed Service for ClickHouse®

yc managed-clickhouse cluster update-config

Written by Yandex Cloud. Updated on March 5, 2025.

Update the configuration of a ClickHouse cluster.

Command Usage

Syntax:

yc managed-clickhouse cluster update-config <CLUSTER-NAME>|<CLUSTER-ID> [Flags...] [Global Flags...]
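
For illustration, a minimal invocation might look like the following; the cluster name example-cluster and the chosen logging level are placeholder assumptions rather than values taken from this reference:

yc managed-clickhouse cluster update-config example-cluster --set log_level=WARNING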

Flags

Flag Description
--id string
ID of the ClickHouse cluster.
--name string
Name of the ClickHouse cluster.
--async
Display information about the operation in progress, without waiting for the operation to complete.
--set key1=value1[,key2=value2][,"key3=val3a,val3b"]
Set a parameter for a ClickHouse cluster. Can be specified multiple times; a combined example follows this list. Acceptable keys:
  • log_level: Logging level for the ClickHouse cluster. Possible values: TRACE, DEBUG, INFORMATION, WARNING, ERROR.

  • merge_tree.replicated_deduplication_window: Number of blocks of hashes to keep in ZooKeeper.

  • merge_tree.replicated_deduplication_window_seconds: Period of time to keep blocks of hashes for.

  • merge_tree.parts_to_delay_insert: If a table contains at least this many active parts in a single partition, artificially slow down inserts into the table.

  • merge_tree.parts_to_throw_insert: If a single partition contains more than this number of active parts, throw a 'Too many parts ...' exception.

  • merge_tree.inactive_parts_to_delay_insert:

  • merge_tree.inactive_parts_to_throw_insert:

  • merge_tree.max_replicated_merges_in_queue: How many tasks of merging and mutating parts are allowed simultaneously in the ReplicatedMergeTree queue.

  • merge_tree.number_of_free_entries_in_pool_to_lower_max_size_of_merge: If there are fewer than the specified number of free entries in the background pool (or replicated queue), start to lower the maximum size of merges to process.

  • merge_tree.max_bytes_to_merge_at_min_space_in_pool: Maximum total size of parts to merge when there is a minimum of free threads in the background pool (or entries in the replication queue).

  • merge_tree.max_bytes_to_merge_at_max_space_in_pool:

  • merge_tree.min_bytes_for_wide_part: Minimum number of bytes in a data part that can be stored in Wide format.

    For more information, see the ClickHouse documentation.

  • merge_tree.min_rows_for_wide_part: Minimum number of rows in a data part that can be stored in Wide format.

    For more information, see the ClickHouse documentation.

  • merge_tree.ttl_only_drop_parts: Enables or disables complete dropping of data parts where all rows are expired in MergeTree tables.

    For more information, see the ClickHouse documentation.

  • merge_tree.allow_remote_fs_zero_copy_replication:

  • merge_tree.merge_with_ttl_timeout:

  • merge_tree.merge_with_recompression_ttl_timeout:

  • merge_tree.max_parts_in_total:

  • merge_tree.max_number_of_merges_with_ttl_in_pool:

  • merge_tree.cleanup_delay_period:

  • merge_tree.number_of_free_entries_in_pool_to_execute_mutation:

  • merge_tree.max_avg_part_size_for_too_many_parts: The 'too many parts' check according to 'parts_to_delay_insert' and 'parts_to_throw_insert' is active only if the average part size (in the relevant partition) is not larger than the specified threshold. If it is larger than the specified threshold, INSERTs are neither delayed nor rejected. This makes it possible to have hundreds of terabytes in a single table on a single server if the parts are successfully merged into larger parts. This does not affect the thresholds on inactive parts or total parts. Default: 1 GiB. Min version: 22.10. See in-depth description in ClickHouse GitHub.

  • merge_tree.min_age_to_force_merge_seconds: Merge parts if every part in the range is older than the value of min_age_to_force_merge_seconds. Default: 0 (disabled). Min version: 22.10. See in-depth description in ClickHouse documentation.

  • merge_tree.min_age_to_force_merge_on_partition_only: Whether min_age_to_force_merge_seconds should be applied only to the entire partition and not to a subset. Default: false. Min version: 22.11. See in-depth description in ClickHouse documentation.

  • merge_tree.merge_selecting_sleep_ms: Sleep time for merge selecting when no part is selected. A lower setting triggers selecting tasks in background_schedule_pool frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters. Default: 5000. Min version: 21.10. See in-depth description in ClickHouse documentation.

  • merge_tree.merge_max_block_size: The number of rows that are read from the merged parts into memory. Default: 8192 See in-depth description in ClickHouse documentation

  • merge_tree.check_sample_column_is_correct: Enables the check at table creation, that the data type of a column for sampling or sampling expression is correct. The data type must be one of unsigned integer types: UInt8, UInt16, UInt32, UInt64. Default: true See in-depth description in ClickHouse documentation

  • merge_tree.max_merge_selecting_sleep_ms: Maximum sleep time for merge selecting. A lower setting will trigger selecting tasks in background_schedule_pool frequently, which results in a large number of requests to ZooKeeper in large-scale clusters. Default: 60000. Min version: 23.6. See in-depth description in ClickHouse GitHub.

  • merge_tree.max_cleanup_delay_period: Maximum period to clean old queue logs, blocks hashes and parts. Default: 300. Min version: 23.6. See in-depth description in ClickHouse GitHub.

  • merge_tree.deduplicate_merge_projection_mode: Determines the behavior of background merges for MergeTree tables with projections. See https://clickhouse.com/docs/en/operations/settings/merge-tree-settings#deduplicate_merge_projection_mode

  • merge_tree.lightweight_mutation_projection_mode: Determines the behavior of lightweight deletes for MergeTree tables with projections.

  • merge_tree.materialize_ttl_recalculate_only: Only recalculate TTL info when executing MATERIALIZE TTL.

  • kafka.security_protocol:

  • kafka.sasl_mechanism:

  • kafka.sasl_username:

  • kafka.sasl_password:

  • kafka.enable_ssl_certificate_verification:

  • kafka.max_poll_interval_ms:

  • kafka.session_timeout_ms:

  • kafka.debug:

  • kafka.auto_offset_reset:

  • kafka_topics.name:

  • kafka_topics.settings.security_protocol:

  • kafka_topics.settings.sasl_mechanism:

  • kafka_topics.settings.sasl_username:

  • kafka_topics.settings.sasl_password:

  • kafka_topics.settings.enable_ssl_certificate_verification:

  • kafka_topics.settings.max_poll_interval_ms:

  • kafka_topics.settings.session_timeout_ms:

  • kafka_topics.settings.debug:

  • kafka_topics.settings.auto_offset_reset:

  • rabbitmq.username: RabbitMQ username

  • rabbitmq.password: RabbitMQ password

  • rabbitmq.vhost: RabbitMQ virtual host

  • max_connections: Maximum number of inbound connections.

  • max_concurrent_queries: Maximum number of simultaneously processed requests.

  • keep_alive_timeout: Number of milliseconds that ClickHouse waits for incoming requests before closing the connection.

  • uncompressed_cache_size: Cache size (in bytes) for uncompressed data used by MergeTree tables.

  • mark_cache_size: Approximate size (in bytes) of the cache of "marks" used by MergeTree tables.

  • max_table_size_to_drop: Maximum size of the table that can be deleted using a DROP query.

  • max_partition_size_to_drop: Maximum size of the partition that can be deleted using a DROP query.

  • builtin_dictionaries_reload_interval: The setting is deprecated and has no effect.

  • timezone: The server's time zone to be used in DateTime fields conversions. Specified as an IANA identifier.

  • geobase_enabled: Enable or disable geobase.

  • geobase_uri: Address of the archive with the user geobase in Object Storage.

  • query_log_retention_size: The maximum size that query_log can grow to before old data will be removed. If set to 0, automatic removal of query_log data based on size is disabled.

  • query_log_retention_time: The maximum time that query_log records will be retained before removal. If set to 0, automatic removal of query_log data based on time is disabled.

  • query_thread_log_enabled: Whether query_thread_log system table is enabled.

  • query_thread_log_retention_size: The maximum size that query_thread_log can grow to before old data will be removed. If set to 0, automatic removal of query_thread_log data based on size is disabled.

  • query_thread_log_retention_time: The maximum time that query_thread_log records will be retained before removal. If set to 0, automatic removal of query_thread_log data based on time is disabled.

  • part_log_retention_size: The maximum size that part_log can grow to before old data will be removed. If set to 0, automatic removal of part_log data based on size is disabled.

  • part_log_retention_time: The maximum time that part_log records will be retained before removal. If set to 0, automatic removal of part_log data based on time is disabled.

  • metric_log_enabled: Whether metric_log system table is enabled.

  • metric_log_retention_size: The maximum size that metric_log can grow to before old data will be removed. If set to 0, automatic removal of metric_log data based on size is disabled.

  • metric_log_retention_time: The maximum time that metric_log records will be retained before removal. If set to 0, automatic removal of metric_log data based on time is disabled.

  • trace_log_enabled: Whether trace_log system table is enabled.

  • trace_log_retention_size: The maximum size that trace_log can grow to before old data will be removed. If set to 0, automatic removal of trace_log data based on size is disabled.

  • trace_log_retention_time: The maximum time that trace_log records will be retained before removal. If set to 0, automatic removal of trace_log data based on time is disabled.

  • text_log_enabled: Whether text_log system table is enabled.

  • text_log_retention_size: The maximum size that text_log can grow to before old data will be removed. If set to 0, automatic removal of text_log data based on size is disabled.

  • text_log_retention_time: The maximum time that text_log records will be retained before removal. If set to 0, automatic removal of text_log data based on time is disabled.

  • text_log_level: Logging level for text_log system table. Possible values: TRACE, DEBUG, INFORMATION, WARNING, ERROR.

  • opentelemetry_span_log_enabled: Enable or disable opentelemetry_span_log system table. Default value: false.

  • opentelemetry_span_log_retention_size: The maximum size that opentelemetry_span_log can grow to before old data will be removed. If set to 0 (default), automatic removal of opentelemetry_span_log data based on size is disabled.

  • opentelemetry_span_log_retention_time: The maximum time that opentelemetry_span_log records will be retained before removal. If set to 0, automatic removal of opentelemetry_span_log data based on time is disabled.

  • query_views_log_enabled: Enable or disable query_views_log system table. Default value: false.

  • query_views_log_retention_size: The maximum size that query_views_log can grow to before old data will be removed. If set to 0 (default), automatic removal of query_views_log data based on size is disabled.

  • query_views_log_retention_time: The maximum time that query_views_log records will be retained before removal. If set to 0, automatic removal of query_views_log data based on time is disabled.

  • asynchronous_metric_log_enabled: Enable or disable asynchronous_metric_log system table. Default value: false.

  • asynchronous_metric_log_retention_size: The maximum size that asynchronous_metric_log can grow to before old data will be removed. If set to 0 (default), automatic removal of asynchronous_metric_log data based on size is disabled.

  • asynchronous_metric_log_retention_time: The maximum time that asynchronous_metric_log records will be retained before removal. If set to 0, automatic removal of asynchronous_metric_log data based on time is disabled.

  • session_log_enabled: Enable or disable session_log system table. Default value: false.

  • session_log_retention_size: The maximum size that session_log can grow to before old data will be removed. If set to 0 (default), automatic removal of session_log data based on size is disabled.

  • session_log_retention_time: The maximum time that session_log records will be retained before removal. If set to 0, automatic removal of session_log data based on time is disabled.

  • zookeeper_log_enabled: Enable or disable zookeeper_log system table. Default value: false.

  • zookeeper_log_retention_size: The maximum size that zookeeper_log can grow to before old data will be removed. If set to 0 (default), automatic removal of zookeeper_log data based on size is disabled.

  • zookeeper_log_retention_time: The maximum time that zookeeper_log records will be retained before removal. If set to 0, automatic removal of zookeeper_log data based on time is disabled.

  • asynchronous_insert_log_enabled: Enable or disable asynchronous_insert_log system table. Default value: false. Minimal required ClickHouse version: 22.10.

  • asynchronous_insert_log_retention_size: The maximum size that asynchronous_insert_log can grow to before old data will be removed. If set to 0 (default), automatic removal of asynchronous_insert_log data based on size is disabled.

  • asynchronous_insert_log_retention_time: The maximum time that asynchronous_insert_log records will be retained before removal. If set to 0, automatic removal of asynchronous_insert_log data based on time is disabled.

  • processors_profile_log_enabled: Enable or disable processors_profile_log system table.

  • processors_profile_log_retention_size: The maximum size that processors_profile_log can grow to before old data will be removed. If set to 0 (default), automatic removal of processors_profile_log data based on size is disabled.

  • processors_profile_log_retention_time: The maximum time that processors_profile_log records will be retained before removal. If set to 0, automatic removal of processors_profile_log data based on time is disabled.

  • background_pool_size:

  • background_merges_mutations_concurrency_ratio: Sets the ratio between the number of threads and the number of background merges and mutations that can be executed concurrently. For example, if the ratio equals 2 and background_pool_size is set to 16, then ClickHouse can execute 32 background merges concurrently. This is possible because background operations can be suspended and postponed; this is needed to give small merges more execution priority. You can only increase this ratio at runtime; to lower it, you have to restart the server. As with the background_pool_size setting, background_merges_mutations_concurrency_ratio can be applied from the default profile for backward compatibility. Default: 2. See in-depth description in ClickHouse documentation.

  • background_schedule_pool_size:

  • background_fetches_pool_size: Sets the number of threads performing background fetches for tables with ReplicatedMergeTree engines. Default value: 8.

    For more information, see the ClickHouse documentation.

  • background_move_pool_size:

  • background_distributed_schedule_pool_size:

  • background_buffer_flush_schedule_pool_size:

  • background_message_broker_schedule_pool_size:

  • background_common_pool_size: The maximum number of threads that will be used for performing a variety of operations (mostly garbage collection) for *MergeTree-engine tables in the background. Default: 8. See in-depth description in ClickHouse documentation.

  • default_database: The default database.

    To get a list of cluster databases, see the Yandex Managed Service for ClickHouse documentation.

  • total_memory_profiler_step: Sets the memory size (in bytes) for a stack trace at every peak allocation step. Default value: 4194304.

    For more information, see the ClickHouse documentation.

  • total_memory_tracker_sample_probability:

  • query_masking_rules.name: Name for the rule.

  • query_masking_rules.regexp: RE2 compatible regular expression. Required.

  • query_masking_rules.replace: Substitution string for sensitive data. Default: six asterisks

  • dictionaries_lazy_load: Lazy loading of dictionaries. Default: true See in-depth description in ClickHouse documentation

  • query_cache.max_size_in_bytes: The maximum cache size in bytes. Default: 1073741824 (1 GiB)

  • query_cache.max_entries: The maximum number of SELECT query results stored in the cache. Default: 1024

  • query_cache.max_entry_size_in_bytes: The maximum size in bytes that SELECT query results may have to be saved in the cache. Default: 1048576 (1 MiB)

  • query_cache.max_entry_size_in_rows: The maximum number of rows that SELECT query results may have to be saved in the cache. Default: 30000000 (30 million)

  • jdbc_bridge.host: Host of the JDBC bridge.

  • jdbc_bridge.port: Port of the JDBC bridge.
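
As referenced above, the --set flag can be repeated to change several of the keys from this list in a single call. The sketch below is illustrative only: the cluster name example-cluster and the specific values are assumptions, not recommended settings.

yc managed-clickhouse cluster update-config example-cluster \
  --set log_level=INFORMATION \
  --set max_connections=4096 \
  --set merge_tree.parts_to_throw_insert=300

Per the syntax shown for --set, several key=value pairs can also be passed in one flag, separated by commas (quoting the pair if the value itself contains commas).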

Global Flags

Flag Description
--profile string
Set the custom configuration file.
--debug
Debug logging.
--debug-grpc
Debug gRPC logging. Very verbose, used for debugging connection problems.
--no-user-output
Disable printing user-intended output to stderr.
--retry int
Enable gRPC retries. By default, retries are enabled with a maximum of 5 attempts.
Pass 0 to disable retries. Pass any negative value for infinite retries.
Even infinite retries are capped with a 2-minute timeout.
--cloud-id string
Set the ID of the cloud to use.
--folder-id string
Set the ID of the folder to use.
--folder-name string
Set the name of the folder to use (will be resolved to an ID).
--endpoint string
Set the Cloud API endpoint (host:port).
--token string
Set the OAuth token to use.
--impersonate-service-account-id string
Set the ID of the service account to impersonate.
--no-browser
Disable opening a browser for authentication.
--format string
Set the output format: text (default), yaml, json, json-rest.
--jq string
Query to select values from the response using jq syntax.
-h, --help
Display help for the command.
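
As a final sketch, global flags combine with the command-specific flags above; the folder ID b1gexample and the query cache value are placeholder assumptions:

yc managed-clickhouse cluster update-config --name example-cluster \
  --set query_cache.max_entries=2048 \
  --async --format json --folder-id b1gexample

With --async, the command prints the operation in progress without waiting for it to complete, and --format json returns that output as JSON.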
