© 2026 Direct Cursus Technology L.L.C.

In this article:

  • Getting a cluster log
  • Getting a log entry stream

Viewing Apache Kafka® cluster logs

Written by
Yandex Cloud
Updated at January 22, 2026

Managed Service for Apache Kafka® allows you to get a cluster log snippet for the selected period and view logs in real time.

Note

Here, the log is the system log of the cluster and its hosts. This log is not related to the partition log for the Apache Kafka® topic where the broker writes messages received from message producers.

Note

Cluster logs are kept for 30 days.

Getting a cluster log

Management console
CLI
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Click the name of your cluster and select the Logs tab.
  4. Select Origin, Hosts, and Severity.
  5. Specify a time period for the log entries you want to view.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

  1. See the description of the CLI command for viewing cluster logs:

    yc managed-kafka cluster list-logs --help
    
  2. Run the following command to get cluster logs (our example only shows some of the available parameters):

    yc managed-kafka cluster list-logs <cluster_name_or_ID> \
       --limit <entry_number_limit> \
       --columns <log_columns_list> \
       --filter <entry_filtration_settings> \
       --since <time_range_left_boundary> \
       --until <time_range_right_boundary>
    

    Where:

    • --limit: Maximum number of entries to output.

    • --columns: List of log columns to include in the output:

      • hostname: Host name.
      • message: Message output by the component.
      • severity: Logging level. Output example: INFO.
      • origin: Message origin. Output examples: kafka_server or kafka_controller.
    • --filter: Entry filter settings, e.g., message.hostname='node1.mdb.yandexcloud.net'.

    • --since: Left boundary of the time range, in RFC-3339 or HH:MM:SS format, or as an interval relative to the current time. Examples: 2006-01-02T15:04:05Z, 15:04:05, 2h, 3h30m ago.

    • --until: Right boundary of the time range, in the same format as --since.

You can get the cluster name and ID with the list of clusters in the folder.
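The relative and absolute time formats above can be combined. A minimal sketch (GNU date assumed; it computes RFC-3339 boundaries for the last two hours and only prints the resulting command, with "my-kafka" as a placeholder cluster name):

```shell
# Compute RFC-3339 boundaries for "the last 2 hours" (GNU date assumed).
SINCE=$(date -u -d '2 hours ago' +%Y-%m-%dT%H:%M:%SZ)
UNTIL=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# Print the command you would run; "my-kafka" is a placeholder cluster name.
echo yc managed-kafka cluster list-logs my-kafka \
     --since "$SINCE" --until "$UNTIL"
```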

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Cluster.listLogs method, e.g., via the following cURL request:

    curl \
        --request GET \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>:logs' \
        --url-query columnFilter=<list_of_output_data_columns> \
        --url-query fromTime=<time_range_left_boundary> \
        --url-query toTime=<time_range_right_boundary>
    

    Where:

    • columnFilter: List of output data columns:

      • hostname: Host name.
      • message: Message output by the component.
      • severity: Logging level. Output example: INFO.
      • origin: Message origin. Output examples: kafka_server or kafka_controller.

      You can specify only one column in the columnFilter parameter. If you want to filter logs by more than one column, provide the parameter several times, one column per parameter.

    • fromTime: Left boundary of the time range, in RFC-3339 format. Example: 2006-01-02T15:04:05Z.
    • toTime: Right boundary of the time range, in the same format as fromTime.

    You can get the cluster ID with the list of clusters in the folder.
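Since each columnFilter parameter carries a single column, filtering by several columns means repeating the parameter. A self-contained sketch of how the resulting query-string fragment is assembled:

```shell
# Build the query-string fragment for two columns: the API expects
# the columnFilter parameter repeated once per column.
COLUMNS="hostname message"
QUERY=""
for c in $COLUMNS; do
  QUERY="${QUERY:+$QUERY&}columnFilter=$c"
done
echo "$QUERY"   # → columnFilter=hostname&columnFilter=message
```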

  3. View the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ClusterService/ListLogs method, e.g., via the following gRPCurl request:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{
                "cluster_id": "<cluster_ID>",
                "column_filter": [<list_of_output_data_columns>],
                "from_time": "<time_range_left_boundary>" \
                "to_time": "<time_range_right_boundary>"
            }' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.ListLogs
    

    Where:

    • column_filter: List of output data columns:

      • hostname: Host name.
      • message: Message output by the component.
      • severity: Logging level. Output example: INFO.
      • origin: Message origin. Output examples: kafka_server or kafka_controller.

      You can specify more than one column in the column_filter parameter if you want to filter logs by multiple columns.

    • from_time: Left boundary of the time range, in RFC-3339 format. Example: 2006-01-02T15:04:05Z.
    • to_time: Right boundary of the time range, in the same format as from_time.

    You can get the cluster ID with the list of clusters in the folder.
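A stray backslash or a missing comma in the -d payload makes grpcurl fail, so it can help to validate the JSON body on its own first. A minimal sketch (values are placeholders, and python3 is assumed to be available):

```shell
# Assemble the ListLogs request body and check that it is valid JSON
# before handing it to grpcurl. "<cluster_ID>" is the usual placeholder.
BODY='{
  "cluster_id": "<cluster_ID>",
  "column_filter": ["hostname", "message"],
  "from_time": "2006-01-02T15:04:05Z"
}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```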

  4. Check the server response to make sure your request was successful.

Getting a log entry stream

This method allows you to get cluster logs in real time.

CLI
REST API
gRPC API

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To view cluster logs in real time, run this command:

yc managed-kafka cluster list-logs <cluster_name_or_ID> --follow

You can get the cluster name and ID with the list of clusters in the folder.
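If you only need a subset of the followed stream, you can also narrow it client-side by piping the output through grep; the input is simulated here with printf so the sketch is self-contained:

```shell
# Client-side narrowing of a followed log stream (simulated input);
# in practice you would pipe `yc managed-kafka cluster list-logs ... --follow`.
printf 'INFO broker ok\nERROR disk full\n' | grep '^ERROR'
# → ERROR disk full
```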

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Cluster.streamLogs method, e.g., via the following cURL request:

    curl \
        --request GET \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>:stream_logs' \
        --url-query columnFilter=<list_of_output_data_columns> \
        --url-query fromTime=<time_range_left_boundary> \
        --url-query toTime=<time_range_right_boundary> \
        --url-query filter=<log_filter>
    

    Where:

    • columnFilter: List of output data columns:

      • hostname: Host name.
      • message: Message output by the component.
      • severity: Logging level. Output example: INFO.
      • origin: Message origin. Output examples: kafka_server or kafka_controller.

      You can specify only one column in the columnFilter parameter. If you want to filter logs by more than one column, provide the parameter several times, one column per parameter.

    • fromTime: Left boundary of the time range, in RFC-3339 format. Example: 2006-01-02T15:04:05Z.
    • toTime: Right boundary of the time range, in the same format as fromTime.

      If you omit this parameter, new logs will be sent to the log stream as they arrive. Semantically, this behavior is similar to tail -f.

    • filter: Log filter. You can filter logs so that the stream contains only the logs you need.

      For more information about filters and their syntax, see the API reference.

      Tip

      A filter can contain quotation marks and other characters. Escape them if you need to.

      Supported filters:

      • message.hostname: Filtering by host name.
      • message.severity: Filtering by logging level.

    You can get the cluster ID with the list of clusters in the folder.
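For example, since the filter value itself contains single quotes, wrapping it in double quotes on the shell command line keeps it intact; a quick self-contained check:

```shell
# The filter string contains single quotes; double-quote it in the shell
# so it reaches the API verbatim.
FILTER="message.severity='ERROR'"
echo "$FILTER"
# → message.severity='ERROR'
```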

  3. View the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ClusterService/StreamLogs method, e.g., via the following gRPCurl request:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d '{
                "cluster_id": "<cluster_ID>",
                "column_filter": [<list_of_output_data_columns>],
                "from_time": "<time_range_left_boundary>",
                "to_time": "<time_range_right_boundary>",
                "filter": "<log_filter>"
            }' \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.kafka.v1.ClusterService.StreamLogs
    

    Where:

    • column_filter: List of output data columns:

      • hostname: Host name.
      • message: Message output by the component.
      • severity: Logging level. Output example: INFO.
      • origin: Message origin. Output examples: kafka_server or kafka_controller.

      You can specify more than one column in the column_filter parameter if you want to filter logs by multiple columns.

    • from_time: Left boundary of the time range, in RFC-3339 format. Example: 2006-01-02T15:04:05Z.
    • to_time: Right boundary of the time range, in the same format as from_time.

      If you omit this parameter, new logs will be sent to the log stream as they arrive. Semantically, this behavior is similar to tail -f.

    • filter: Log filter. You can filter logs so that the stream contains only the logs you need.

      Tip

      A filter can contain quotation marks and other characters. Escape them if you need to.

      Supported filters:

      • message.hostname: Filtering by host name.
      • message.severity: Filtering by logging level.

      For more information about filters and their syntax, see the API reference.

    You can get the cluster ID with the list of clusters in the folder.
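As with ListLogs, validating the request body as JSON before calling grpcurl catches quoting mistakes early. A sketch with a filter and no to_time, so the stream stays open for new entries (values are placeholders; python3 assumed available):

```shell
# StreamLogs body sketch: omitting "to_time" keeps the stream open for
# new entries, like tail -f. Checked for JSON validity only.
BODY='{
  "cluster_id": "<cluster_ID>",
  "column_filter": ["message"],
  "filter": "message.severity='\''ERROR'\''"
}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```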

  4. Check the server response to make sure your request was successful.
