Deploying the Apache Kafka® web interface
You can deploy the UI for Apache Kafka® in two ways:
- In a Docker container on a Yandex Cloud virtual machine. This option is cheaper but less reliable, which makes it more suitable for getting started with the UI for Apache Kafka®.
- In a Yandex Managed Service for Kubernetes cluster. This option is more expensive and more reliable, which makes it suitable for consistent and long-term use of the web interface.
Deploying in Docker containers
To deploy the UI for Apache Kafka® in a Docker container:
- Install additional dependencies.
- Create a TrustStore.
- Set up the UI for Apache Kafka®.
If you no longer need the resources you created, delete them.
Getting started
Prepare the infrastructure:
- Configure a security group for your Managed Service for Apache Kafka® cluster and VM so that you can connect to topics from a cloud-based VM.
- Create a Managed Service for Apache Kafka® cluster. When creating it, specify the configured security group.
- In the network hosting the Managed Service for Apache Kafka® cluster, create a VM running Ubuntu 22.04 with a public IP address and the configured security group.
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables (a sketch is given after this list) or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the kafka-ui-via-docker.tf configuration file to the same working directory. This file describes:
  - Network.
  - Subnet.
  - VM running Ubuntu 22.04.
  - Default security group and rules required to connect to the cluster and VM from the internet.
  - Managed Service for Apache Kafka® cluster.
  - Apache Kafka® user.
- Specify the variable values in the kafka-ui-via-docker.tf file.
- Make sure the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
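If you chose to pass the authentication credentials through environment variables, one possible sketch is shown below. The variable names are the ones read by the Yandex Cloud Terraform provider, and the token is obtained with the yc CLI; if you authenticate with a service account key instead, your setup will differ:
  # Export the credentials for the Yandex Cloud Terraform provider
  export YC_TOKEN=$(yc iam create-token)
  export YC_CLOUD_ID=<cloud_ID>
  export YC_FOLDER_ID=<folder_ID>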
Install additional dependencies
- Connect to the VM over SSH:
  ssh <username>@<VM_public_IP_address>
  Where <username> is the VM account username. You can find the VM's public IP address in the management console, on the VM page.
- To check that the Managed Service for Apache Kafka® cluster is available, connect to one of its hosts with the KAFKA role:
  telnet <host_FQDN> 9091
  You can view the FQDN in the management console:
  - Go to the cluster page.
  - Go to Hosts.
  - Copy the value in the Host FQDN column, in the row of the host with the KAFKA role.
  If the cluster is available, you will get this message:
  Connected to <host_FQDN>
  After this, you can abort the command, as it does not complete but awaits data transfer.
- Install Docker (an optional check of the installation is sketched after this list):
  sudo apt update && sudo apt install docker.io
- Install keytool to manage keys and certificates:
  sudo apt install openjdk-19-jre-headless
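To confirm that the Docker daemon is running and can start containers, you can run a throwaway test container; this quick check is not part of the original steps:
  # The hello-world image prints a greeting and exits
  sudo docker run --rm hello-world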
Create a TrustStore
When deploying the UI for Apache Kafka® in a Docker container, run the TrustStore commands on the VM.
A TrustStore is a store of trusted certificates in JKS format. The client uses it to verify the server it connects to: the server's certificate is checked against the certificates kept in the TrustStore. The client's own private key and certificate, by contrast, are kept in a KeyStore.
In the example below, the TrustStore is used to connect to a Managed Service for Apache Kafka® cluster. Without it, the Apache Kafka® web interface will have no information about the cluster.
To use TrustStore:
- Download the SSL certificate:
  sudo mkdir -p /usr/local/share/ca-certificates/Yandex/ && \
  sudo wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" \
       --output-document /usr/local/share/ca-certificates/Yandex/YandexCA.crt && \
  sudo chmod 0655 /usr/local/share/ca-certificates/Yandex/YandexCA.crt
- Create a directory named /truststore:
  mkdir /truststore
  It will store the truststore.jks file. You need a separate directory so that the file path is correctly recognized in commands and configuration files.
- Import the YandexCA.crt certificate into the truststore.jks file:
  sudo keytool -import \
       -file /usr/local/share/ca-certificates/Yandex/YandexCA.crt \
       -alias "kafka-ui-cert" \
       -keystore /truststore/truststore.jks
  You will be prompted to create a password. Memorize it: you will need it to deploy the Apache Kafka® web interface. An optional check of the import is shown after this list.
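To confirm that the certificate ended up in the TrustStore, you can list the entry; this optional check uses the alias and password from the previous step:
  # Enter the TrustStore password when prompted
  sudo keytool -list \
       -keystore /truststore/truststore.jks \
       -alias "kafka-ui-cert"
The output should contain a trustedCertEntry record for kafka-ui-cert.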
Set up the UI for Apache Kafka®
- On the VM, run the Docker container to deploy your web interface in:
  sudo docker run -it -p 8080:8080 \
       -e DYNAMIC_CONFIG_ENABLED=true \
       -e KAFKA_CLUSTERS_0_NAME=<cluster_name> \
       -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<host_FQDN>:9091 \
       -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
       -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=PLAIN \
       -e KAFKA_CLUSTERS_0_PROPERTIES_CLIENT_DNS_LOOKUP=use_all_dns_ips \
       -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="<user_name>" password="<user_password>";' \
       -e KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/truststore/truststore.jks \
       -e KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD=<TrustStore_password> \
       -v /truststore/truststore.jks:/truststore/truststore.jks \
       provectuslabs/kafka-ui
  Specify the following in the environment variables:
  - KAFKA_CLUSTERS_0_NAME: Managed Service for Apache Kafka® cluster name.
  - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: FQDN of the host with the KAFKA role in the Managed Service for Apache Kafka® cluster.
  - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG, username: Apache Kafka® user name.
  - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG, password: Apache Kafka® user password.
  - KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: Password you set when creating the truststore.jks file.
  Once started, the command does not terminate. While it is running, the UI for Apache Kafka® is available. An optional background-mode variation is sketched after this list.
- On a local machine, open http://<VM_public_IP_address>:8080 in your browser. The UI for Apache Kafka® with Managed Service for Apache Kafka® cluster data will open.
  You can find the VM's public IP address in the management console, on the VM page.
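The -it flags above keep the web interface tied to your terminal session, so it stops when you close the SSH connection. As an optional variation that is not part of the original guide, you can start the same container in the background and let Docker restart it automatically:
  # Same image and options as above, but detached and with a restart policy
  sudo docker run -d --restart=unless-stopped --name kafka-ui -p 8080:8080 \
       <same -e and -v options as in the command above> \
       provectuslabs/kafka-ui
  # Follow the container logs if needed
  sudo docker logs -f kafka-ui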
Deploying in a Managed Service for Kubernetes cluster
To deploy the UI for Apache Kafka® in a Managed Service for Kubernetes cluster:
- Install additional dependencies.
- Create a TrustStore.
- Deploy your application with the UI for Apache Kafka® in the Kubernetes pod.
- Check the result.
If you no longer need the resources you created, delete them.
Getting started
Prepare the infrastructure:
- Configure a single security group:
  - For the Managed Service for Apache Kafka® cluster, so as to enable connection to topics over the internet.
  - For the Managed Service for Kubernetes cluster and node group.
- Create a Managed Service for Apache Kafka® cluster. When creating it, specify the configured security group.
- In the network hosting the Managed Service for Apache Kafka® cluster, create a Managed Service for Kubernetes cluster. When creating it, specify the configured security group and assign a public address to the cluster.
- Create a node group in the Managed Service for Kubernetes cluster. When creating it, specify the configured security group.
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the kafka-ui-via-kubernetes.tf configuration file to the same working directory. This file describes:
  - Network.
  - Subnet.
  - Default security group and rules required to connect to the following from the internet:
    - Managed Service for Apache Kafka® cluster.
    - Managed Service for Kubernetes cluster.
    - Managed Service for Kubernetes node group.
  - Managed Service for Apache Kafka® cluster.
  - Apache Kafka® user.
  - Managed Service for Kubernetes cluster.
  - Managed Service for Kubernetes node group.
- Specify the variable values in the kafka-ui-via-kubernetes.tf file.
- Make sure the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Install additional dependencies
On a local machine:
- Install kubectl and configure it to work with the created cluster. One possible configuration command is sketched after this list.
- To check that the Managed Service for Apache Kafka® cluster is available, connect to one of its hosts with the KAFKA role:
  telnet <host_FQDN> 9091
  You can view the FQDN in the management console:
  - Go to the cluster page.
  - Go to Hosts.
  - Copy the value in the Host FQDN column, in the row of the host with the KAFKA role.
  If the cluster is available, you will get this message:
  Connected to <host_FQDN>
  After this, you can abort the command, as it does not complete but awaits data transfer.
- Install keytool to manage keys and certificates:
  sudo apt update && sudo apt install openjdk-19-jre-headless
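One way to configure kubectl for the new cluster is through the Yandex Cloud CLI. Treat this as a sketch: it assumes the yc CLI is installed, and <cluster_name> is the name of your Managed Service for Kubernetes cluster:
  # Add the cluster credentials to the local kubectl configuration (external endpoint)
  yc managed-kubernetes cluster get-credentials <cluster_name> --external
  # Check that kubectl can reach the cluster
  kubectl cluster-info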
Create a TrustStore
When deploying the UI for Apache Kafka® in a Managed Service for Kubernetes cluster, run the TrustStore commands on your local machine.
A TrustStore is a store of trusted certificates in JKS format. The client uses it to verify the server it connects to: the server's certificate is checked against the certificates kept in the TrustStore. The client's own private key and certificate, by contrast, are kept in a KeyStore.
In the example below, the TrustStore is used to connect to a Managed Service for Apache Kafka® cluster. Without it, the Apache Kafka® web interface will have no information about the cluster.
To use TrustStore:
- Download the SSL certificate:
  sudo mkdir -p /usr/local/share/ca-certificates/Yandex/ && \
  sudo wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" \
       --output-document /usr/local/share/ca-certificates/Yandex/YandexCA.crt && \
  sudo chmod 0655 /usr/local/share/ca-certificates/Yandex/YandexCA.crt
- Create a directory named /truststore:
  mkdir /truststore
  It will store the truststore.jks file. You need a separate directory so that the file path is correctly recognized in commands and configuration files.
- Import the YandexCA.crt certificate into the truststore.jks file:
  sudo keytool -import \
       -file /usr/local/share/ca-certificates/Yandex/YandexCA.crt \
       -alias "kafka-ui-cert" \
       -keystore /truststore/truststore.jks
  You will be prompted to create a password. Memorize it: you will need it to deploy the Apache Kafka® web interface.
Deploy your application with the UI for Apache Kafka® in the Kubernetes pod
- To deliver the truststore.jks file to the Kubernetes pod, create a secret containing this file:
  kubectl create secret generic truststore --from-file=/truststore/truststore.jks
- Create a file named kafka-ui-configMap.yaml with the configMap configuration. It contains information about the Managed Service for Apache Kafka® cluster and TrustStore:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: kafka-ui-values
  data:
    KAFKA_CLUSTERS_0_NAME: <Apache Kafka®_cluster_name>
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: <host_FQDN>:9091
    KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_SSL
    KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
    KAFKA_CLUSTERS_0_PROPERTIES_CLIENT_DNS_LOOKUP: use_all_dns_ips
    KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="<user_name>" password="<user_password>";'
    KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION: /truststore/truststore.jks
    KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: <TrustStore_password>
    AUTH_TYPE: "DISABLED"
    MANAGEMENT_HEALTH_LDAP_ENABLED: "FALSE"
  Specify the following in the environment variables:
  - KAFKA_CLUSTERS_0_NAME: Managed Service for Apache Kafka® cluster name.
  - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: FQDN of the host with the KAFKA role in the Managed Service for Apache Kafka® cluster.
  - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG, username: Apache Kafka® user name.
  - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG, password: Apache Kafka® user password.
  - KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: Password you set when creating the truststore.jks file.
- Create a file named kafka-ui-pod.yaml with the configuration of the pod to deploy your application with the UI for Apache Kafka® in:
  apiVersion: v1
  kind: Pod
  metadata:
    name: kafka-ui-pod
  spec:
    containers:
      - name: kafka-ui-pod
        image: provectuslabs/kafka-ui
        envFrom:
          - configMapRef:
              name: kafka-ui-values
        volumeMounts:
          - name: truststore
            mountPath: "/truststore"
            readOnly: true
    volumes:
      - name: truststore
        secret:
          secretName: truststore
      - name: kafka-ui-configmap
        configMap:
          name: kafka-ui-values
- Apply the configMap configuration:
  kubectl apply -f kafka-ui-configMap.yaml
- Apply the pod configuration:
  kubectl apply -f kafka-ui-pod.yaml
  An optional check of the created objects is shown after this list.
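Before checking the result, you can optionally make sure that all three objects exist in the cluster; this extra check is not part of the original steps:
  # The secret, the ConfigMap, and the pod should all be listed
  kubectl get secret truststore
  kubectl get configmap kafka-ui-values
  kubectl get pod kafka-ui-pod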
Check the result
- View the pod logs to make sure the UI for Apache Kafka® is deployed successfully:
  kubectl logs kafka-ui-pod
  The result contains lines like these (preceded by the UI for Apache Kafka® ASCII-art banner):
  2024-01-23 12:13:25,648 INFO  [background-preinit] o.h.v.i.u.Version: HV000001: Hibernate Validator 8.0.0.Final
  2024-01-23 12:13:25,745 INFO  [main] c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 17.0.6 with PID 1 (/kafka-ui-api.jar started by kafkaui in /)
  2024-01-23 12:13:25,746 DEBUG [main] c.p.k.u.KafkaUiApplication: Running with Spring Boot v3.0.6, Spring v6.0.8
  2024-01-23 12:13:25,747 INFO  [main] c.p.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"
- Forward port 8080 of the UI for Apache Kafka® pod to your local machine:
  kubectl --namespace default port-forward kafka-ui-pod 8080:8080
- In your browser, open http://127.0.0.1:8080/. The UI for Apache Kafka® with Managed Service for Apache Kafka® cluster data will open.
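If you also want a command-line check, you can query the forwarded port from a second terminal while the port-forward command above is running; this optional request should return an HTTP 200 status:
  curl -I http://127.0.0.1:8080/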
Delete the resources you created
Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:
Delete:
- Managed Service for Apache Kafka® cluster
- Virtual machine
- Managed Service for Kubernetes node group
- Managed Service for Kubernetes cluster
- In the terminal window, go to the directory containing the infrastructure plan.
- Delete the kafka-ui-via-docker.tf or the kafka-ui-via-kubernetes.tf configuration file, depending on the deployment method used.
- Make sure the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Run the command to view planned changes:
  terraform plan
  If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
- If you are happy with the planned changes, apply them:
  - Run the command:
    terraform apply
  - Confirm the update of resources.
  - Wait for the operation to complete.
  All the resources described in the configuration file will be deleted.
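If the Yandex Cloud CLI (yc) is installed, you can additionally confirm that nothing is left behind. This optional check simply lists the resource types used in this tutorial; the deleted resources should no longer appear:
  yc managed-kafka cluster list
  yc managed-kubernetes cluster list
  yc compute instance list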