Overview
Running Apache Kafka on a managed platform like Confluent Cloud brings significant advantages in scalability and reduced operational burden, yet robust monitoring and control remain critical. Kpow is a premier tool for achieving comprehensive visibility and streamlined management of Kafka clusters.
This post details how to integrate Kpow with Confluent Cloud resources and demonstrates its power in real-world scenarios. To show Kpow's capabilities in practice, we walk through deploying Kpow locally using Docker Compose: defining the Kpow service in a Compose file and preparing a detailed environment configuration. That environment file is crucial, as it supplies all the connection details for your Confluent Cloud services - Kafka brokers, Schema Registry, Kafka Connect, and ksqlDB - along with essential authentication parameters and Kpow licensing information. Once Kpow is up and running with these configurations, we explore its powerful user interface to monitor and interact with our Confluent Cloud Kafka cluster.
About Kpow
Kpow for Apache Kafka is a powerful tool for managing and monitoring Kafka clusters and their associated resources.
You can follow this demo using either the free Community Edition or the Enterprise Edition. The connection example is also available in our live multi-cluster demo environment.
If you don't have a Kafka cluster to work with, try our local Docker Compose setup, which runs Kpow alongside a 3-node Kafka cluster on your machine - perfect for experimenting in a self-contained environment.
Launch a Kpow Instance
Kpow's Docker Compose environment (kpow-local) offers prebuilt resources for deploying Kpow instances. In this guide, we leverage parts of that setup to streamline deployment. Before proceeding, it's essential to have Docker Engine and the Docker Compose plugin installed; if these are not yet set up, refer to the official installation and post-installation guides.
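You can confirm both prerequisites are in place with the following commands (any recent versions should work):

```sh
docker --version          # Docker Engine
docker compose version    # Docker Compose plugin
```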
To begin, clone the GitHub repository and create a `docker-compose.yml` file in the root of your project. This Compose file launches a Kpow instance, exposing the UI on http://localhost:3000 by mapping container port 3000 to the host. Authentication and authorization are handled via mounted JAAS and RBAC files, while Kafka connection settings and other configurations are supplied through an `.env` file specified in the `env_file` section.
```yaml
services:
  kpow:
    image: factorhouse/kpow:latest
    pull_policy: always
    restart: always
    ports:
      - "3000:3000"
    env_file:
      - resources/kpow/confluent-trial.env
    mem_limit: 2G
    volumes: ## Enterprise edition only
      - ./resources/jaas:/etc/kpow/jaas
      - ./resources/rbac:/etc/kpow/rbac
```
Next, create a configuration file at `resources/kpow/confluent-trial.env` to define Kpow's core settings for connecting to Confluent Cloud. This file specifies authentication and authorization settings, connection details for the Confluent Cloud resources, and the required Kpow licensing information.
Kpow Enterprise Edition supports authentication and authorization out of the box. In this demo, we use the pre-configured Jetty authentication provider and a role-based access control (RBAC) file to manage user permissions.
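Both files ship with the kpow-local repository. For orientation only, the Jetty JAAS file has roughly this shape - a sketch based on Jetty's `PropertyFileLoginModule`; refer to the actual files under `resources/jaas` and `resources/rbac` for the exact contents:

```
/* hash-jaas.conf (sketch): Jetty authenticates Kpow users against a
   properties file with entries of the form "username: password[,role ...]" */
kpow {
  org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
  file="/etc/kpow/jaas/hash-realm.properties";
};
```

The referenced properties file maps each user to a password and one or more roles, and `hash-rbac.yml` grants those roles permissions within Kpow.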
The second part of the configuration file provides all the necessary connection settings for Kpow to securely communicate with the Confluent Cloud environment, including Kafka, Schema Registry, Kafka Connect, and ksqlDB.
To achieve the objectives of this guide - enabling Kpow to monitor brokers and topics, create a topic, produce messages to it, and consume them from a Confluent Cloud cluster - the Kafka Cluster Connection section is mandatory. It provides the fundamental parameters Kpow requires to establish a secure, authenticated link to the target Confluent Cloud environment. Without it, Kpow cannot discover brokers or perform data operations such as creating topics, producing messages, or consuming them - the core functionality this walkthrough sets out to configure. The specific values, particularly the `BOOTSTRAP` server addresses and the API Key/Secret pair used within `SASL_JAAS_CONFIG` (acting as username and password for SASL/PLAIN authentication), are unique to your Confluent Cloud environment. The official Confluent Cloud documentation explains how to generate API keys, the permissions Kpow's operations require, and where to locate the bootstrap server address.
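If you work with the Confluent CLI, the bootstrap address and an API key pair can be obtained along these lines - a sketch, where `lkc-abc123` is a hypothetical cluster ID (find yours with the list command):

```sh
confluent login
confluent kafka cluster list                    # note your cluster ID
confluent kafka cluster describe lkc-abc123     # the Endpoint field holds the bootstrap address
confluent api-key create --resource lkc-abc123  # prints a new API key and secret pair
```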
Kafka Cluster Connection
- `ENVIRONMENT_NAME=Confluent Cloud`: A label shown in the Kpow UI to identify this environment.
- `BOOTSTRAP=<bootstrap-server-addresses>`: The address(es) of the Kafka cluster's bootstrap servers. These are used by Kpow to discover brokers and establish a connection.
- `SECURITY_PROTOCOL=SASL_SSL`: Specifies that communication with the Kafka cluster uses SASL over SSL for secure authentication and encryption.
- `SASL_MECHANISM=PLAIN`: Indicates the use of the PLAIN mechanism for SASL authentication.
- `SASL_JAAS_CONFIG=...`: Contains the username and password used to authenticate with Confluent Cloud using the PLAIN mechanism.
- `SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https`: Ensures the broker's identity is verified via hostname verification, as required by Confluent Cloud.
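With Confluent Cloud, the API key serves as the SASL username and the API secret as the password, so the JAAS entry takes this form (placeholder values shown):

```env
SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api-key>" password="<api-secret>";
```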
API Key for Enhanced Confluent Features
- `CONFLUENT_API_KEY` and `CONFLUENT_API_SECRET`: Used to authenticate with Confluent Cloud's REST APIs for additional metadata or control plane features.
- `CONFLUENT_DISK_MODE=COMPLETE`: Instructs Kpow to use full disk-based persistence, useful in managed cloud environments where Kpow runs remotely from the Kafka cluster.
Schema Registry Integration
- `SCHEMA_REGISTRY_NAME=...`: Display name for the Schema Registry in the Kpow UI.
- `SCHEMA_REGISTRY_URL=...`: The HTTPS endpoint of the Confluent Schema Registry.
- `SCHEMA_REGISTRY_AUTH=USER_INFO`: Specifies the authentication method (in this case, basic user info).
- `SCHEMA_REGISTRY_USER` / `SCHEMA_REGISTRY_PASSWORD`: The credentials to authenticate with the Schema Registry.
Kafka Connect Integration
- `CONNECT_REST_URL=...`: The URL of the Kafka Connect REST API.
- `CONNECT_AUTH=BASIC`: Indicates that basic authentication is used to secure the Connect endpoint.
- `CONNECT_BASIC_AUTH_USER` / `CONNECT_BASIC_AUTH_PASS`: The credentials for accessing the Kafka Connect REST interface.
ksqlDB Integration
- `KSQLDB_NAME=...`: A label for the ksqlDB instance as shown in Kpow.
- `KSQLDB_HOST` and `KSQLDB_PORT`: Define the location of the ksqlDB server.
- `KSQLDB_USE_TLS=true`: Enables secure communication with ksqlDB via TLS.
- `KSQLDB_BASIC_AUTH_USER` / `KSQLDB_BASIC_AUTH_PASSWORD`: Authentication credentials for ksqlDB.
- `KSQLDB_USE_ALPN=true`: Enables Application-Layer Protocol Negotiation (ALPN), which is typically required when connecting to Confluent-hosted ksqlDB endpoints over HTTPS.
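Before launching Kpow, it is worth confirming that the three HTTP endpoints above accept your credentials. A quick check with curl, substituting your own URLs and credentials (`/subjects`, `/connectors`, and `/info` are the standard Schema Registry, Kafka Connect, and ksqlDB REST paths):

```sh
# Schema Registry: should return the list of registered subjects
curl -u "<schema-registry-username>:<schema-registry-password>" "<schema-registry-url>/subjects"

# Kafka Connect: should return the list of deployed connectors
curl -u "<connect-basic-auth-username>:<connect-basic-auth-password>" "<connect-rest-url>/connectors"

# ksqlDB: should return server metadata
curl -u "<ksqldb-basic-auth-username>:<ksqldb-basic-auth-password>" "https://<ksqldb-host>:443/info"
```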
Finally, the configuration includes the Kpow license details required to activate and run Kpow. The complete configuration file looks like this:
## AuthN & AuthZ - Enterprise Edition Only JAVA_TOOL_OPTIONS=-Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf AUTH_PROVIDER_TYPE=jetty RBAC_CONFIGURATION_FILE=/etc/kpow/rbac/hash-rbac.yml ## Confluent Cloud Configuration ENVIRONMENT_NAME=Confluent Cloud BOOTSTRAP=<bootstrap-server-addresses> SECURITY_PROTOCOL=SASL_SSL SASL_MECHANISM=PLAIN SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="username" password="password"; SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https CONFLUENT_API_KEY=<confluent-api-key> CONFLUENT_API_SECRET=<confluent-api-secret> CONFLUENT_DISK_MODE=COMPLETE SCHEMA_REGISTRY_NAME=Confluent Schema Registry SCHEMA_REGISTRY_URL=<schema-registry-url> SCHEMA_REGISTRY_AUTH=USER_INFO SCHEMA_REGISTRY_USER=<schema-registry-username> SCHEMA_REGISTRY_PASSWORD=<schema-registry-password> CONNECT_REST_URL=<connect-rest-url> CONNECT_AUTH=BASIC CONNECT_BASIC_AUTH_USER=<connect-basic-auth-username> CONNECT_BASIC_AUTH_PASS=<connect-basic-auth-password> KSQLDB_NAME=Confluent ksqlDB KSQLDB_HOST=<ksqldb-hose> KSQLDB_PORT=443 KSQLDB_USE_TLS=true KSQLDB_BASIC_AUTH_USER=<ksqldb-basic-auth-username> KSQLDB_BASIC_AUTH_PASSWORD=<ksqldb-basic-auth-password> KSQLDB_USE_ALPN=true ## Your License Details LICENSE_ID=<license-id> LICENSE_CODE=<license-code> LICENSEE=<licensee> LICENSE_EXPIRY=<license-expiry> LICENSE_SIGNATURE=<license-signature>
To deploy the Kpow instance, run the following command using Docker Compose:
```sh
docker compose up -d
```
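Once the container starts, follow the logs to confirm Kpow connects to the cluster, then open the UI:

```sh
docker compose logs -f kpow   # follow the startup logs
# then browse to http://localhost:3000 and log in
```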
Monitor and Manage Resources
Once we have logged in, for instance with the default credentials (`admin` for both the username and password), our next step is to connect to the Confluent Cloud Kafka cluster through Kpow. We then explore how Kpow enables us to monitor brokers, create a topic, produce a message to it, and observe that message being consumed, all within its user-friendly UI.
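To cross-check these UI operations from outside Kpow, the Confluent CLI can read and write the same topic. A minimal sketch, assuming an authenticated CLI session and a hypothetical topic named `orders` created via the Kpow UI:

```sh
# assumes `confluent login` has run, the target cluster is selected,
# and an API key is active for it (see `confluent api-key use`)
confluent kafka topic list                               # the new topic should appear
echo '{"id": 1}' | confluent kafka topic produce orders  # write a test message
confluent kafka topic consume orders --from-beginning    # read it back
```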
Conclusion
By following the steps outlined in this guide, we have launched a Kpow instance using Docker Compose and configured its connection to a Confluent Cloud environment. This involved preparing a `docker-compose.yml` for the Kpow service and a comprehensive `.env` file (`resources/kpow/confluent-trial.env`) detailing authentication credentials, Kpow licensing, and the connection parameters for the Confluent Cloud components: Kafka brokers, Schema Registry, Kafka Connect, and ksqlDB.
The subsequent demonstration highlighted Kpow's intuitive user interface and its core capabilities, allowing us to effortlessly monitor Kafka brokers, create topics, produce messages, and observe their consumption within our Confluent Cloud cluster. This integrated setup not only simplifies the operational aspects of managing Kafka in the cloud but also empowers users with deep visibility and control, ultimately leading to more robust and efficient event streaming architectures. With Kpow connected to Confluent Cloud, we are now well-equipped to manage and optimize Kafka deployments effectively.