Overview
Managing Apache Kafka on a platform like Confluent Cloud provides significant advantages in scalability and operational simplicity, thanks to its fully managed services.
This post details the process of integrating Kpow with Confluent Cloud resources such as Kafka brokers, Schema Registry, Kafka Connect, and ksqlDB. Once Kpow is up and running, we will explore its powerful user interface to monitor and interact with our Confluent Cloud Kafka cluster.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.
Launch a Kpow Instance
To integrate Kpow with Confluent Cloud, we will first prepare a configuration file (e.g., confluent-trial.env). This file will define Kpow's connection settings for the Confluent Cloud resources and include the Kpow license details. Before proceeding, confirm that a valid Kpow license is in place, whether we're using the Community or Enterprise edition.
The primary section of this configuration file details the connection settings Kpow needs to securely communicate with the Confluent Cloud environment, encompassing Kafka, Schema Registry, Kafka Connect, and ksqlDB.
To achieve the objectives outlined in this guide - monitoring brokers and topics, creating a topic, producing messages to it, and consuming them from a Confluent Cloud cluster - the Kafka Cluster Connection section is mandatory. It provides the fundamental parameters Kpow requires to establish a secure, authenticated link to the target Confluent Cloud environment. Without this vital configuration, Kpow will be unable to discover brokers or perform any data operations such as creating topics, producing messages, or consuming them - the core functionality this walkthrough sets out to configure. The specific values required, particularly the BOOTSTRAP server addresses and the API Key/Secret pair used within SASL_JAAS_CONFIG (acting as the username and password for SASL/PLAIN authentication), are unique to the Confluent Cloud environment being configured. The official Confluent Cloud documentation covers generating these API keys, the permissions Kpow's operations require, and locating the bootstrap server information for the environment.
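If these credentials don't exist yet, the Confluent CLI is one way to generate them. The sketch below is illustrative rather than part of the original setup: it assumes the Confluent CLI is installed and logged in, and lkc-xxxxx stands in for your cluster ID.

# List clusters to find the cluster ID, then describe it to get the bootstrap endpoint
confluent kafka cluster list
confluent kafka cluster describe lkc-xxxxx

# Create an API key scoped to that cluster; the resulting key/secret pair becomes
# the username/password inside SASL_JAAS_CONFIG
confluent api-key create --resource lkc-xxxxx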
Kafka Cluster Connection
- ENVIRONMENT_NAME=Confluent Cloud: A label shown in the Kpow UI to identify this environment.
- BOOTSTRAP=<bootstrap-server-addresses>: The address(es) of the Kafka cluster's bootstrap servers. These are used by Kpow to discover brokers and establish a connection.
- SECURITY_PROTOCOL=SASL_SSL: Specifies that communication with the Kafka cluster uses SASL over SSL for secure authentication and encryption.
- SASL_MECHANISM=PLAIN: Indicates the use of the PLAIN mechanism for SASL authentication.
- SASL_JAAS_CONFIG=...: Contains the username and password used to authenticate with Confluent Cloud using the PLAIN mechanism.
- SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https: Ensures the broker's identity is verified via hostname verification, as required by Confluent Cloud.
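Before handing these settings to Kpow, it can be worth verifying them independently. A minimal sketch using kcat (an assumption: kcat is installed locally; the bootstrap address and key/secret are placeholders) that lists cluster metadata over the same SASL_SSL/PLAIN configuration:

kcat -b <bootstrap-server-addresses> \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username=<cluster-api-key> \
  -X sasl.password=<cluster-api-secret> \
  -L

If the broker list prints, the same values will work in the Kpow configuration below.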
API Key for Enhanced Confluent Features
- CONFLUENT_API_KEY and CONFLUENT_API_SECRET: Used to authenticate with Confluent Cloud's REST APIs for additional metadata or control plane features.
- CONFLUENT_DISK_MODE=COMPLETE: Instructs Kpow to use full disk-based persistence, useful in managed cloud environments where Kpow runs remotely from the Kafka cluster.
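As a rough sanity check that a cloud-level API key pair is valid, you can call a Confluent Cloud REST endpoint directly. This is only an illustrative sketch: the cmk/v2 endpoint and the env-xxxxx environment ID are assumptions, and Kpow's own use of these credentials may target different Confluent APIs.

curl -s -u "<confluent-api-key>:<confluent-api-secret>" \
  "https://api.confluent.cloud/cmk/v2/clusters?environment=env-xxxxx"

A JSON listing of clusters indicates the key pair authenticates successfully.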
Schema Registry Integration
- SCHEMA_REGISTRY_NAME=...: Display name for the Schema Registry in the Kpow UI.
- SCHEMA_REGISTRY_URL=...: The HTTPS endpoint of the Confluent Schema Registry.
- SCHEMA_REGISTRY_AUTH=USER_INFO: Specifies the authentication method (in this case, basic user info).
- SCHEMA_REGISTRY_USER / SCHEMA_REGISTRY_PASSWORD: The credentials to authenticate with the Schema Registry.
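These same credentials can be checked with a direct call to the Schema Registry's standard REST API; a quick sketch with placeholder values:

curl -s -u "<schema-registry-username>:<schema-registry-password>" \
  "<schema-registry-url>/subjects"

An empty array ([]) is a perfectly healthy response for a registry with no subjects yet.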
Kafka Connect Integration
- CONNECT_REST_URL=...: The URL of the Kafka Connect REST API.
- CONNECT_AUTH=BASIC: Indicates that basic authentication is used to secure the Connect endpoint.
- CONNECT_BASIC_AUTH_USER / CONNECT_BASIC_AUTH_PASS: The credentials for accessing the Kafka Connect REST interface.
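Likewise, the Connect endpoint can be probed with the standard Kafka Connect REST API (placeholders as before):

curl -s -u "<connect-basic-auth-username>:<connect-basic-auth-password>" \
  "<connect-rest-url>/connectors"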
ksqlDB Integration
- KSQLDB_NAME=...: A label for the ksqlDB instance as shown in Kpow.
- KSQLDB_HOST and KSQLDB_PORT: Define the location of the ksqlDB server.
- KSQLDB_USE_TLS=true: Enables secure communication with ksqlDB via TLS.
- KSQLDB_BASIC_AUTH_USER / KSQLDB_BASIC_AUTH_PASSWORD: Authentication credentials for ksqlDB.
- KSQLDB_USE_ALPN=true: Enables Application-Layer Protocol Negotiation (ALPN), which is typically required when connecting to Confluent-hosted ksqlDB endpoints over HTTPS.
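ksqlDB's server info endpoint offers a quick reachability check. A sketch with placeholder host and credentials (curl negotiates ALPN automatically as part of the TLS handshake):

curl -s -u "<ksqldb-basic-auth-username>:<ksqldb-basic-auth-password>" \
  "https://<ksqldb-host>:443/info"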
The configuration file also includes Kpow license details, which, as mentioned, are necessary to activate and run Kpow.
## Confluent Cloud Configuration
ENVIRONMENT_NAME=Confluent Cloud
BOOTSTRAP=<bootstrap-server-addresses>
SECURITY_PROTOCOL=SASL_SSL
SASL_MECHANISM=PLAIN
SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="<cluster-api-key>" password="<cluster-api-secret>";
SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https
CONFLUENT_API_KEY=<confluent-api-key>
CONFLUENT_API_SECRET=<confluent-api-secret>
CONFLUENT_DISK_MODE=COMPLETE
SCHEMA_REGISTRY_NAME=Confluent Schema Registry
SCHEMA_REGISTRY_URL=<schema-registry-url>
SCHEMA_REGISTRY_AUTH=USER_INFO
SCHEMA_REGISTRY_USER=<schema-registry-username>
SCHEMA_REGISTRY_PASSWORD=<schema-registry-password>
CONNECT_REST_URL=<connect-rest-url>
CONNECT_AUTH=BASIC
CONNECT_BASIC_AUTH_USER=<connect-basic-auth-username>
CONNECT_BASIC_AUTH_PASS=<connect-basic-auth-password>
KSQLDB_NAME=Confluent ksqlDB
KSQLDB_HOST=<ksqldb-host>
KSQLDB_PORT=443
KSQLDB_USE_TLS=true
KSQLDB_BASIC_AUTH_USER=<ksqldb-basic-auth-username>
KSQLDB_BASIC_AUTH_PASSWORD=<ksqldb-basic-auth-password>
KSQLDB_USE_ALPN=true

## Your License Details
LICENSE_ID=<license-id>
LICENSE_CODE=<license-code>
LICENSEE=<licensee>
LICENSE_EXPIRY=<license-expiry>
LICENSE_SIGNATURE=<license-signature>
Once the confluent-trial.env file is prepared, we can launch the Kpow instance using the following docker run command:
docker run --pull=always -p 3000:3000 --name kpow \
  --env-file confluent-trial.env -d factorhouse/kpow-ce:latest
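Once the container starts, two quick checks confirm that Kpow came up cleanly (the curl probe is just a convenience; any browser pointed at port 3000 works too):

# Follow Kpow's startup logs; connection progress to Confluent Cloud appears here
docker logs -f kpow

# The UI should answer on port 3000 once startup completes
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000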
Monitor and Manage Resources
Once Kpow is launched, we can explore how it enables us to monitor brokers, create a topic, produce a message to it, and observe that message being consumed, all within its user-friendly UI.
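For readers who prefer to cross-check the UI from a terminal, the same round trip can be sketched with the Confluent CLI and kcat. This is an assumption-level example: the kpow-test topic name and lkc-xxxxx cluster ID are placeholders, and the UI steps above need none of it.

# Create a test topic (the name is arbitrary)
confluent kafka topic create kpow-test --cluster lkc-xxxxx

# Produce a message over the same SASL_SSL/PLAIN settings used by Kpow
echo 'hello from the CLI' | kcat -b <bootstrap-server-addresses> \
  -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
  -X sasl.username=<cluster-api-key> -X sasl.password=<cluster-api-secret> \
  -t kpow-test -P

# Consume it back from the beginning of the topic, then exit
kcat -b <bootstrap-server-addresses> \
  -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
  -X sasl.username=<cluster-api-key> -X sasl.password=<cluster-api-secret> \
  -t kpow-test -C -o beginning -e

Both the produced message and the consumer activity should then be visible in Kpow's UI.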
Conclusion
By following these steps, we've successfully launched Kpow with Docker and established its connection to our Confluent Cloud environment. The key to this was the confluent-trial.env file, where we defined Kpow's license details and the essential connection settings for Kafka, Schema Registry, Kafka Connect, and ksqlDB.
The subsequent demonstration highlighted Kpow's intuitive user interface and its core capabilities, allowing us to effortlessly monitor Kafka brokers, create topics, produce messages, and observe their consumption within our Confluent Cloud cluster. This integrated setup not only simplifies the operational aspects of managing Kafka in the cloud but also empowers users with deep visibility and control, ultimately leading to more robust and efficient event streaming architectures. With Kpow connected to Confluent Cloud, we are now well-equipped to manage and optimize Kafka deployments effectively.