
Developer
Knowledge Center
Empowering engineers with everything they need to build, monitor, and scale real-time data pipelines with confidence.

Integrate Kpow with Oracle Cloud Infrastructure (OCI) Streaming with Apache Kafka
Unlock the full potential of your dedicated OCI Streaming with Apache Kafka cluster. This guide shows you how to integrate Kpow with your OCI brokers and self-hosted Kafka Connect and Schema Registry, unifying them into a single, developer-ready toolkit for complete visibility and control over your entire Kafka ecosystem.
Overview
When working with real-time data on Oracle Cloud Infrastructure (OCI), you have two powerful, Kafka-compatible streaming services to choose from:
- OCI Streaming with Apache Kafka: A dedicated, managed service that gives you full control over your own Apache Kafka cluster.
- OCI Streaming: A serverless, Kafka-compatible platform designed for effortless, scalable data ingestion.
Choosing the dedicated OCI Streaming with Apache Kafka service gives you maximum control and the complete functionality of open-source Kafka. However, this control comes with a trade-off: unlike some other managed platforms, OCI does not provide managed Kafka Connect or Schema Registry services, and instead recommends that users provision them on their own compute instances.
This guide will walk you through integrating Kpow with your OCI Kafka cluster, alongside self-hosted instances of Kafka Connect and Schema Registry. The result is a complete, developer-ready environment that provides full visibility and control over your entire Kafka ecosystem.
❗ Note on the serverless OCI Streaming service: While you can connect Kpow to OCI's serverless offering, its functionality is limited because some Kafka APIs are yet to be implemented. Our OCI provider documentation explains how to connect, and you can review the specific API gaps in the official Oracle documentation.
💡 Explore our setup guides for other leading platforms like Confluent Cloud, Amazon MSK, and Google Cloud Kafka, or emerging solutions like Redpanda, BufStream, and the Instaclustr Platform.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.
Prerequisites
Before creating a Kafka cluster, you must set up the necessary network infrastructure within your OCI tenancy. The Kafka cluster itself is deployed directly into this network, and this setup is also what ensures that your client applications (like Kpow) can securely connect to the brokers.
As detailed in the official OCI documentation, you will need:
- A Virtual Cloud Network (VCN): The foundational network for your cloud resources.
- A Subnet: A subdivision of your VCN where you will launch the Kafka cluster and client VM.
- Security Rules: Ingress rules configured in a Security List or Network Security Group to allow traffic on the required ports. For this guide, which uses SASL/SCRAM, you must open port 9092. If you were using mTLS, you would open port 9093.
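Once the ingress rules are in place, you can sanity-check broker reachability from a VM inside the VCN before going further. This is a minimal sketch using bash's built-in /dev/tcp (no extra tools required); the broker hostname is a placeholder for your cluster's private endpoint.

```shell
#!/usr/bin/env bash
# Quick TCP reachability check for the SASL/SCRAM listener (port 9092).
check_port() {
  local host="$1" port="$2"
  # Opens and immediately closes a TCP connection; fails if unreachable.
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Placeholder: substitute your broker's private endpoint.
BROKER_HOST="${BROKER_HOST:-broker.example.internal}"
if check_port "$BROKER_HOST" 9092; then
  echo "Port 9092 reachable - SASL/SCRAM listener is open"
else
  echo "Port 9092 unreachable - check your Security List / NSG ingress rules"
fi
```

If you opted for mTLS instead, run the same check against port 9093.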
Create a Vault Secret
OCI Kafka leverages the OCI Vault service to securely manage the credentials used for SASL/SCRAM authentication.
First, create a Vault in your desired compartment. Inside that Vault, create a new Secret with the following JSON content, replacing the placeholder values with your desired username and a strong password.
{ "username": "<vault-username>", "password": "<vault-password>" }

Take note of the following details, as you will need them when creating the Kafka cluster:
- SASL SCRAM - Vault compartment: <compartment-name>
- SASL SCRAM - Vault: <vault-name>
- SASL SCRAM - Secret compartment: <compartment-name>
- SASL SCRAM - Secret: <vault-secret-name>
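You can also create the secret from the OCI CLI, which expects the secret content base64-encoded. The sketch below shows the encoding step; the OCIDs are placeholders, and the `oci vault secret create-base64` subcommand should be verified against your installed CLI version.

```shell
#!/usr/bin/env bash
# Build the SASL/SCRAM credential JSON and base64-encode it for OCI Vault.
SECRET_JSON='{ "username": "kafka-admin", "password": "change-me-Str0ng!" }'
SECRET_B64=$(printf '%s' "$SECRET_JSON" | base64 | tr -d '\n')
echo "$SECRET_B64"

# Placeholders: replace the OCIDs with your compartment, vault, and master key.
# oci vault secret create-base64 \
#   --compartment-id ocid1.compartment.oc1..example \
#   --vault-id ocid1.vault.oc1..example \
#   --key-id ocid1.key.oc1..example \
#   --secret-name kafka-sasl-scram \
#   --secret-content-content "$SECRET_B64"
```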
Create IAM Policies
To allow OCI to manage your Kafka cluster and its associated network resources, you must create several IAM policies. These policies grant permissions to both user groups (for administrative actions) and the Kafka service principal (for operational tasks).
The required policies are detailed in the "Required IAM Policies" section of the OCI Kafka documentation. Apply these policies in your tenancy's root compartment to ensure the Kafka service has the necessary permissions.
Create a Kafka Cluster
With the prerequisites in place, you can now create your Kafka cluster from the OCI console.
- Navigate to Developer Services > Application Integration > OCI Streaming with Apache Kafka.
- Click Create cluster and follow the wizard:
- Cluster settings: Provide a name, select your compartment, and choose a Kafka version (e.g., 3.7).
- Broker settings: Choose the number of brokers, the OCPU count per broker, and the block volume storage per broker.
- Cluster configuration: OCI creates a default configuration for the cluster. You can review and edit its properties here. For this guide, add auto.create.topics.enable=true to the default configuration. Note that after creation, the cluster's configuration can only be changed using the OCI CLI or SDK.
- Security settings: This section is for configuring Mutual TLS (mTLS). Since this guide uses SASL/SCRAM, leave this section blank. We will configure security after the cluster is created.
- Networking: Choose the VCN and subnet you configured in the prerequisites.
- Review your settings and click Create. OCI will begin provisioning your dedicated Kafka cluster.
- Once the cluster's status becomes Active, select it from the cluster list page to view its details.
- From the details page, select the Actions menu and then select Update SASL SCRAM.
- In the Update SASL SCRAM panel, select the Vault and the Secret that contain your secure credentials.
- Select Update.
- After the update is complete, return to the Cluster Information section and copy the Bootstrap Servers endpoint for SASL-SCRAM. You will need this for the next steps.
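Before moving on, it is worth confirming that the endpoint accepts your SCRAM credentials. A minimal sketch using the standard Kafka client properties format; the bootstrap address and credentials are the same placeholders used throughout this guide.

```shell
#!/usr/bin/env bash
# Write a client.properties usable by any Kafka CLI tool (kafka-topics.sh, etc.).
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";
EOF
echo "client.properties written"

# Example check (run from a host with the Kafka CLI tools installed):
# kafka-topics.sh --bootstrap-server <BOOTSTRAP_SERVER_ADDRESS> \
#   --command-config client.properties --list
```

A successful `--list` call (even returning an empty list) confirms both network reachability and SASL/SCRAM authentication.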
Launch a Client VM
We need a virtual machine to host Kpow, Kafka Connect, and Schema Registry. This VM must have network access to the Kafka cluster.
- Create Instance & Save SSH Key: Navigate to Compute > Instances and begin to create a new compute instance.
- Select an Ubuntu image.
- In the "Add SSH keys" section, choose the option to "Generate a key pair for me" and click the "Save Private Key" button. This is your only chance to download this key, which is required for SSH access.
- Configure Networking: During the instance creation, configure the networking as follows:
- Placement: Assign the instance to the same VCN as your Kafka cluster, in a subnet that can reach your Kafka brokers.
- Kpow UI Access: Ensure the subnet's security rules allow inbound TCP traffic on port 3000. This opens the port for the Kpow web interface.
- Internet Access: The instance needs outbound access to pull the Kpow Docker image.
- Simple Setup: For development, place the instance in a public subnet with an Internet Gateway.
- Secure (Production): We recommend using a private subnet with a NAT Gateway. This allows outbound connections without exposing the instance to inbound internet traffic.
- Connect and Install Docker: Once the VM is in the "Running" state, use the private key you saved to SSH into its public or private IP address and install Docker.
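On Ubuntu, Docker's convenience script is the quickest installation route. The sketch below generates a small installer; it assumes the image's default user is `ubuntu` (adjust if yours differs) and uses the well-known get.docker.com script.

```shell
#!/usr/bin/env bash
# Generate a Docker installer script for the Ubuntu client VM.
cat > install-docker.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
curl -fsSL https://get.docker.com | sh
usermod -aG docker ubuntu   # run docker without sudo (re-login required)
docker --version
EOF
chmod +x install-docker.sh
echo "Run on the VM as root: sudo ./install-docker.sh"
```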
Deploying Kpow with Supporting Instances
On your client VM, we will use Docker Compose to launch Kpow, Kafka Connect, and Schema Registry.
First, create a setup script to prepare the environment. This script downloads the MSK Data Generator (a useful source connector for creating sample data) and sets up the JAAS configuration files required for Schema Registry's basic authentication.
Save the following as setup.sh:
#!/usr/bin/env bash
SCRIPT_PATH="$(cd $(dirname "$0"); pwd)"
DEPS_PATH=$SCRIPT_PATH/deps
rm -rf $DEPS_PATH && mkdir $DEPS_PATH
echo "Set-up environment..."
echo "Downloading MSK data generator..."
mkdir -p $DEPS_PATH/connector/msk-datagen
curl --silent -L -o $DEPS_PATH/connector/msk-datagen/msk-data-generator.jar \
https://github.com/awslabs/amazon-msk-data-generator/releases/download/v0.4.0/msk-data-generator-0.4-jar-with-dependencies.jar
echo "Create Schema Registry configs..."
mkdir -p $DEPS_PATH/schema
cat << 'EOF' > $DEPS_PATH/schema/schema_jaas.conf
schema {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
debug="true"
file="/etc/schema/schema_realm.properties";
};
EOF
cat << 'EOF' > $DEPS_PATH/schema/schema_realm.properties
admin: CRYPT:adpexzg3FUZAk,schema-admin
EOF
echo "Environment setup completed."

Next, create a `docker-compose.yml` file. This defines our three services. Be sure to replace the placeholder values (<BOOTSTRAP_SERVER_ADDRESS>, <VAULT_USERNAME>, <VAULT_PASSWORD>) with your specific OCI Kafka details.
services:
  kpow:
    image: factorhouse/kpow-ce:latest
    container_name: kpow
    pull_policy: always
    restart: always
    ports:
      - "3000:3000"
    networks:
      - factorhouse
    depends_on:
      connect:
        condition: service_healthy
    environment:
      ENVIRONMENT_NAME: "OCI Kafka Cluster"
      BOOTSTRAP: "<BOOTSTRAP_SERVER_ADDRESS>"
      SECURITY_PROTOCOL: "SASL_SSL"
      SASL_MECHANISM: "SCRAM-SHA-512"
      SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";'
      CONNECT_NAME: "Local Connect Cluster"
      CONNECT_REST_URL: "http://connect:8083"
      SCHEMA_REGISTRY_NAME: "Local Schema Registry"
      SCHEMA_REGISTRY_URL: "http://schema:8081"
      SCHEMA_REGISTRY_AUTH: "USER_INFO"
      SCHEMA_REGISTRY_USER: "admin"
      SCHEMA_REGISTRY_PASSWORD: "admin"
    env_file:
      - license.env
  schema:
    image: confluentinc/cp-schema-registry:7.8.0
    container_name: schema_registry
    ports:
      - "8081:8081"
    networks:
      - factorhouse
    environment:
      SCHEMA_REGISTRY_HOST_NAME: "schema"
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "<BOOTSTRAP_SERVER_ADDRESS>"
      ## Authentication
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: "SASL_SSL"
      SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM: "SCRAM-SHA-512"
      SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";'
      SCHEMA_REGISTRY_AUTHENTICATION_METHOD: BASIC
      SCHEMA_REGISTRY_AUTHENTICATION_REALM: schema
      SCHEMA_REGISTRY_AUTHENTICATION_ROLES: schema-admin
      SCHEMA_REGISTRY_OPTS: -Djava.security.auth.login.config=/etc/schema/schema_jaas.conf
    volumes:
      - ./deps/schema:/etc/schema
  connect:
    image: confluentinc/cp-kafka-connect:7.8.0
    container_name: connect
    restart: unless-stopped
    ports:
      - "8083:8083"
    networks:
      - factorhouse
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "<BOOTSTRAP_SERVER_ADDRESS>"
      CONNECT_REST_PORT: "8083"
      CONNECT_GROUP_ID: "oci-demo-connect"
      CONNECT_CONFIG_STORAGE_TOPIC: "oci-demo-connect-config"
      CONNECT_OFFSET_STORAGE_TOPIC: "oci-demo-connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "oci-demo-connect-status"
      ## Authentication
      CONNECT_SECURITY_PROTOCOL: "SASL_SSL"
      CONNECT_SASL_MECHANISM: "SCRAM-SHA-512"
      CONNECT_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";'
      # Propagate auth settings to internal clients
      CONNECT_PRODUCER_SECURITY_PROTOCOL: "SASL_SSL"
      CONNECT_PRODUCER_SASL_MECHANISM: "SCRAM-SHA-512"
      CONNECT_PRODUCER_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";'
      CONNECT_CONSUMER_SECURITY_PROTOCOL: "SASL_SSL"
      CONNECT_CONSUMER_SASL_MECHANISM: "SCRAM-SHA-512"
      CONNECT_CONSUMER_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";'
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "localhost"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_PLUGIN_PATH: /usr/share/java/,/etc/kafka-connect/jars
    volumes:
      - ./deps/connector:/etc/kafka-connect/jars
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8083/ || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 20s
networks:
  factorhouse:
    name: factorhouse

Finally, create a license.env file with your Kpow license details. Then, run the setup script and launch the services:
chmod +x setup.sh
bash setup.sh && docker-compose up -d

Kpow will now be accessible at http://<vm-ip-address>:3000. You will see an overview of your OCI Kafka cluster, including your self-hosted Kafka Connect and Schema Registry instances.
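If the UI does not come up, check each service directly from the VM. This sketch defines a small polling helper against the ports published in the compose file; the endpoints assume you run it on the VM itself.

```shell
#!/usr/bin/env bash
# Poll an HTTP endpoint until it responds or the retry budget is spent.
wait_for() {
  local url="$1" tries="${2:-30}"
  for _ in $(seq "$tries"); do
    curl -fsS -o /dev/null "$url" && return 0
    sleep 2
  done
  return 1
}

# Usage on the client VM:
# wait_for http://localhost:8083/ && echo "connect: up"
# wait_for http://localhost:3000  && echo "kpow: up"
# Schema Registry, with the basic-auth credentials from the compose file:
# curl -fsS -u admin:admin http://localhost:8081/subjects
```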

Deploy Kafka Connector
Now let's deploy a connector to generate some data.
In the Connect menu of the Kpow UI, click the Create connector button.

Among the available connectors, select GeneratorSourceConnector, a source connector that generates fake order records.

Save the following configuration to a JSON file, then import it and click Create. This configuration tells the connector to generate order data, use Avro for the value, and apply several Single Message Transforms (SMTs) to shape the final message.
{
  "name": "orders-source",
  "config": {
    "connector.class": "com.amazonaws.mskdatagen.GeneratorSourceConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": false,
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": true,
    "value.converter.schema.registry.url": "http://schema:8081",
    "value.converter.basic.auth.credentials.source": "USER_INFO",
    "value.converter.basic.auth.user.info": "admin:admin",
    "genv.orders.order_id.with": "#{Internet.uuid}",
    "genv.orders.bid_time.with": "#{date.past '5','SECONDS'}",
    "genv.orders.price.with": "#{number.random_double '2','1','150'}",
    "genv.orders.item.with": "#{Commerce.productName}",
    "genv.orders.supplier.with": "#{regexify '(Alice|Bob|Carol|Alex|Joe|James|Jane|Jack)'}",
    "transforms": "extractKey,flattenKey,convertBidTime",
    "transforms.extractKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.extractKey.fields": "order_id",
    "transforms.flattenKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.flattenKey.field": "order_id",
    "transforms.convertBidTime.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.convertBidTime.field": "bid_time",
    "transforms.convertBidTime.target.type": "Timestamp",
    "transforms.convertBidTime.format": "EEE MMM dd HH:mm:ss zzz yyyy",
    "global.throttle.ms": "1000",
    "global.history.records.max": "1000"
  }
}
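The same configuration can also be submitted directly to Kafka Connect's standard REST API rather than through the UI. A sketch follows; it writes a deliberately abbreviated variant of the config (only the core generator fields, without the Avro converter and SMTs shown above) so the commands are self-contained.

```shell
#!/usr/bin/env bash
# Abbreviated connector config; see the full listing above for Avro + SMTs.
cat > orders-source.json <<'EOF'
{
  "name": "orders-source",
  "config": {
    "connector.class": "com.amazonaws.mskdatagen.GeneratorSourceConnector",
    "tasks.max": "1",
    "genv.orders.order_id.with": "#{Internet.uuid}",
    "global.throttle.ms": "1000"
  }
}
EOF
echo "orders-source.json written"

# Submit to the Connect worker and check its status:
# curl -fsS -X POST -H "Content-Type: application/json" \
#   --data @orders-source.json http://localhost:8083/connectors
# curl -fsS http://localhost:8083/connectors/orders-source/status
```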
Once deployed, you can see the running connector and its task in the Kpow UI.

In the Schema menu, you can verify that a new value schema (orders-value) has been registered for the orders topic.

Finally, navigate to Data > Inspect, select the orders topic, and click Search to see the streaming data produced by your new connector.
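You can also consume the Avro records from the command line. The cp-schema-registry image ships kafka-avro-console-consumer; the sketch below writes the consumer's SCRAM credentials file, and the commented invocation (flags worth verifying against your Confluent Platform version) assumes you copy that file into the container first with docker cp.

```shell
#!/usr/bin/env bash
# Consumer credentials: same SCRAM settings as the compose file placeholders.
cat > consumer.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<VAULT_USERNAME>" password="<VAULT_PASSWORD>";
EOF
echo "consumer.properties written"

# docker cp consumer.properties schema_registry:/tmp/consumer.properties
# docker exec -it schema_registry kafka-avro-console-consumer \
#   --bootstrap-server <BOOTSTRAP_SERVER_ADDRESS> \
#   --topic orders --from-beginning \
#   --consumer.config /tmp/consumer.properties \
#   --property schema.registry.url=http://localhost:8081 \
#   --property basic.auth.credentials.source=USER_INFO \
#   --property basic.auth.user.info=admin:admin
```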

Conclusion
You have now successfully integrated Kpow with OCI Streaming with Apache Kafka, providing a complete, self-hosted streaming stack on Oracle's powerful cloud infrastructure. By deploying Kafka Connect and Schema Registry alongside your cluster, you have a fully-featured, production-ready environment.
With Kpow, you have gained end-to-end visibility and control, from monitoring broker health and consumer lag to managing schemas, connectors, and inspecting live data streams. This empowers your team to develop, debug, and operate your Kafka-based applications with confidence.


Unified community license for Kpow and Flex
The new unified Factor House Community License works with both Kpow Community Edition and Flex Community Edition, so you only need one license to unlock both products. This makes it even simpler to explore modern data streaming tools, create proof-of-concepts, and evaluate our products.
What's changing
Previously, we issued separate community licenses for Kpow and Flex, with different tiers for individuals and organizations. Now there's a single Community License that unlocks both products.
What's new:
- One license for both products
- Three environments for everyone - whether you're an individual developer or part of a team, you get three non-production installations per product
- Simplified management - access and renew your licenses through our new self-service portal at account.factorhouse.io
Our commitment to the engineering community
Since first launching Kpow CE at Current '22, thousands of engineers have used our community licenses to learn Kafka and Flink without jumping through enterprise procurement hoops. This unified license keeps that same philosophy: high-quality tools that are free for non-production use.
The Factor House Community License is free for individuals and organizations to use in non-production environments. It's perfect for:
- Learning and experimenting with Kafka and Flink
- Building prototypes and proofs of concept
- Testing integrations before production deployment
- Exploring sample projects like Factor House Local, Top-K Game Leaderboard, and theLook eCommerce dashboard
Getting started
New users: Head to account.factorhouse.io to grab your free Community license. You'll receive instant access via magic link authentication.
Existing users: Your legacy Kpow and Flex Community licenses will continue to work and are now visible in the portal. When your license renews (after 12 months), consider switching to the unified model for easier management.
What's included
Both Kpow CE and Flex CE include most enterprise features, optimized for learning and testing. These include Kafka and Flink monitoring and management, fast multi-topic search, and Schema Registry and Kafka Connect support.
- License duration: 12 months, renewable annually
- Installations: Up to 3 per product (Kpow CE: 1 Kafka cluster + 1 Schema Registry + 1 Connect cluster per installation; Flex CE: 1 Flink cluster per installation)
- Support: Self-service via Factor House Community Slack, documentation, and release notes
- Deployment: Docker, Docker Compose or Kubernetes
Ready for production? Start a 30-day free trial of our Enterprise editions directly from the portal to unlock RBAC, Kafka Streams monitoring, custom SerDes, and dedicated support.
What about legacy licenses?
If you're currently using a Kpow Individual, Kpow Organization, or Flex Community license, nothing changes immediately. Your existing licenses will continue to work with their respective products and are now accessible in the portal. When your license expires at the end of its 12-month term, you can easily switch to the new unified license for simpler management.
Get your free Community license at account.factorhouse.io
Questions? Reach out to hello@factorhouse.io or join us in the Factor House Community Slack.

Release 95.1: A unified experience across product, web, docs and licensing
95.1 delivers a cohesive experience across Factor House products, licensing, and brand. This release introduces our new license portal, refreshed company-wide branding, a unified Community License for Kpow and Flex, and a series of performance, accessibility, and schema-related improvements.
Upgrading to 95.1
If you are using Kpow with a Google Managed Service for Apache Kafka (Google MSAK) cluster, you will now need to use either `kpow-java17-gcp-standalone.jar` or the `95.1-temurin-ubi` tag of the factorhouse/kpow Docker image.
New Factor House brand: unified look across web, product, and docs
We've refreshed the Factor House brand across our website, documentation, the new license portal, and products to reflect where we are today: a company trusted by engineers running some of the world's most demanding data pipelines. Following our seed funding earlier this year, we've been scaling the team and product offerings to match the quality and value we deliver to enterprise engineers. The new brand brings our external presence in line with what we've built. You'll see updated logos in Kpow and Flex, refreshed styling across docs and the license portal, and a completely redesigned website with clearer navigation and information architecture. Your workflows stay exactly the same, and the result is better consistency across all touchpoints, making it easier for new users to evaluate our tools and for existing users to find what they need.
New license portal: self-service access for all users
We've rolled out our new license portal at account.factorhouse.io to streamline license management for everyone. New users can instantly grab a Community or Trial license with just their email address, and existing users will see their migrated licenses when they log in. The portal lets you manage multiple licenses from one account through a clean, modern interface with magic link authentication, whether you're upgrading from Community to a Trial, renewing your annual Community License, or requesting a trial extension. For installation and configuration guidance, check our Kpow and Flex docs.
Visit account.factorhouse.io to provision a community or trial license.
Unified community license for Kpow and Flex
We've consolidated our Community licensing into a single unified license that works with both Kpow Community Edition and Flex Community Edition. Your Community license allows you to run Kpow and Flex in up to three non-production environments each, making it easier to learn, test, and build with Kafka and Flink. The new license streamlines management, providing a single key for both products and annual renewal via the license portal. It's perfect for exploring projects like Factor House Local or building your own data pipelines. Existing legacy licenses will continue to work and will also be accessible in the license portal.
Read more about the unified community licenses and inclusions.
Performance improvements
This release brings a number of performance improvements to Kpow, Flex, and Factor Platform. The time required to compute and materialize views and insights about your Kafka or Flink resources has decreased by an order of magnitude. For our top-end customers, we have observed a 70% performance increase in Kpow's materialization.
Data Inspect enhancements
Confluent Data Rules support: Data Inspect now supports Confluent Schema Registry Data Rules, including CEL, CEL_FIELD, and JSONata rule types. If you're using Data Contracts in Confluent Cloud, Data Inspect now accurately identifies rule failures and lets you filter them with kJQ.
Support for Avro Primitive Types: We've added support for Avro schemas that consist of a plain primitive type, such as string, int, or boolean.
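For illustration, a subject whose entire schema is a bare Avro primitive has no record wrapper at all; under the Confluent Schema Registry REST API, registering one is just the quoted primitive name posted to `/subjects/<subject>/versions` (subject name hypothetical):

```json
{"schema": "\"string\""}
```

Every message value for that subject is then a plain Avro string rather than a structured record.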
Schema Registry & navigation improvements
General Schema Registry improvements (from 94.6): In 94.6, we introduced improvements to Schema Registry performance and updated the observation engine. This release continues that work, with additional refinements based on real-world usage.
Karapace compatibility fix: We identified and fixed a regression in the new observation engine that affected Karapace users.
Redpanda Schema Registry note: The new observation engine is not compatible with Redpanda’s Schema Registry. Customers using Redpanda should set `OBSERVATION_VERSION=1` until full support is available.
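For Docker Compose deployments, this fallback is a single environment variable on the Kpow service; a minimal fragment (service layout illustrative, matching the image used elsewhere in this guide):

```yaml
services:
  kpow:
    image: factorhouse/kpow-ce:latest
    environment:
      # Fall back to the previous Schema Registry observation engine
      # until Redpanda support is available.
      OBSERVATION_VERSION: "1"
```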
Navigation improvements: Filters on the Schema Overview pages now persist when navigating into a subject and back.
Chart accessibility & UX improvements
This release brings a meaningful accessibility improvement to Kpow & Flex: Keyboard navigation for line charts. Users can now focus a line chart and use the left and right arrow keys to view data point tooltips. We plan to expand accessibility for charts to include bar charts and tree maps in the near future, bringing us closer to full WCAG 2.1 Level AA compliance as reported in our Voluntary Product Accessibility Template (VPAT).
We’ve also improved the UX of comparing adjacent line charts: Each series is now consistently coloured across different line charts on a page, making it easier to identify trends across a series, e.g., a particular topic’s producer write/s vs. consumer read/s.
These changes benefit everyone: developers using assistive technology, teams with accessibility requirements, and anyone who prefers keyboard navigation. Accessibility isn't an afterthought; it's a baseline expectation for enterprise-grade tooling, and we're committed to leading by example in the Kafka and Flink ecosystem.
View our VPAT documentation for 95.1.

Release 95.1: A unified experience across product, web, docs and licensing
95.1 delivers a cohesive experience across Factor House products, licensing, and brand. This release introduces our new license portal, refreshed company-wide branding, a unified Community License for Kpow and Flex, and a series of performance, accessibility, and schema-related improvements.
Upgrading to 95.1
If you are using Kpow with a Google Managed Service for Apache Kafka (Google MSAK) cluster, you will now need to use eitherkpow-java17-gcp-standalone.jaror the95.1-temurin-ubitag of the factorhouse/kpow Docker image.
New Factor House brand: unified look across web, product, and docs
We've refreshed the Factor House brand across our website, documentation, the new license portal, and products to reflect where we are today: a company trusted by engineers running some of the world's most demanding data pipelines. Following our seed funding earlier this year, we've been scaling the team and product offerings to match the quality and value we deliver to enterprise engineers. The new brand brings our external presence in line with what we've built. You'll see updated logos in Kpow and Flex, refreshed styling across docs and the license portal, and a completely redesigned website with clearer navigation and information architecture. Your workflows stay exactly the same, and the result is better consistency across all touchpoints, making it easier for new users to evaluate our tools and for existing users to find what they need.
New license portal: self-service access for all users
We've rolled out our new license portal at account.factorhouse.io to streamline license management for everyone. New users can instantly grab a Community or Trial license with just their email address, and existing users will see their migrated licenses when they log in. The portal lets you manage multiple licenses from one account through a clean, modern interface with magic link authentication, whether that's upgrading from Community to a Trial, renewing your annual Community License, or requesting a trial extension. For installation and configuration guidance, check our Kpow and Flex docs.
Visit account.factorhouse.io to provision a community or trial license.
Unified community license for Kpow and Flex
We've consolidated our Community licensing into a single unified license that works with both Kpow Community Edition and Flex Community Edition. Your Community license allows you to run Kpow and Flex in up to three non-production environments each, making it easier to learn, test, and build with Kafka and Flink. The new license streamlines management, providing a single key for both products and annual renewal via the license portal. Perfect for exploring projects like Factor House Local or building your own data pipelines. Existing legacy licenses will continue to work and will also be accessible in the license portal.
Read more about the unified community licenses and inclusions.
Performance improvements
This release brings a number of performance improvements to Kpow, Flex, and Factor Platform. The work required to compute and materialize views and insights about your Kafka or Flink resources has been decreased by an order of magnitude. For our top-end customers we have observed a 70% performance increase in Kpow’s materialization.
Data Inspect enhancements
Confluent Data Rules support: Data Inspect now supports Confluent Schema Registry Data Rules, including CEL, CEL_FIELD, and JSONata rule types. If you're using Data Contracts in Confluent Cloud, Data Inspect now accurately identifies rule failures and lets you filter them with kJQ.
Support for Avro Primitive Types: We’ve added support for Avro schemas that consist of a plain primitive type, including string, number, and boolean.
Schema Registry & navigation improvements
General Schema Registry improvements (from 94.6): In 94.6, we introduced improvements to Schema Registry performance and updated the observation engine. This release continues that work, with additional refinements based on real-world usage.
Karapace compatibility fix: We identified and fixed a regression in the new observation engine that affected Karapace users.
Redpanda Schema Registry note: The new observation engine is not compatible with Redpanda’s Schema Registry. Customers using Redpanda should set `OBSERVATION_VERSION=1` until full support is available.
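As a sketch of how that pin might look in a container-based deployment (the image tag and port mapping below are illustrative; pass the variable however your deployment supplies environment configuration to Kpow):

```shell
# Pin the previous Schema Registry observation engine for Redpanda compatibility.
# Tag and port are illustrative — adjust to your own deployment.
docker run -p 3000:3000 \
  -e OBSERVATION_VERSION=1 \
  factorhouse/kpow:latest
```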
Navigation improvements: Filters on the Schema Overview pages now persist when navigating into a subject and back.
Chart accessibility & UX improvements
This release brings a meaningful accessibility improvement to Kpow & Flex: Keyboard navigation for line charts. Users can now focus a line chart and use the left and right arrow keys to view data point tooltips. We plan to expand accessibility for charts to include bar charts and tree maps in the near future, bringing us closer to full WCAG 2.1 Level AA compliance as reported in our Voluntary Product Accessibility Template (VPAT).
We’ve also improved the UX of comparing adjacent line charts: Each series is now consistently coloured across different line charts on a page, making it easier to identify trends across a series, e.g., a particular topic’s producer write/s vs. consumer read/s.
These changes benefit everyone: developers using assistive technology, teams with accessibility requirements, and anyone who prefers keyboard navigation. Accessibility isn't an afterthought; it's a baseline expectation for enterprise-grade tooling, and we're committed to leading by example in the Kafka and Flink ecosystem.
View our VPAT documentation for 95.1.
Deploy Kpow on EKS via AWS Marketplace using Helm
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
Overview
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
To follow along with the guide, you need:
- CLI Tools: eksctl, kubectl, helm, and the AWS CLI (all used throughout this guide).
- AWS Infrastructure:
- VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
- IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
- Kpow Subscription:
- A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
- The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
- Kpow Annual product:
- Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
- Kpow Hourly product:
- For the hourly product, access to the ECR image is provided, and deployment uses the public Factor House Helm repository for installation.
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-*>, and <PUBLIC-SUBNET-ID-*> with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in the us-east-1 region. If you intend to use a different region, you must update the metadata.region field and ensure the availability zone keys under vpc.subnets (e.g., us-east-1a, us-east-1b) match the availability zones of the subnets in your chosen region.
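One way to substitute the placeholders is a quick sed pass. This is a sketch only: the IDs below are hypothetical, and for illustration the script recreates just the vpc stanza rather than editing the full manifests/eks/cluster.eksctl.yaml.

```shell
#!/bin/sh
# Sketch: fill in the eksctl placeholders before creating the cluster.
# All IDs below are hypothetical — substitute your own VPC and subnet IDs.
set -eu

VPC_ID="vpc-0abc12345"
PRIVATE_SUBNET_ID_1="subnet-0priv11111"
PRIVATE_SUBNET_ID_2="subnet-0priv22222"
PUBLIC_SUBNET_ID_1="subnet-0pub111111"
PUBLIC_SUBNET_ID_2="subnet-0pub222222"

# For illustration we recreate only the vpc stanza here; in practice run the
# sed command below against manifests/eks/cluster.eksctl.yaml.
cat > cluster.eksctl.yaml <<'EOF'
vpc:
  id: "<VPC-ID>"
  subnets:
    private:
      us-east-1a: { id: "<PRIVATE-SUBNET-ID-1>" }
      us-east-1b: { id: "<PRIVATE-SUBNET-ID-2>" }
    public:
      us-east-1a: { id: "<PUBLIC-SUBNET-ID-1>" }
      us-east-1b: { id: "<PUBLIC-SUBNET-ID-2>" }
EOF

# Replace every placeholder in one pass
sed -i \
  -e "s/<VPC-ID>/${VPC_ID}/" \
  -e "s/<PRIVATE-SUBNET-ID-1>/${PRIVATE_SUBNET_ID_1}/" \
  -e "s/<PRIVATE-SUBNET-ID-2>/${PRIVATE_SUBNET_ID_2}/" \
  -e "s/<PUBLIC-SUBNET-ID-1>/${PUBLIC_SUBNET_ID_1}/" \
  -e "s/<PUBLIC-SUBNET-ID-2>/${PUBLIC_SUBNET_ID_2}/" \
  cluster.eksctl.yaml

# No angle-bracket placeholders should remain
! grep -q '<' cluster.eksctl.yaml
```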
Here is the content of the cluster.eksctl.yaml file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fh-eks-cluster
  region: us-east-1
vpc:
  id: "<VPC-ID>"
  subnets:
    private:
      us-east-1a:
        id: "<PRIVATE-SUBNET-ID-1>"
      us-east-1b:
        id: "<PRIVATE-SUBNET-ID-2>"
    public:
      us-east-1a:
        id: "<PUBLIC-SUBNET-ID-1>"
      us-east-1b:
        id: "<PUBLIC-SUBNET-ID-2>"
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: kpow-annual
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy"
    - metadata:
        name: kpow-hourly
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage"
nodeGroups:
  - name: ng-dev
    instanceType: t3.medium
    desiredCapacity: 4
    minSize: 2
    maxSize: 6
    privateNetworking: true

This configuration sets up the following:
- Cluster Metadata: A cluster named fh-eks-cluster in the us-east-1 region.
- VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
- IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
- Service Accounts:
  - kpow-annual: Creates a service account for the Kpow Annual product. It attaches the AWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.
  - kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches the AWSMarketplaceMeteringRegisterUsage policy, which is required for reporting usage metrics to the AWS Marketplace.
- Node Group: Defines a managed node group named ng-dev with t3.medium instances. The worker nodes will be placed in the private subnets (privateNetworking: true).
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yaml

Once the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...

Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml

Now, apply the manifest to install the Strimzi operator in your EKS cluster.
kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafka

Deploy a Kafka cluster
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: fh-k8s-cluster
spec:
  kafka:
    version: 3.9.1
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    # ... (content truncated for brevity)

Deploy the Kafka cluster with the following command:
kubectl create -f manifests/kafka/kafka-cluster.yaml -n kafka

Verify the deployment
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o name

The output should look similar to this, showing the pods for Strimzi, Kafka, Zookeeper, and the associated services. The most important service for connecting applications is the Kafka bootstrap service.
# pod/fh-k8s-cluster-entity-operator-...
# pod/fh-k8s-cluster-kafka-0
# ...
# service/fh-k8s-cluster-kafka-bootstrap <-- Kafka bootstrap service
# ...

Deploy Kpow
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
- kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.
- kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
manifests/kpow/config-files.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config-files
  namespace: factorhouse
data:
  hash-rbac.yml: |
    # RBAC policies defining user roles and permissions
    admin_roles:
      - "kafka-admins"
    # ... (content truncated for brevity)
  hash-jaas.conf: |
    # JAAS login module configuration
    kpow {
      org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
      file="/etc/kpow/jaas/hash-realm.properties";
    };
    # ... (content truncated for brevity)
  hash-realm.properties: |
    # User credentials (username: password, roles)
    # admin/admin
    admin: CRYPT:adpexzg3FUZAk,server-administrators,content-administrators,kafka-admins
    # user/password
    user: password,kafka-users
    # ... (content truncated for brevity)

manifests/kpow/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config
  namespace: factorhouse
data:
  # Environment Configuration
  BOOTSTRAP: "fh-k8s-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
  REPLICATION_FACTOR: "1"
  # AuthN + AuthZ
  JAVA_TOOL_OPTIONS: "-Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf"
  AUTH_PROVIDER_TYPE: "jetty"
  RBAC_CONFIGURATION_FILE: "/etc/kpow/rbac/hash-rbac.yml"

Apply these manifests to create the ConfigMaps in the factorhouse namespace.
kubectl apply -f manifests/kpow/config-files.yaml \
  -f manifests/kpow/config.yaml -n factorhouse

You can verify their creation by running:
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...

Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
  --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com

Next, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..

Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
helm install kpow-annual ./awsmp-chart/kpow-aws-annual/ \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-annual \
  --values ./values/eks-annual.yaml

The Helm values for this deployment are in values/eks-annual.yaml. It mounts the configuration files from our ConfigMaps and sets resource limits.
# values/eks-annual.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Annual"
envFromConfigMap: "kpow-config"
volumeMounts:
  - name: kpow-config-volumes
    mountPath: /etc/kpow/rbac/hash-rbac.yml
    subPath: hash-rbac.yml
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-jaas.conf
    subPath: hash-jaas.conf
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-realm.properties
    subPath: hash-realm.properties
volumes:
  - name: kpow-config-volumes
    configMap:
      name: "kpow-config-files"
resources:
  limits:
    cpu: 1
    memory: 0.5Gi
  requests:
    cpu: 1
    memory: 0.5Gi

Note: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...

To access the UI, forward the service port to your local machine.
kubectl -n factorhouse port-forward service/kpow-annual-kpow-aws-annual 3000:3000

You can now access Kpow by navigating to http://localhost:3000 in your browser.

Deploy Kpow Hourly
Configure the Kpow Helm repository
The Helm chart for Kpow Hourly is available in the Factor House Helm repository. First, add the Helm repository.
helm repo add factorhouse https://charts.factorhouse.io

Next, update Helm repositories to ensure you install the latest version of Kpow.
helm repo update

Launch Kpow Hourly
Install Kpow using Helm, referencing the kpow-hourly service account which has the IAM policy for marketplace metering.
helm install kpow-hourly factorhouse/kpow-aws-hourly \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-hourly \
  --values ./values/eks-hourly.yaml

The Helm values are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
  # ... (volume configuration is the same as annual)
volumes:
  # ...
resources:
  # ...

Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...

To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
kubectl -n factorhouse port-forward service/kpow-hourly-kpow-aws-hourly 3001:3000

You can now access Kpow by navigating to http://localhost:3001 in your browser.

Delete resources
To avoid ongoing AWS charges, clean up all created resources in reverse order.
Delete Kpow and ConfigMaps
helm uninstall kpow-annual kpow-hourly -n factorhouse
kubectl delete -f manifests/kpow/config-files.yaml \
  -f manifests/kpow/config.yaml -n factorhouse

Delete the Kafka cluster and Strimzi operator
STRIMZI_VERSION="0.45.1"
kubectl delete -f manifests/kafka/kafka-cluster.yaml -n kafka
kubectl delete -f manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml -n kafka

Delete the EKS cluster
This command will remove the cluster and all associated resources.
eksctl delete cluster -f manifests/eks/cluster.eksctl.yaml

Conclusion
In this guide, we have successfully deployed a complete, production-ready environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned a robust EKS cluster with correctly configured IAM roles for service accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating the power of Kubernetes operators in simplifying complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.

Release 94.6: Factor Platform, Ververica Integration, and kJQ Enhancements
The first Factor Platform release candidate is here, a major milestone toward a unified control plane for real-time data streaming technologies. This release also introduces Ververica Platform integration in Flex, plus support for Kafka Clients 4.1 / Confluent 8.0.0 and new kJQ operators for richer stream inspection.
Factor Platform release candidate: Early access to unified streaming control
For organisations operating streaming at scale, the challenge has never been about any one technology. It's about managing complexity across regions, tools, and teams while maintaining governance, performance, and cost control.
We've spent years building tools that bring clarity to Apache Kafka and Apache Flink. Now, we're taking everything we've learned and building something bigger: Factor Platform, a unified control plane for real-time data infrastructure.
Factor Platform delivers complete visibility and federated control across hundreds of clusters, multiple clouds, and distributed teams from a single interface. Engineers gain deep operational insight into jobs, topics, and lineage. Business and compliance teams benefit from native catalogs, FinOps intelligence, and audit-ready transparency.
The first release candidate is live. It's designed for early adopters exploring large-scale, persistent streaming environments, and it's ready to be shaped by the teams who use it.
Interested in early access? Contact sales@factorhouse.io

Unlocking native Flink management with Ververica Platform
Our collaboration with Ververica (the original creators of Apache Flink) enters a new phase with the introduction of Flex + Ververica Platform integration. This brings Flink’s enterprise management and observability capabilities directly into the Factor House ecosystem.
Flex users can now connect to Ververica Platform (Community or Enterprise v2) and instantly visualize session clusters, job deployments, and runtime performance. The current release provides a snapshot view of Ververica resources at startup, with live synchronization planned for future updates. It's a huge step toward true end-to-end streaming visibility: from data ingestion, to transformation, to delivery.
Configuration is straightforward: point to your Ververica REST API, authenticate via secure token, and your Flink environments appear right alongside your clusters.
This release represents just the beginning of our partnership with Ververica. Together, we’re exploring deeper integrations across the Flink ecosystem, including OpenShift and Amazon Managed Service for Apache Flink, to make enterprise-scale stream processing simpler and more powerful.
Read the full Ververica Platform integration guide →
Advancing Kafka support with Kafka Clients 4.1.0 and Confluent Schema SerDes 8.0.0
We’ve upgraded to Kafka Clients 4.1.0 / Confluent Schema SerDes 8.0.0, aligning Kpow with the latest Kafka ecosystem updates. Teams using custom Protobuf Serdes should review potential compatibility changes.
Data Inspect gets more powerful with kJQ enhancements
Data Inspect in Kpow has been upgraded with improvements to kJQ, our lightweight JSON query language for streaming data. The new release introduces map() and select() functions, expanding the expressive power of kJQ for working with nested and dynamic data. These additions make it possible to iterate over collections, filter elements based on complex conditions, and compose advanced data quality or anomaly detection filters directly in the browser. Users can now extract specific values from arrays, filter deeply nested structures, and chain logic with built-in functions like contains, test, and is-empty.
For example, you can now write queries like:
.value.correctingProperty.names | map(.localeLanguageCode) | contains("pt")

Or filter and validate nested collections:
.value.names | map(select(.languageCode == "pt-Pt")) | is-empty | not

These updates make Data Inspect far more powerful for real-time debugging, validation, and exploratory data analysis. Explore the full range of examples and interactive demos in the kJQ documentation.
See map() and select() in action in the kJQ Playground →
Schema Registry performance improvements
We’ve greatly improved Schema Registry performance for large installations. The observation process now cuts down on the number of REST calls each schema observation makes by an order of magnitude. Kpow now defaults to SCHEMA_REGISTRY_OBSERVATION_VERSION=2, meaning all customers automatically benefit from these performance boosts.
Kpow Custom Serdes and Protobuf v4.31.1
This post explains an update in the version of protobuf libraries used by Kpow, and a possible compatibility impact this update may cause to user defined Custom Serdes.
Note: The potential compatibility issues described in this post only impacts users who have implemented Custom Serdes that contain generated protobuf classes.
Resolution: If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.
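As a sketch, regeneration with the matching compiler looks like the following (my_record.proto and the output directory are hypothetical placeholders for your own schema file and source layout):

```shell
# Confirm the compiler matches the 4.31.x Java runtime (protoc reports v31.1)
protoc --version
# Re-generate the Java classes that are bundled into your Custom Serde jar
# (my_record.proto and src/main/java are placeholders for your own layout)
protoc --java_out=src/main/java my_record.proto
```

Rebuild and repackage the Custom Serde jar with the regenerated classes before upgrading Kpow.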
In the upcoming v94.6 release of Kpow, we're updating all Confluent Serdes dependencies to the latest major version 8.0.1.
In io.confluent/kafka-protobuf-serializer:8.0.1 the protobuf version is advanced from 3.25.5 to 4.31.1, and so the version of protobuf used by Kpow changes.
- Confluent protobuf upgrade PR: https://github.com/confluentinc/schema-registry/pull/3569
- Related Github issue: https://github.com/confluentinc/schema-registry/issues/3047
This is a major upgrade of the underlying protobuf libraries, and there are some breaking changes related to generated code.
Protobuf 3.26.6 introduces a breaking change that fails at runtime (deliberately) if the makeExtensionsImmutable method is called as part of generated protobuf code.
The decision to break at runtime was taken because earlier versions of protobuf were found to be vulnerable to the footmitten CVE.
- Protobuf footmitten CVE and breaking change announcement: https://protobuf.dev/news/2025-01-23/
- Apache protobuf discussion thread: https://lists.apache.org/thread/87osjw051xnx5l5v50dt3t81yfjxygwr
- Comment on a Schema Registry ticket: https://github.com/confluentinc/schema-registry/issues/3360
When we advanced to the 8.0.1 version of the libraries, we encountered issues with some test classes generated by 3.x protobuf libraries.
Compilation issues:
Compiling 14 source files to /home/runner/work/core/core/target/kpow-enterprise/classes
/home/runner/work/core/core/modules/kpow/src-java-dev/factorhouse/serdes/MyRecordOuterClass.java:129: error: cannot find symbol
makeExtensionsImmutable();
^
symbol: method makeExtensionsImmutable()
location: class MyRecord

Runtime issues:
Bad type on operand stack
Exception Details:
Location:
io/confluent/kafka/schemaregistry/protobuf/ProtobufSchema.toMessage(Lcom/google/protobuf/DescriptorProtos$FileDescriptorProto;Lcom/google/protobuf/DescriptorProtos$DescriptorProto;)Lcom/squareup/wire/schema/internal/parser/MessageElement; : invokestatic
Reason:
Type 'com/google/protobuf/DescriptorProtos$MessageOptions' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage$ExtendableMessage'
Current Frame:
bci:
flags: { }
locals: { 'com/google/protobuf/DescriptorProtos$FileDescriptorProto', 'com/google/protobuf/DescriptorProtos$DescriptorProto', 'java/lang/String', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'java/util/LinkedHashMap', 'java/util/LinkedHashMap', 'java/util/List', 'com/google/common/collect/ImmutableList$Builder' }
stack: { 'com/google/common/collect/ImmutableList$Builder', 'com/google/protobuf/DescriptorProtos$MessageOptions' }
Bytecode:
0000000: 2bb6 0334 4db2 0072 1303 352c b903 3703
0000010: 00b8 0159 4eb8 0159 3a04 b801 593a 05b8
0000020: 0159 3a06 bb02 8959 b702 8b3a 07bb 0289

If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.
Events & Webinars
Stay plugged in with the Factor House team and our community.

[MELBOURNE, AUS] Apache Kafka and Apache Flink Meetup, 27 November
Melbourne, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.

[SYDNEY, AUS] Apache Kafka and Apache Flink Meetup, 26 November
Sydney, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
Join the Factor Community
We’re building more than products, we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.