
Developer Knowledge Center
Empowering engineers with everything they need to build, monitor, and scale real-time data pipelines with confidence.
Deploy Kpow on EKS via AWS Marketplace using Helm
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
Overview
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
To follow along with the guide, you need:
- CLI Tools: the aws, eksctl, kubectl, and helm command-line tools (all four are used throughout this guide).
- AWS Infrastructure:
- VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
- IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
- Kpow Subscription:
- A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
- The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
- Kpow Annual product:
- Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
- Kpow Hourly product:
- For the hourly product, access to the ECR image is provided, and deployment uses the public Factor House Helm repository.
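Before you begin, a quick sanity check that the CLI tools are installed and on your PATH (reported versions will differ):

aws --version
eksctl version
kubectl version --client
helm version --short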
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-*>, and <PUBLIC-SUBNET-ID-*> with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in the us-east-1 region. If you intend to use a different region, you must update the metadata.region field and ensure the availability zone keys under vpc.subnets (e.g., us-east-1a, us-east-1b) match the availability zones of the subnets in your chosen region.
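If you are unsure which subnet IDs to use, you can list the subnets in your VPC with the AWS CLI; MapPublicIpOnLaunch is a rough indicator of public versus private (substitute your VPC ID):

aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<VPC-ID>" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,PublicIPOnLaunch:MapPublicIpOnLaunch}" \
  --output table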
Here is the content of the cluster.eksctl.yaml file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fh-eks-cluster
  region: us-east-1
vpc:
  id: "<VPC-ID>"
  subnets:
    private:
      us-east-1a:
        id: "<PRIVATE-SUBNET-ID-1>"
      us-east-1b:
        id: "<PRIVATE-SUBNET-ID-2>"
    public:
      us-east-1a:
        id: "<PUBLIC-SUBNET-ID-1>"
      us-east-1b:
        id: "<PUBLIC-SUBNET-ID-2>"
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: kpow-annual
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy"
    - metadata:
        name: kpow-hourly
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage"
nodeGroups:
  - name: ng-dev
    instanceType: t3.medium
    desiredCapacity: 4
    minSize: 2
    maxSize: 6
    privateNetworking: true

This configuration sets up the following:
- Cluster Metadata: A cluster named fh-eks-cluster in the us-east-1 region.
- VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
- IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
- Service Accounts:
  - kpow-annual: Creates a service account for the Kpow Annual product. It attaches the AWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.
  - kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches the AWSMarketplaceMeteringRegisterUsage policy, which is required for reporting usage metrics to the AWS Marketplace.
- Node Group: Defines a managed node group named ng-dev with t3.medium instances. The worker nodes will be placed in the private subnets (privateNetworking: true).
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yaml

Once the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
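If you ever need to regenerate that kubeconfig entry manually, for example on another machine, the AWS CLI provides an equivalent:

aws eks update-kubeconfig --name fh-eks-cluster --region us-east-1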
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...

Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml

Now, apply the manifest to install the Strimzi operator in your EKS cluster.
kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafka

Deploy a Kafka cluster
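Before creating the Kafka resource, it is worth confirming the operator rolled out; a quick check, assuming the default Strimzi deployment name:

kubectl -n kafka get deployment strimzi-cluster-operator
kubectl -n kafka wait deployment/strimzi-cluster-operator --for=condition=Available --timeout=120s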
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: fh-k8s-cluster
spec:
  kafka:
    version: 3.9.1
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    # ... (content truncated for brevity)

Deploy the Kafka cluster with the following command:
kubectl create -f manifests/kafka/kafka-cluster.yaml -n kafka

Verify the deployment
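As a convenience, you can block until Strimzi marks the cluster Ready (the resource name matches the Kafka CR above):

kubectl -n kafka wait kafka/fh-k8s-cluster --for=condition=Ready --timeout=300s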
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o name

The output should look similar to this, showing the pods for Strimzi, Kafka, Zookeeper, and the associated services. The most important service for connecting applications is the Kafka bootstrap service.
# pod/fh-k8s-cluster-entity-operator-...
# pod/fh-k8s-cluster-kafka-0
# ...
# service/fh-k8s-cluster-kafka-bootstrap <-- Kafka bootstrap service
# ...

Deploy Kpow
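Before deploying Kpow, you can optionally smoke-test the Kafka cluster end to end by running Kafka's console producer and consumer as throwaway pods. This is a sketch based on the Strimzi images: the image tag mirrors the Strimzi and Kafka versions used above, and my-topic is an arbitrary example topic.

kubectl -n kafka run kafka-producer -ti --rm=true --restart=Never \
  --image=quay.io/strimzi/kafka:0.45.1-kafka-3.9.1 \
  -- bin/kafka-console-producer.sh --bootstrap-server fh-k8s-cluster-kafka-bootstrap:9092 --topic my-topic

Then, in a second terminal, consume what you typed:

kubectl -n kafka run kafka-consumer -ti --rm=true --restart=Never \
  --image=quay.io/strimzi/kafka:0.45.1-kafka-3.9.1 \
  -- bin/kafka-console-consumer.sh --bootstrap-server fh-k8s-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning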
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
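You can confirm the IRSA wiring before installing anything: the service accounts should exist and carry the standard EKS role annotation (the role itself is generated by eksctl):

kubectl get serviceaccount -n factorhouse
# Each service account should be annotated with an IAM role ARN
kubectl describe serviceaccount kpow-annual -n factorhouse | grep eks.amazonaws.com/role-arn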
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
- kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.
- kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
manifests/kpow/config-files.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config-files
  namespace: factorhouse
data:
  hash-rbac.yml: |
    # RBAC policies defining user roles and permissions
    admin_roles:
      - "kafka-admins"
    # ... (content truncated for brevity)
  hash-jaas.conf: |
    # JAAS login module configuration
    kpow {
      org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
      file="/etc/kpow/jaas/hash-realm.properties";
    };
    # ... (content truncated for brevity)
  hash-realm.properties: |
    # User credentials (username: password, roles)
    # admin/admin
    admin: CRYPT:adpexzg3FUZAk,server-administrators,content-administrators,kafka-admins
    # user/password
    user: password,kafka-users
    # ... (content truncated for brevity)

manifests/kpow/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config
  namespace: factorhouse
data:
  # Environment Configuration
  BOOTSTRAP: "fh-k8s-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
  REPLICATION_FACTOR: "1"
  # AuthN + AuthZ
  JAVA_TOOL_OPTIONS: "-Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf"
  AUTH_PROVIDER_TYPE: "jetty"
  RBAC_CONFIGURATION_FILE: "/etc/kpow/rbac/hash-rbac.yml"

Apply these manifests to create the ConfigMaps in the factorhouse namespace.
kubectl apply -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouse

You can verify their creation by running:
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...

Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com

Next, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..

Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
helm install kpow-annual ./awsmp-chart/kpow-aws-annual/ \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-annual \
--values ./values/eks-annual.yaml

The Helm values for this deployment are in values/eks-annual.yaml. It mounts the configuration files from our ConfigMaps and sets resource limits.
# values/eks-annual.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Annual"
envFromConfigMap: "kpow-config"
volumeMounts:
  - name: kpow-config-volumes
    mountPath: /etc/kpow/rbac/hash-rbac.yml
    subPath: hash-rbac.yml
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-jaas.conf
    subPath: hash-jaas.conf
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-realm.properties
    subPath: hash-realm.properties
volumes:
  - name: kpow-config-volumes
    configMap:
      name: "kpow-config-files"
resources:
  limits:
    cpu: 1
    memory: 0.5Gi
  requests:
    cpu: 1
    memory: 0.5Gi

Note: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
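For contrast, and purely as an illustrative sketch rather than an official recommendation (actual sizing depends on your workload, so defer to the Kpow documentation), a more generous resources block might look like:

resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 2
    memory: 4Gi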
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...

To access the UI, forward the service port to your local machine.
kubectl -n factorhouse port-forward service/kpow-annual-kpow-aws-annual 3000:3000

You can now access Kpow by navigating to http://localhost:3000 in your browser and logging in with one of the users defined in hash-realm.properties (for example, admin/admin).

Deploy Kpow Hourly
Configure the Kpow Helm repository
The Helm chart for Kpow Hourly is available in the Factor House Helm repository. First, add the Helm repository.
helm repo add factorhouse https://charts.factorhouse.io

Next, update Helm repositories to ensure you install the latest version of Kpow.
helm repo update

Launch Kpow Hourly
Install Kpow using Helm, referencing the kpow-hourly service account which has the IAM policy for marketplace metering.
helm install kpow-hourly factorhouse/kpow-aws-hourly \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-hourly \
--values ./values/eks-hourly.yaml

The Helm values are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
  # ... (volume configuration is the same as annual)
volumes:
  # ...
resources:
  # ...

Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...

To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
kubectl -n factorhouse port-forward service/kpow-hourly-kpow-aws-hourly 3001:3000

You can now access Kpow by navigating to http://localhost:3001 in your browser.

Delete resources
To avoid ongoing AWS charges, clean up all created resources in reverse order.
Delete Kpow and ConfigMaps
helm uninstall kpow-annual kpow-hourly -n factorhouse
kubectl delete -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouse

Delete the Kafka cluster and Strimzi operator
STRIMZI_VERSION="0.45.1"
kubectl delete -f manifests/kafka/kafka-cluster.yaml -n kafka
kubectl delete -f manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml -n kafka

Delete the EKS cluster
This command will remove the cluster and all associated resources.
eksctl delete cluster -f manifests/eks/cluster.eksctl.yaml

Conclusion
In this guide, we have successfully deployed a complete, production-ready environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned a robust EKS cluster with correctly configured IAM roles for service accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating the power of Kubernetes operators in simplifying complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.

Release 94.6: Factor Platform, Ververica Integration, and kJQ Enhancements
The first Factor Platform release candidate is here, a major milestone toward a unified control plane for real-time data streaming technologies. This release also introduces Ververica Platform integration in Flex, plus support for Kafka Clients 4.1 / Confluent 8.0.0 and new kJQ operators for richer stream inspection.
Factor Platform release candidate: Early access to unified streaming control
For organisations operating streaming at scale, the challenge has never been about any one technology. It's about managing complexity across regions, tools, and teams while maintaining governance, performance, and cost control.
We've spent years building tools that bring clarity to Apache Kafka and Apache Flink. Now, we're taking everything we've learned and building something bigger: Factor Platform, a unified control plane for real-time data infrastructure.
Factor Platform delivers complete visibility and federated control across hundreds of clusters, multiple clouds, and distributed teams from a single interface. Engineers gain deep operational insight into jobs, topics, and lineage. Business and compliance teams benefit from native catalogs, FinOps intelligence, and audit-ready transparency.
The first release candidate is live. It's designed for early adopters exploring large-scale, persistent streaming environments, and it's ready to be shaped by the teams who use it.
Interested in early access? Contact sales@factorhouse.io

Unlocking native Flink management with Ververica Platform
Our collaboration with Ververica (the original creators of Apache Flink) enters a new phase with the introduction of the Flex + Ververica Platform integration. This brings Flink’s enterprise management and observability capabilities directly into the Factor House ecosystem.
Flex users can now connect to Ververica Platform (Community or Enterprise v2) and instantly visualize session clusters, job deployments, and runtime performance. The current release provides a snapshot view of Ververica resources at startup, with live synchronization planned for future updates. It's a huge step toward true end-to-end streaming visibility—from data ingestion, to transformation, to delivery.
Configuration is straightforward: point to your Ververica REST API, authenticate via secure token, and your Flink environments appear right alongside your clusters.
This release represents just the beginning of our partnership with Ververica. Together, we’re exploring deeper integrations across the Flink ecosystem, including OpenShift and Amazon Managed Service for Apache Flink, to make enterprise-scale stream processing simpler and more powerful.
Read the full Ververica Platform integration guide →
Advancing Kafka support with Kafka Clients 4.1.0 and Confluent Schema SerDes 8.0.0
We’ve upgraded to Kafka Clients 4.1.0 / Confluent Schema SerDes 8.0.0, aligning Kpow with the latest Kafka ecosystem updates. Teams using custom Protobuf Serdes should review potential compatibility changes.
Data Inspect gets more powerful with kJQ enhancements
Data Inspect in Kpow has been upgraded with improvements to kJQ, our lightweight JSON query language for streaming data. The new release introduces map() and select() functions, expanding the expressive power of kJQ for working with nested and dynamic data. These additions make it possible to iterate over collections, filter elements based on complex conditions, and compose advanced data quality or anomaly detection filters directly in the browser. Users can now extract specific values from arrays, filter deeply nested structures, and chain logic with built-in functions like contains, test, and is-empty.
For example, you can now write queries like:
.value.correctingProperty.names | map(.localeLanguageCode) | contains("pt")

Or filter and validate nested collections:
.value.names | map(select(.languageCode == "pt-Pt")) | is-empty | not

These updates make Data Inspect far more powerful for real-time debugging, validation, and exploratory data analysis. Explore the full range of examples and interactive demos in the kJQ documentation.
See map() and select() in action in the kJQ Playground →
Schema Registry performance improvements
We’ve greatly improved Schema Registry performance for large installations. The observation process now cuts down on the number of REST calls each schema observation makes by an order of magnitude. Kpow now defaults to SCHEMA_REGISTRY_OBSERVATION_VERSION=2, meaning all customers automatically benefit from these performance boosts.
Kpow Custom Serdes and Protobuf v4.31.1
This post explains an update in the version of protobuf libraries used by Kpow, and a possible compatibility impact this update may cause to user defined Custom Serdes.
Note: The potential compatibility issues described in this post only impact users who have implemented Custom Serdes that contain generated protobuf classes.
Resolution: If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.
In the upcoming v94.6 release of Kpow, we're updating all Confluent Serdes dependencies to the latest major version 8.0.1.
In io.confluent/kafka-protobuf-serializer:8.0.1 the protobuf version is advanced from 3.25.5 to 4.31.1, and so the version of protobuf used by Kpow changes.
- Confluent protobuf upgrade PR: https://github.com/confluentinc/schema-registry/pull/3569
- Related Github issue: https://github.com/confluentinc/schema-registry/issues/3047
This is a major upgrade of the underlying protobuf libraries, and there are some breaking changes related to generated code.
Protobuf 3.26.6 introduces a breaking change that fails at runtime (deliberately) if the makeExtensionsImmutable method is called as part of generated protobuf code.
The decision to break at runtime was taken because earlier versions of protobuf were found to be vulnerable to the footmitten CVE.
- Protobuf footmitten CVE and breaking change announcement: https://protobuf.dev/news/2025-01-23/
- Apache protobuf discussion thread: https://lists.apache.org/thread/87osjw051xnx5l5v50dt3t81yfjxygwr
- Comment on a Schema Registry ticket: https://github.com/confluentinc/schema-registry/issues/3360
We found that when we advanced to the 8.0.1 version of the libraries, we encountered issues with some test classes generated by 3.x protobuf libraries.
Compilation issues:

Compiling 14 source files to /home/runner/work/core/core/target/kpow-enterprise/classes
/home/runner/work/core/core/modules/kpow/src-java-dev/factorhouse/serdes/MyRecordOuterClass.java:129: error: cannot find symbol
    makeExtensionsImmutable();
    ^
  symbol:   method makeExtensionsImmutable()
  location: class MyRecord

Runtime issues:
Bad type on operand stack
Exception Details:
Location:
io/confluent/kafka/schemaregistry/protobuf/ProtobufSchema.toMessage(Lcom/google/protobuf/DescriptorProtos$FileDescriptorProto;Lcom/google/protobuf/DescriptorProtos$DescriptorProto;)Lcom/squareup/wire/schema/internal/parser/MessageElement; : invokestatic
Reason:
Type 'com/google/protobuf/DescriptorProtos$MessageOptions' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage$ExtendableMessage'
Current Frame:
bci:
flags: { }
locals: { 'com/google/protobuf/DescriptorProtos$FileDescriptorProto', 'com/google/protobuf/DescriptorProtos$DescriptorProto', 'java/lang/String', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'java/util/LinkedHashMap', 'java/util/LinkedHashMap', 'java/util/List', 'com/google/common/collect/ImmutableList$Builder' }
stack: { 'com/google/common/collect/ImmutableList$Builder', 'com/google/protobuf/DescriptorProtos$MessageOptions' }
Bytecode:
0000000: 2bb6 0334 4db2 0072 1303 352c b903 3703
0000010: 00b8 0159 4eb8 0159 3a04 b801 593a 05b8
0000020: 0159 3a06 bb02 8959 b702 8b3a 07bb 0289

If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.
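As a sketch of that resolution (the .proto path and output directory here are illustrative, not taken from the Kpow codebase), regeneration looks like:

# Confirm the compiler version; it should report libprotoc 31.1
protoc --version
# Re-generate the Java classes from your .proto definitions
protoc --java_out=src/main/java path/to/my_record.proto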

Release 94.3: BYO AI, Topic data inference, and Data inspect improvements
This minor release from Factor House introduces BYO AI model support, topic data inference, and major enhancements to data inspect—such as AI-powered filtering, new UI elements, and AVRO date formatting. It also adds integration with GCP Managed Kafka Schema Registry, improved webhook support, updated policy handling, and several UI and error-handling improvements.
BYO AI
Kpow now offers optional integrations with popular AI models, from Ollama to enterprise solutions like Azure OpenAI. These features are entirely opt-in: unless you configure an AI provider, Kpow will not expose any AI functionality.
As of release 94.3, Kpow supports integrations with:
- Azure OpenAI
- AWS Bedrock
- OpenAI
- Anthropic
- Ollama
You can configure one or more AI models, and set a default model in your user preferences for use with Kpow’s AI-powered features.
These AI models power all AI-powered features in Kpow. Read the documentation for more details.

kJQ filter generation
Transform natural language queries into powerful kJQ filters with AI-assisted query generation. This feature empowers users of all technical backgrounds to extract insights from Kafka topics without requiring deep JQ programming knowledge.
How it works
Simply describe what you're looking for in plain English, and the AI model generates a syntactically correct kJQ filter tailored to your data. The system leverages:
- Natural language processing: Convert conversational prompts like "show me all orders over $100 from the last hour" into precise kJQ expressions.
- Schema-aware generation: When topic schemas are available, the AI optionally incorporates field names, data types, and structure to create more accurate filters.
- Validation integration: Generated filters are automatically validated against Kpow's kJQ engine to ensure syntactic correctness before execution.
Usage
Navigate to any topic's Data Inspect view and select the AI Filter option. Enter your query in natural language, and Kpow will generate the corresponding kJQ filter. You can then execute, modify, or save the generated filter for future use.
The AI filter generator works best when provided with specific, actionable descriptions of the data you want to find. Include field names, value ranges, example data and logical operators in your natural language query for optimal results.

Auto deserializers
Within Kpow's Data Inspect UI, you can specify Auto as the key or value deserializer and Kpow will attempt to infer the data format and deserialize the records it consumes.
Auto SerDes provides immediate data inspection capabilities without requiring prior knowledge of topic serialization formats. This feature is particularly valuable when:
- Exploring unfamiliar topics for the first time
- Working with multiple topics that may contain mixed data formats
- Debugging serialization issues across different environments
- Onboarding new team members who need quick topic insights
The Auto SerDes option appears alongside manually configurable deserializers like JSON, Avro, String, and custom SerDes in the Data Inspect interface.
When selected, Kpow analyzes each topic and applies the most appropriate deserializer automatically.
Topic inference observation
To persist and query inferred topic information—such as key deserializer, value deserializer, and schema registry ID—in Kpow’s UI, enable the Topic SerDes Observation job by setting:
INFER_TOPIC_SERDES=true

kJQ language improvements
In response to our customers' evolving filtering needs, we've significantly improved the kJQ language to make Kafka record filtering more powerful and flexible. Check out the updated kJQ filters documentation for full details.
Below are some highlights of the improvements:
Chained alternatives
Selects the first non-null email address and checks if it ends with ".com":
.value.primary_email // .value.secondary_email // .value.contact_email | endswith(".com")

String/Array slices
Matches where the first 3 characters of transaction_id equal TXN:
.value.transaction_id[0:3] == "TXN"

For example, { "transaction_id": "TXN12345" } matches, while { "transaction_id": "ORD12345" } does not.
UUID type support
kJQ supports UUID types out of the box, whether they come from the UUID deserializer, AVRO logical types, or richer-typed deserializers such as Transit, JSON, and EDN.
To compare against literal UUID strings, prefix them with #uuid to coerce into a UUID:
.key == #uuid "fc1ba6a8-6d77-46a0-b9cf-277b6d355fa6"

Data inspect improvements
UI overhaul
We've enhanced the data inspect UI with smaller toolbar buttons and reorganized dropdowns to accommodate the new features detailed below.

Event log
All queries display the 'Event log' when anomalies occur during search, such as data policy applications, deserialization errors, or record processing exceptions. Each event includes a timestamp and a severity level (INFO, WARN, or ERROR).

Send to kREPL
Customers can now send data inspect queries directly to the kREPL, our programmatic Kafka interface designed for power users. The kREPL enables notebook-style data transformations—such as aggregations and groupings—on consumed data.
You can access this feature through the 'Send' dropdown in the result metadata toolbar. Learn more about the kREPL by visiting our documentation.

GCP managed Kafka schema registry integration
Kpow now supports the Google Schema Registry as a variant of the Confluent-compatible schema registry. Here is an example configuration.
ENVIRONMENT_NAME=GCP Kafka Cluster
BOOTSTRAP=bootstrap.<cluster-id>.<gcp-region>.managedkafka.<gcp-project-id>.cloud.goog:9092
SECURITY_PROTOCOL=SASL_SSL
SASL_MECHANISM=OAUTHBEARER
SASL_LOGIN_CALLBACK_HANDLER_CLASS=com.google.cloud.hosted.kafka.auth.GcpLoginCallbackHandler
SASL_JAAS_CONFIG=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
SCHEMA_REGISTRY_NAME=GCP Schema Registry
SCHEMA_REGISTRY_URL=https://managedkafka.googleapis.com/v1/projects/<gcp-project-id>/locations/<gcp-region>/schemaRegistries/<registry-id>
SCHEMA_REGISTRY_BEARER_AUTH_CUSTOM_PROVIDER_CLASS=com.google.cloud.hosted.kafka.auth.GcpBearerAuthCredentialProvider
SCHEMA_REGISTRY_BEARER_AUTH_CREDENTIALS_SOURCE=CUSTOM

Additional webhooks
We now support both Microsoft Teams and generic HTTP webhook calls. When configured, Kpow can send data governance (audit log) records to Slack, Microsoft Teams, or any HTTP endpoint; all you need to do is configure a webhook.
Temporary policy improvements
Administrators can now set a custom expiration date and time when creating temporary policies. Selecting Custom in the duration dropdown activates a mandatory datetime picker that only allows future dates.
Configuration options include the TEMPORARY_POLICY_MAX_MS environment variable to control the maximum policy duration (default: 1 hour). Setting this to -1 removes the duration limit.
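For example, to allow administrators to create temporary policies lasting up to 24 hours, set the variable to the equivalent milliseconds:

# 24 hours = 24 * 60 * 60 * 1000 milliseconds
TEMPORARY_POLICY_MAX_MS=86400000
# Or set -1 to remove the duration limit entirely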
Discover more about these updates in our documentation.
Helm charts
Artifact Hub updates
- Added values schema support for improved chart validation
- Charts are now signed for enhanced security and trust
- Publisher identity is verified to ensure authenticity
- Charts are marked as official for trusted, curated content
Container security improvements
The container security context has been tightened by disabling privilege escalation (AllowPrivilegeEscalation=false), running the container as a non-root user (UID 1001), disallowing privileged mode, and dropping all Linux capabilities.

Ensuring Your Data Streaming Stack Is Ready for the EU Data Act
The EU Data Act takes effect in September 2025, introducing major implications for teams running Kafka. This article explores what the Act means for data streaming engineers, and how Kpow can help ensure compliance — from user data access to audit logging and secure interoperability.
Introduction: What’s Changing?
The EU Data Act, effective as of September 12, 2025, introduces new requirements for data access, sharing, and interoperability across the European Union's single market. For engineers managing data streaming platforms like Apache Kafka, this means re-evaluating how your stack handles user data, security, provider switching, and compliance with both the Data Act and GDPR.
What Does the EU Data Act Mean for Data Streaming?
At its core, the Data Act requires that users of connected products and services have access to their raw usage data and metadata. This means that streaming platforms processing such data need to enable seamless and secure access and sharing capabilities. Additionally, the Act outlines rules for business-to-business and business-to-government data sharing, while safeguarding trade secrets and ensuring the security of data.
Key highlights of the Act
Why These Changes Matter for Kafka Engineers
Kafka is at the heart of many modern data streaming architectures, making it critical to align Kafka operations with these new legal requirements. Engineers must ensure:
- Data streams can be accessed and shared securely and transparently.
- Systems support interoperability to facilitate provider switching without data loss or downtime.
- Access controls and auditing are in place to protect sensitive data and demonstrate compliance.
- Data subject rights under GDPR, such as access, correction, and deletion, can be fulfilled efficiently.
How Kpow Supports Compliance and Operational Excellence
At Factor House, we designed Kpow with these challenges in mind. As an enterprise-native company, our flagship solution Kpow is a comprehensive Kafka management and monitoring platform that empowers engineers to meet the demands of the new EU data landscape. Right out of the box, Kpow enables engineers to meet stringent EU Data Act requirements with ease.
| EU Data Act | Kpow's fulfilment out-of-the-box |
|---|---|
| Data Access and Portability: Data processing services must provide users and authorized third parties access to product and service data in accessible, interoperable formats and to support seamless switching between providers. | Kpow connects directly to Kafka clusters using standard client configurations, enabling engineers to expose and manage streaming data effectively, supporting data access requests and portability without vendor lock-in or switching charges. |
| Transparency and Jurisdictional Information: Mandated transparency about the jurisdiction of ICT infrastructure and the technical, organizational, and contractual safeguards against unlawful international governmental access to non-personal data. | Kpow stores all monitoring data locally within Kafka clusters, minimizing data exposure and supporting data sovereignty. |
| Security and Access Controls: Protect trade secrets and personal data, and comply with GDPR when personal data is involved. | Kpow integrates with enterprise-grade authentication providers (OAuth2, OpenID, SAML, LDAP) and implements configurable Role-Based Access Control (RBAC), ensuring that only authorized users can access sensitive data streams. Kpow allows configurable redaction of data inspection results through its data policies, providing enhanced protection against the exposure of sensitive information. |
| Auditability and Monitoring: Data sharing and security require an auditable trail of who accessed what data and when. | Kpow provides rich telemetry, consumer group insights, and audit logging capabilities, enabling organizations to monitor data access and usage. |
| Service Switching and Interoperability: Ensure customers can migrate data streaming workloads smoothly without disruption or additional costs. | Kpow enables multi-cluster management through standard Kafka client configurations, allowing seamless connection, monitoring, and migration across multiple Kafka clusters and environments without vendor lock-in or proprietary dependencies. |
| Internal Procedures and Legal Compliance Support: Protect trade secrets and other sensitive data while enabling lawful data sharing without unnecessary obstruction. | By providing detailed visibility and control over Kafka data streams, Kpow helps organizations implement internal procedures to respond promptly to data access requests, identify trade secrets, and apply necessary protective measures. |
Practical Steps for Engineers
- Review your current Kafka stack: Ensure configurations support data access, portability, and interoperability.
- Implement robust authentication and RBAC: Protect sensitive streams and support GDPR compliance.
- Enable detailed audit logging: Prepare for regulatory audits and internal monitoring.
- Test provider switching: Validate that you can migrate workloads without disruption or extra costs.
- Stay updated: Monitor regulatory updates and best practices for ongoing compliance.
Access "Turnkey" Compliance with Kpow
Kpow’s secure, transparent, and flexible Kafka management capabilities align with the EU Data Act’s requirements, enabling controlled data access, robust security, local data storage, auditability, and interoperability. This makes it an effective tool for data streaming engineers and organizations aiming to comply with the EU’s new data sharing and protection rules starting September 2025.

Future-Proof Your Kafka Streaming
The EU Data Act is reshaping how data streaming services operate in Europe. Ensuring your Kafka infrastructure is compliant and resilient is no longer optional—it’s essential.
To help you navigate this transition, Factor House offers a free 30-day fully-featured trial license of Kpow.
Experience firsthand how Kpow’s secure, transparent, and flexible Kafka management capabilities can simplify compliance and enhance your streaming operations. Start your free trial of Kpow today.

Beyond Kafka: Sharp Signals from Current London 2025
The real-time ecosystem has outgrown Kafka alone. At Current London 2025, the transition from Kafka Summit was more than a name change — it marked a shift toward streaming-first AI, system-level control, and production-ready Flink. Here's what Factor House saw and learned on the ground.
The transition from Kafka Summit to Current is complete, with this year's London conference rebranded to match Confluent's US and India events, a strong indicator that the real-time ecosystem now extends far beyond Apache Kafka. With more than 2,200 attendees, in-depth technical presentations, and countless exhibitor-hall discussions, it is evident that real-time streaming is here to stay and that the ecosystem is evolving quickly, branching out into AI-native systems, multi-technology stacks, and production-grade stream processing.
What We Saw
For the third consecutive year, Factor House was on the ground in London as a Silver Sponsor, running product demos, meeting clients, and, more importantly, learning about the needs of engineers managing complex deployments, platform teams integrating Flink and Kafka, and architects exploring AI built on live data.
While the Kafka Summit name has been replaced, Kafka remains a foundational technology. However, attention is shifting to system-level control, end-to-end observability, and tools that reduce operational friction without sacrificing power. We’re focused on that space with Kpow for Kafka, Flex for Flink, and, soon, the Factor Platform.
Key Signals
- AI Is Going Event-Driven - But Engineers Remain Cautious
- Streaming-first AI was a recurring theme at Current. Sessions like "Flink Jobs as Agents" (Airy Inc.) explored how AI agents can interact with a live state, reacting in real time rather than relying on stale snapshots.
- But several engineers we spoke to flagged concerns.
- While Kafka and Flink provide the backbone (durable, deterministic, and observable), the idea of introducing autonomous agents into critical pipelines raised eyebrows. There's excitement, yes, but also scepticism around operational safety, debuggability, and unintended consequences. As one engineer put it:
- “If an LLM is making decisions in my pipeline, I want to know what it saw, why it acted, and how to stop it fast.”
- Visibility and control are not optional; they’re the line between innovation and outage. AI might be event-driven, but it’s still infrastructure. And infrastructure needs guardrails.
- Production Resilience > Architectural Purity
- Sessions from OpenAI, AWS, and Daimler all emphasized pragmatism. OpenAI’s Changing Engines Mid-Flight offered real lessons on handling Kafka migrations under load. Elegant designs are great, but shipping reliable systems matters most.
- Flink Is Now a First-Class Citizen
- Flink moved from curiosity to cornerstone. Teams from ShareChat, Wix, and Pinterest shared how they reduced latency and costs while simplifying their pipelines. However, Flink remains operationally raw; hence, Flex, our UI and API, is designed to make Flink observable and manageable in real production environments.
Noteworthy Tools
We saw an uptick in focused tools solving specific friction points. Some standouts for us:
- ShadowTraffic – Safe, controlled Kafka traffic simulation.
- RisingWave – Real-time SQL queries over Kafka streams.
- Gravitee – Fine-grained Kafka API access control.
- Imply – Sub-second dashboards on live data.
- Snowplow – Clean, structured enrichment pipelines for streaming events.
Where Factor House Fits
As complexity grows and streaming intersects with AI, teams need visibility, safety, and efficiency, not more abstraction. Our upcoming Factor Platform unifies Kpow and Flex into a single control plane for data in motion, enabling teams to manage Kafka and Flink with confidence across clusters, clouds, and regions, and providing a layer of clarity across an organization's complete streaming ecosystem.
If you’d like to learn more about Factor House products, book a demo today.
.webp)
Integrate Kpow with Bufstream
Learn how to integrate Bufstream, a Kafka-compatible streaming platform, with Kpow to enable streamlined, schema-aware development.
Overview
Kpow supports a wide range of Apache Kafka and Kafka API–compatible platforms, providing robust tools to manage, monitor, and secure data streaming workloads. In this guide, we'll walk through how to integrate Bufstream, a cloud-native, Kafka-compatible streaming solution, with Kpow, enabling seamless use of Bufstream's advanced schema management alongside Kpow's comprehensive Kafka management capabilities for an optimized streaming experience.
💡 Visit our earlier posts to learn how to set up Kpow with leading platforms such as Confluent Cloud, Amazon MSK, and Google Cloud Managed Service for Apache Kafka, as well as emerging solutions like Redpanda.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
This tutorial uses the Community Edition of Kpow, where the default user has all the necessary permissions for the tasks we will perform.
For users of Kpow Enterprise Edition with user authorization enabled, performing these actions would require the logged-in user to have the SCHEMA_CREATE permission for Role-Based Access Control (RBAC) or have ALLOW_SCHEMA_CREATE=true set for Simple Access Control. You can learn more in the Kpow User Authorization documentation.
Launch Bufstream and Kpow
We begin by creating a dedicated Docker network named factorhouse, which establishes a private communication channel for the Bufstream and Kpow containers. We then launch a single Bufstream broker and Kpow instance in dedicated containers.
Here's an overview of the containers:
- Bufstream (bufstream)
  - Image: bufbuild/bufstream:latest
  - Host Ports:
    - 9092: Exposes the Kafka API to the host machine.
  - Configuration:
    - Mode: Runs in inmemory mode, creating a temporary, single-node broker where all data is lost on shutdown.
    - Kafka API: Listens for connections internally on port 9092 and is also mapped to 9092 on the host. It advertises itself as bufstream:9092 within the Docker network.
  - Network: Attached to the factorhouse network, making it reachable by other containers at the hostname bufstream.
- Kpow (kpow)
  - Image: factorhouse/kpow-ce:latest
  - Host Ports:
    - 3000: Exposes the Kpow web UI to the host machine, accessible at http://localhost:3000.
  - Configuration:
    - ENVIRONMENT_NAME: UI-friendly label for this cluster and its resources.
    - BOOTSTRAP: Configured to connect to the Bufstream broker at bufstream:9092, using the internal Docker network for communication.
    - SCHEMA_REGISTRY_NAME: UI-friendly label for a schema registry.
    - SCHEMA_REGISTRY_URL: Configured to connect to the Confluent-compatible API endpoint of Buf's public demo registry (https://demo.buf.dev/integrations/confluent/bufstream-demo). For details on connecting to a custom Confluent Schema Registry instance, see the official documentation.
    - Licensing: The configuration is loaded from an environment file specified by the $KPOW_LICENSE_FILE shell variable, which is required to run the container.
  - Network: Attached to the factorhouse network, which allows it to resolve and connect to bufstream:9092.
## Create a docker network to be shared
docker network create factorhouse
## Start a Bufstream broker
docker run -d -p 9092:9092 --name bufstream --network factorhouse \
-e BUFSTREAM_KAFKA_HOST="0.0.0.0" \
-e BUFSTREAM_KAFKA_PUBLIC_HOST="bufstream" \
-e BUFSTREAM_KAFKA_PUBLIC_PORT="9092" \
bufbuild/bufstream:latest serve --inmemory
## Start a Kpow instance
docker run -d -p 3000:3000 --name kpow --network factorhouse \
-e ENVIRONMENT_NAME="Bufstream" \
-e BOOTSTRAP="bufstream:9092" \
-e SCHEMA_REGISTRY_NAME="Buf Registry" \
-e SCHEMA_REGISTRY_URL="https://demo.buf.dev/integrations/confluent/bufstream-demo" \
--env-file=$KPOW_LICENSE_FILE \
factorhouse/kpow-ce:latest

Once the containers are running, navigate to http://localhost:3000 to access the Kpow UI. We will see that Kpow has automatically discovered and connected to the single Bufstream broker and Buf's public demo registry.

Schema Overview
In the Schema menu, we can see that there are four subjects in Buf's demo repository. All of them use the Protobuf format, and for this guide, we will focus on the workflow-dataset-nyc-taxis-value subject.

Clicking the menu next to a subject reveals options to view or edit the subject. By selecting View subject, we can examine the subject's schema definition.
The schema uses modern proto3 syntax and defines a detailed structure for a single New York City taxi ride via the main Ride message. It captures a comprehensive range of data, including nested fields for pickup and drop-off events, trip characteristics such as passenger count and distance, and a detailed breakdown of payment components covering base fare, taxes, tolls, and tips.
// Copyright 2023-2025 Buf Technologies, Inc.
//
// All rights reserved.
syntax = "proto3";
package buf.bufstream.priv.dataset.nyctaxi.v1alpha1;
import "buf/confluent/v1/extensions.proto";
import "google/protobuf/timestamp.proto";
// Ride represents a single taxi ride in New York City.
message Ride {
option (buf.confluent.v1.subject) = {
instance_name: "bufstream-demo"
name: "workflow-dataset-nyc-taxis-value"
};
// ID of the vendor that provided the ride.
optional int32 vendor_id = 1;
// Details about the rider's pickup.
Event pickup = 2;
// Details about the rider's drop-off.
Event drop_off = 3;
// Other details of the trip.
TripDetails details = 4;
// Details of the fare and payment.
Payment payment_info = 5;
// Unclear what this is; it's in the Parquet schema though...
optional string store_and_forward = 6;
// Event includes properties for an event: time and location.
message Event {
// The time of the event.
google.protobuf.Timestamp time = 1;
// The location of the event.
optional int32 location_id = 2;
}
// TripDetails describes properties of the trip.
message TripDetails {
// The number of passengers riding.
optional int64 passenger_count = 1;
// The number of miles travelled.
optional double distance = 2;
// The rate code for this trip.
optional int32 rate_code_id = 3;
}
// Payment describes details about the payment and the amount charged.
message Payment {
// The type of instrument used for payment.
optional int64 type = 1;
// The total amount paid.
optional double total_amount = 2;
// Details of the amount charged.
FareDetails details = 3;
}
// FareDetails breaks down all of the components of the amount charged.
message FareDetails {
// The fare, based on distance and time.
optional double base_fare = 1;
// Any extra fee agreed upon before the ride began.
optional double extra = 2;
// MTA taxes.
optional double mta_tax = 3;
// Optional tip.
optional double tip = 4;
// Fees for using toll bridges and roads.
optional double tolls = 5;
// Surcharge that vendor charges.
optional double improvement_surcharge = 6;
// Surcharge based on time and location/traffic conditions.
optional double congestion_surcharge = 7;
// Surcharge for trips to and from an airport.
optional double airport_fee = 8;
}
}

Below is an example JSON record that conforms to the schema:
{
"vendorId": 2,
"pickup": {
"time": "2023-01-01T14:25:15Z",
"locationId": 138
},
"dropOff": {
"time": "2023-01-01T14:40:30Z",
"locationId": 236
},
"details": {
"passengerCount": 1,
"distance": 3.45,
"rateCodeId": 1
},
"paymentInfo": {
"type": 1,
"totalAmount": 24.8,
"details": {
"baseFare": 17.5,
"extra": 1.25,
"mtaTax": 0.5,
"tip": 3.55,
"tolls": 0,
"improvementSurcharge": 0.3,
"congestionSurcharge": 2.5,
"airportFee": 0
}
},
"storeAndForward": "N"
}

Working with Protobuf Data
With an understanding of the schema, let's use Kpow to interact with it by producing and consuming a Protobuf message. First, create a new topic named workflow-dataset-nyc-taxis from the Kpow UI.

Now, to produce a record to the workflow-dataset-nyc-taxis topic:
- Go to the Data menu, select the topic, and open the Produce tab.
- Set the Key Serializer to String.
- Set the Value Serializer to Protobuf.
- Choose Buf Registry as the Schema Registry.
- Select the workflow-dataset-nyc-taxis-value subject.
- Enter your key and a valid JSON value for the message, then click Produce.

To view the result, navigate to the Data tab and select the workflow-dataset-nyc-taxis topic. In the deserializer options, choose String as the Key deserializer and Protobuf as the Value deserializer, then select Buf Registry. Kpow will automatically fetch the correct schema version, deserialize the Protobuf message, and display the data as readable JSON.

Shutdown Environment
The Docker containers and network can be removed using the commands below.
docker rm -f bufstream kpow
docker network rm factorhouse

Conclusion
In this guide, we demonstrated how to integrate Bufstream with Kpow. We launched the services using Docker, connected Kpow to Buf's public Schema Registry, and explored a real-world Protobuf schema. Most importantly, we demonstrated how Kpow's UI simplifies the process of producing and consuming schema-validated Protobuf messages, providing immediate feedback and making data inspection effortless. This setup provides a powerful and streamlined workflow, significantly improving the development experience for any schema-driven application on Kafka.
.webp)
Integrate Kpow with the Redpanda Streaming Platform
A comprehensive, step-by-step guide to integrating Kpow with the Redpanda platform, covering the complete configuration needed to monitor the Kafka-compatible broker and manage schemas through its built-in schema registry.
Overview
Redpanda offers a simple, powerful, and Kafka®-compatible streaming data platform. Kpow provides a rich, developer-focused UI to manage and monitor it. Together, they form a robust stack for building and operating real-time data pipelines.
This guide will walk you through the process of setting up Kpow with a local Redpanda cluster using Docker. We will cover launching the environment, using Kpow to create and manage an Avro schema in Redpanda's built-in registry, producing schema-governed data to a topic, and finally, inspecting that data in a human-readable format.
💡 Kpow supports Confluent-compatible schema registries out-of-the-box, which is why it works seamlessly with Redpanda. Learn how to connect other compatible registries by reading our guide on How to Integrate Confluent-compatible Registries with Kpow.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
This tutorial uses the Community Edition of Kpow, where the default user has all the necessary permissions for the tasks we will perform.
For users of Kpow Enterprise Edition with user authorization enabled, performing these actions would require the logged-in user to have the SCHEMA_CREATE permission for Role-Based Access Control (RBAC) or have ALLOW_SCHEMA_CREATE=true set for Simple Access Control. You can learn more in the Kpow User Authorization documentation.
Launch Redpanda and Kpow
We begin by creating a dedicated Docker network named factorhouse, which establishes a private communication channel for the Redpanda and Kpow containers. We then launch a single Redpanda broker and Kpow instance in dedicated containers.
Here's an overview of the containers:
- Redpanda (redpanda)
  - Image: redpandadata/redpanda:latest
  - Host Ports:
    - 19092: Exposes the Kafka API to the host machine.
    - 18081: Exposes the Schema Registry API to the host machine.
  - Configuration:
    - Mode: Runs in dev-container mode, optimized for a single-node development environment.
    - Kafka API: Listens for connections internally on port 9092 and externally on 19092. It advertises itself as redpanda:9092 within the Docker network and localhost:19092 to the host.
    - Schema Registry: The built-in schema registry is enabled, listening internally on 8081 and externally on 18081.
  - Network: Attached to the factorhouse network, making it reachable by other containers at the hostname redpanda.
- Kpow (kpow)
  - Image: factorhouse/kpow-ce:latest (use factorhouse/kpow:latest for the enterprise edition)
  - Host Port: 3000 (for accessing the Kpow web UI from a browser at http://localhost:3000).
  - Configuration:
    - ENVIRONMENT_NAME: UI-friendly label for this cluster and its resources.
    - BOOTSTRAP: Configured to connect to the Redpanda cluster at redpanda:9092, using the internal Docker network for communication.
    - SCHEMA_REGISTRY_NAME: UI-friendly label for a schema registry.
    - SCHEMA_REGISTRY_URL: Configured to connect to Redpanda's integrated schema registry at http://redpanda:8081.
    - Licensing: The configuration is loaded from an environment file specified by the $KPOW_LICENSE_FILE shell variable, which is required to run the container.
  - Network: Attached to the factorhouse network, allowing it to resolve and connect to the redpanda container.
## Create a docker network to be shared
docker network create factorhouse
## Start a Redpanda broker and schema registry
docker run -d -p 19092:19092 -p 18081:18081 --name redpanda --hostname redpanda --network factorhouse \
redpandadata/redpanda:latest redpanda start \
--kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092 \
--advertise-kafka-addr internal://redpanda:9092,external://localhost:19092 \
--schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:18081 \
--rpc-addr redpanda:33145 \
--advertise-rpc-addr redpanda:33145 \
--mode dev-container
## Start a Kpow instance
docker run -d -p 3000:3000 --name kpow --network factorhouse \
-e ENVIRONMENT_NAME="Local Redpanda Cluster" \
-e BOOTSTRAP="redpanda:9092" \
-e SCHEMA_REGISTRY_NAME="Local Redpanda Registry" \
-e SCHEMA_REGISTRY_URL="http://redpanda:8081" \
--env-file=$KPOW_LICENSE_FILE \
factorhouse/kpow-ce:latest

Once the containers are running, navigate to http://localhost:3000 to access the Kpow UI. We can see that Kpow has automatically discovered and connected to the single Redpanda broker and its schema registry.

Schema Management
With our environment running, let's use Kpow to create a new schema subject in the Redpanda schema registry.
- In the Schema menu, click Create subject.
- Since we only have one registry configured, the Local Redpanda Registry is selected by default.
- Enter a subject name (e.g., demo-redpanda-value), choose AVRO as the type, and provide a schema definition. Click Create. (An equivalent REST API call is sketched after the screenshot below.)

The Redpanda schema registry persists its schemas in an internal Kafka topic named _schemas. We can verify that our schema was created by inspecting the records of this topic directly within Kpow's Data tab.

Working with Avro Data
Finally, we'll produce and inspect an Avro record that is validated against the schema we just created.
First, create a new topic named demo-redpanda from the Kpow UI.

Now, to produce a record to the demo-redpanda topic:
- Go to the Data menu, select the topic, and open the Produce tab.
- Select String as the Key Serializer.
- Set the Value Serializer to AVRO.
- Choose Local Redpanda Registry as the Schema Registry.
- Select the demo-redpanda-value subject.
- Enter key/value data and click Produce.

To see the result, navigate back to the Data tab and select the demo-redpanda topic. In the deserializer options, choose String as the Key deserializer and AVRO as the Value deserializer, then select Local Redpanda Registry. Kpow automatically fetches the correct schema version, deserializes the binary Avro message, and presents the data as easy-to-read JSON.

Shutdown Environment
The Docker containers and network can be removed using the commands below.
docker rm -f redpanda kpow
docker network rm factorhouse

Conclusion
In just a few minutes, we successfully launched a complete local streaming environment using Redpanda and Kpow. We demonstrated how this powerful combination simplifies the developer workflow by using Kpow's intuitive UI to manage schemas, produce Avro-encoded messages, and inspect topic data without writing any custom code. This setup proves how Redpanda's Kafka compatibility and integrated features, paired with Kpow's comprehensive toolkit, provide an efficient and accessible platform for developers building event-driven applications.
.webp)
Integrate Confluent-compatible schema registries with Kpow
This guide demonstrates how to address the operational complexity of managing multiple Kafka schema registries. We integrate Confluent-compatible registries—Confluent Schema Registry, Apicurio Registry, and Karapace—and manage them all through a single pane of glass using Kpow.
Overview
In modern data architectures built on Apache Kafka, a Schema Registry is an essential component for enforcing data contracts and supporting strong data governance. While the Confluent Schema Registry set the original standard, the ecosystem has expanded to include powerful Confluent-compatible alternatives such as Red Hat’s Apicurio Registry and Aiven’s Karapace.
Whether driven by a gradual migration, the need to support autonomous teams, or simply technology evaluation, many organizations find themselves running multiple schema registries in parallel. This inevitably leads to operational complexity and a fragmented view of their data governance.
This guide demonstrates how Kpow directly solves this challenge. We will integrate these popular schema registries into a single Kafka environment and show how to manage them all seamlessly through Kpow's single, unified interface.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
To create subjects in Kpow, the logged-in user must have the necessary permissions. If Role-Based Access Control (RBAC) is enabled, this requires the SCHEMA_CREATE action. For Simple Access Control, set ALLOW_SCHEMA_CREATE=true. For details, see the Kpow User Authorization docs.
Launch Kafka Environment
To accelerate the setup, we will use the Factor House Local repository, which provides a solid foundation with pre-built configurations for authentication and authorization.
First, clone the repository:
git clone https://github.com/factorhouse/factorhouse-local

Next, navigate into the project root and create a Docker Compose file named compose-kpow-multi-registries.yml. This file defines our entire stack: a 3-broker Kafka cluster, our three schema registries, and Kpow.
services:
  schema:
    image: confluentinc/cp-schema-registry:7.8.0
    container_name: schema_registry
    ports:
      - "8081:8081"
    networks:
      - factorhouse
    depends_on:
      - zookeeper
      - kafka-1
      - kafka-2
      - kafka-3
    environment:
      SCHEMA_REGISTRY_HOST_NAME: "schema"
      SCHEMA_REGISTRY_LISTENERS: http://schema:8081,http://${DOCKER_HOST_IP:-127.0.0.1}:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "kafka-1:19092,kafka-2:19093,kafka-3:19094"
      SCHEMA_REGISTRY_AUTHENTICATION_METHOD: BASIC
      SCHEMA_REGISTRY_AUTHENTICATION_REALM: schema
      SCHEMA_REGISTRY_AUTHENTICATION_ROLES: schema-admin
      SCHEMA_REGISTRY_OPTS: -Djava.security.auth.login.config=/etc/schema/schema_jaas.conf
    volumes:
      - ./resources/kpow/schema:/etc/schema
  apicurio:
    image: apicurio/apicurio-registry:3.0.9
    container_name: apicurio
    ports:
      - "8080:8080"
    networks:
      - factorhouse
    environment:
      APICURIO_KAFKASQL_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
      APICURIO_STORAGE_KIND: kafkasql
      APICURIO_AUTH_ENABLED: "true"
      APICURIO_AUTH_ROLE_BASED_AUTHORIZATION: "true"
      APICURIO_AUTH_STATIC_USERS: "admin=admin" # Format: user1=pass1,user2=pass2
      APICURIO_AUTH_STATIC_ROLES: "admin:sr-admin" # Format: user:role,user2:role2
  karapace:
    image: ghcr.io/aiven-open/karapace:develop
    container_name: karapace
    entrypoint:
      - python3
      - -m
      - karapace
    ports:
      - "8082:8081"
    networks:
      - factorhouse
    depends_on:
      - zookeeper
      - kafka-1
      - kafka-2
      - kafka-3
    environment:
      KARAPACE_KARAPACE_REGISTRY: true
      KARAPACE_ADVERTISED_HOSTNAME: karapace
      KARAPACE_ADVERTISED_PROTOCOL: http
      KARAPACE_BOOTSTRAP_URI: kafka-1:19092,kafka-2:19093,kafka-3:19094
      KARAPACE_PORT: 8081
      KARAPACE_HOST: 0.0.0.0
      KARAPACE_CLIENT_ID: karapace-0
      KARAPACE_GROUP_ID: karapace
      KARAPACE_MASTER_ELECTION_STRATEGY: highest
      KARAPACE_MASTER_ELIGIBILITY: true
      KARAPACE_TOPIC_NAME: _karapace
      KARAPACE_COMPATIBILITY: "BACKWARD"
  kpow:
    image: factorhouse/kpow:latest
    container_name: kpow-ee
    pull_policy: always
    restart: always
    ports:
      - "3000:3000"
    networks:
      - factorhouse
    depends_on:
      - schema
      - apicurio
      - karapace
    env_file:
      - resources/kpow/config/multi-registry.env
      - ${KPOW_TRIAL_LICENSE:-resources/kpow/config/trial-license.env}
    mem_limit: 2G
    volumes:
      - ./resources/kpow/jaas:/etc/kpow/jaas
      - ./resources/kpow/rbac:/etc/kpow/rbac
  zookeeper:
    image: confluentinc/cp-zookeeper:7.8.0
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      - factorhouse
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka-1:
    image: confluentinc/cp-kafka:7.8.0
    container_name: kafka-1
    ports:
      - "9092:9092"
    networks:
      - factorhouse
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka-1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
      KAFKA_NUM_PARTITIONS: "3"
      KAFKA_DEFAULT_REPLICATION_FACTOR: "3"
    depends_on:
      - zookeeper
  kafka-2:
    image: confluentinc/cp-kafka:7.8.0
    container_name: kafka-2
    ports:
      - "9093:9093"
    networks:
      - factorhouse
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka-2:19093,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
      KAFKA_NUM_PARTITIONS: "3"
      KAFKA_DEFAULT_REPLICATION_FACTOR: "3"
    depends_on:
      - zookeeper
  kafka-3:
    image: confluentinc/cp-kafka:7.8.0
    container_name: kafka-3
    ports:
      - "9094:9094"
    networks:
      - factorhouse
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka-3:19094,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
      KAFKA_NUM_PARTITIONS: "3"
      KAFKA_DEFAULT_REPLICATION_FACTOR: "3"
    depends_on:
      - zookeeper
networks:
  factorhouse:
    name: factorhouse

Here's an overview of the three schema registries and Kpow:
- Confluent Schema Registry (schema)
  - Image: confluentinc/cp-schema-registry:7.8.0
  - Storage: Uses the connected Kafka cluster for durable storage, persisting schemas in an internal topic (named _schemas by default).
  - Security: Secured with BASIC HTTP authentication. Access requires a valid username and password, which are defined in the schema_jaas.conf file mounted via the volumes directive.
- Apicurio Registry (apicurio)
  - Image: apicurio/apicurio-registry:3.0.9
  - Storage: Configured to use the kafkasql storage backend; schemas are stored in a Kafka topic (kafkasql-journal).
  - Security: Authentication is enabled and managed directly through environment variables. This setup creates a static user (admin with password admin) and grants it administrative privileges.
  - API Endpoint: To align with the Kafka environment, we'll use /apis/ccompat/v7 as the Confluent Compatibility API endpoint.
- Karapace Registry (karapace)
  - Image: ghcr.io/aiven-open/karapace:develop
  - Storage: Like the others, it uses a Kafka topic (_karapace) to store its schema data.
  - Security: For simplicity, authentication is not configured, leaving the API openly accessible on the network. However, the logged-in Kpow user must still have the appropriate permissions to manage schema resources, highlighting one of the key access control benefits Kpow offers in enterprise environments.
- Kpow (kpow)
  - Image: factorhouse/kpow:latest
  - Host Port: 3000
  - Configuration:
    - env_file: Its primary configuration is loaded from external files. The multi-registry.env file is crucial, as it contains the connection details for the Kafka cluster and all three schema registries.
    - Licensing: The configuration also loads a license file. It uses a local trial-license.env by default, but this can be overridden by setting the KPOW_TRIAL_LICENSE environment variable to a different file path.
  - Volumes:
    - ./resources/kpow/jaas: Mounts the authentication (JAAS) configuration into Kpow.
    - ./resources/kpow/rbac: Mounts the Role-Based Access Control (RBAC) file.
We also need to create the Kpow configuration file (resources/kpow/config/multi-registry.env). The environment variables in this file configure Kpow's own user security, the connection to the Kafka cluster, and the integration with all three schema registries.
## AuthN + AuthZ
JAVA_TOOL_OPTIONS="-Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf"
AUTH_PROVIDER_TYPE=jetty
RBAC_CONFIGURATION_FILE=/etc/kpow/rbac/hash-rbac.yml
## Kafka environments
ENVIRONMENT_NAME=Multi-registry Integration
BOOTSTRAP=kafka-1:19092,kafka-2:19093,kafka-3:19094
SCHEMA_REGISTRY_RESOURCE_IDS=CONFLUENT,APICURIO,KARAPACE
CONFLUENT_SCHEMA_REGISTRY_URL=http://schema:8081
CONFLUENT_SCHEMA_REGISTRY_AUTH=USER_INFO
CONFLUENT_SCHEMA_REGISTRY_USER=admin
CONFLUENT_SCHEMA_REGISTRY_PASSWORD=admin
APICURIO_SCHEMA_REGISTRY_URL=http://apicurio:8080/apis/ccompat/v7
APICURIO_SCHEMA_REGISTRY_AUTH=USER_INFO
APICURIO_SCHEMA_REGISTRY_USER=admin
APICURIO_SCHEMA_REGISTRY_PASSWORD=admin
KARAPACE_SCHEMA_REGISTRY_URL=http://karapace:8081

We can start all services in the background using the Docker Compose file:
docker compose -f ./compose-kpow-multi-registries.yml up -d

Once the containers are running, navigate to http://localhost:3000 to access the Kpow UI (admin as both username and password). In the left-hand navigation menu under Schema, you will see all three registries: CONFLUENT, APICURIO, and KARAPACE.

Unified Schema Management
Now, we will create a schema subject in each registry directly from Kpow.
- In the Schema menu, click Create subject.
- Select CONFLUENT from the Registry dropdown.
- Enter a subject name (e.g., demo-confluent-value), choose AVRO as the type, and provide a schema definition. Click Create.
Subject: demo-confluent-value

Following the same pattern, create subjects for the other two registries:
- Apicurio: Select APICURIO and create the demo-apicurio-value subject.
- Karapace: Select KARAPACE and create the demo-karapace-value subject.
Subject: demo-apicurio-value

Subject: demo-karapace-value

Each registry persists its schemas in an internal Kafka topic. We can verify this in Kpow's Data tab by inspecting the contents of their respective storage topics:
- CONFLUENT: _schemas
- APICURIO: kafkasql-journal (the default topic for its kafkasql storage engine)
- KARAPACE: _karapace

Produce and Inspect Records Across All Registries
Finally, we'll produce and inspect Avro records, leveraging the schemas from each registry.
First, create the topics demo-confluent, demo-apicurio, and demo-karapace in Kpow.

To produce a record for the demo-confluent topic:
- Go to the Data menu, select the topic, and open the Produce tab.
- Select String as the Key Serializer.
- Set the Value Serializer to AVRO.
- Choose CONFLUENT as the Schema Registry.
- Select the demo-confluent-value subject.
- Enter key/value data and click Produce.
Topic: demo-confluent

Repeat this for the other topics, making sure to select the corresponding registry and subject for demo-apicurio and demo-karapace.
Topic: demo-apicurio

Topic: demo-karapace

To inspect the records, navigate back to the Data tab for each topic. Select the correct Schema Registry in the deserializer options. Kpow will automatically fetch the correct schema, deserialize the binary Avro data, and present it as human-readable JSON.
Topic: demo-confluent

Topic: demo-apicurio

Topic: demo-karapace

Conclusion
This guide has demonstrated that managing a heterogeneous, multi-registry Kafka environment does not have to be a fragmented or complex task. By leveraging the Confluent-compatible APIs of Apicurio and Karapace, we can successfully integrate them alongside the standard Confluent Schema Registry.
With Kpow providing a single pane of glass, we gain centralized control and visibility over all our schema resources. This unified approach simplifies critical operations like schema management, data production, and inspection, empowering teams to use the best tool for their needs without sacrificing governance or operational efficiency.
Events & Webinars
Stay plugged in with the Factor House team and our community.

[MELBOURNE, AUS] Apache Kafka and Apache Flink Meetup, 27 November
Melbourne, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.

[SYDNEY, AUS] Apache Kafka and Apache Flink Meetup, 26 November
Sydney, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
Join the Factor Community
We’re building more than products, we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.