
Deploy Kpow on EKS via AWS Marketplace using Helm
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
Overview
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
To follow along with the guide, you need:
- CLI Tools: the aws CLI, eksctl, kubectl, and helm installed and configured.
- AWS Infrastructure:
- VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
- IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
- Kpow Subscription:
- A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
- The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
- Kpow Annual product:
- Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
- Kpow Hourly product:
- For the hourly product, access to the ECR image is provided, and deployment uses the public Factor House Helm repository for installation.
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-*>, and <PUBLIC-SUBNET-ID-*> with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in the us-east-1 region. If you intend to use a different region, you must update the metadata.region field and ensure the availability zone keys under vpc.subnets (e.g., us-east-1a, us-east-1b) match the availability zones of the subnets in your chosen region.
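If you are unsure which availability zone each of your subnets belongs to, the AWS CLI can report it. The subnet IDs below are placeholders; substitute your own:

```shell
# Show the availability zone for each candidate subnet
# (replace the IDs with your own subnet IDs).
aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --query 'Subnets[].{Id:SubnetId,AZ:AvailabilityZone}' \
  --output table
```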
Here is the content of the cluster.eksctl.yaml file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fh-eks-cluster
  region: us-east-1
vpc:
  id: "<VPC-ID>"
  subnets:
    private:
      us-east-1a:
        id: "<PRIVATE-SUBNET-ID-1>"
      us-east-1b:
        id: "<PRIVATE-SUBNET-ID-2>"
    public:
      us-east-1a:
        id: "<PUBLIC-SUBNET-ID-1>"
      us-east-1b:
        id: "<PUBLIC-SUBNET-ID-2>"
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: kpow-annual
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy"
    - metadata:
        name: kpow-hourly
        namespace: factorhouse
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage"
nodeGroups:
  - name: ng-dev
    instanceType: t3.medium
    desiredCapacity: 4
    minSize: 2
    maxSize: 6
    privateNetworking: true

This configuration sets up the following:
- Cluster Metadata: A cluster named fh-eks-cluster in the us-east-1 region.
- VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
- IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
- Service Accounts:
- kpow-annual: Creates a service account for the Kpow Annual product. It attaches the AWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.
- kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches the AWSMarketplaceMeteringRegisterUsage policy, which is required for reporting usage metrics to the AWS Marketplace.
- Node Group: Defines a managed node group named ng-dev with t3.medium instances. The worker nodes will be placed in the private subnets (privateNetworking: true).
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yaml

Once the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
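If your kubeconfig ever gets out of sync (for example, when working from a different machine), you can regenerate the entry with the AWS CLI. The cluster name and region below match the eksctl configuration above:

```shell
# Recreate the kubeconfig entry for the cluster created by eksctl.
aws eks update-kubeconfig --name fh-eks-cluster --region us-east-1
```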
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...

Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml

Now, apply the manifest to install the Strimzi operator in your EKS cluster.
kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafka

Deploy a Kafka cluster
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: fh-k8s-cluster
spec:
  kafka:
    version: 3.9.1
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
# ... (content truncated for brevity)

Deploy the Kafka cluster with the following command:
kubectl create -f manifests/kafka/kafka-cluster.yaml -n kafka

Verify the deployment
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o name

The output should look similar to this, showing the pods for Strimzi, Kafka, Zookeeper, and the associated services. The most important service for connecting applications is the Kafka bootstrap service.
# pod/fh-k8s-cluster-entity-operator-...
# pod/fh-k8s-cluster-kafka-0
# ...
# service/fh-k8s-cluster-kafka-bootstrap <-- Kafka bootstrap service
# ...

Deploy Kpow
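Before deploying Kpow, you can optionally smoke-test connectivity to the bootstrap service from inside the cluster with a throwaway client pod. This is a sketch: the Strimzi image tag shown is an assumption and should match your Strimzi and Kafka versions.

```shell
# Run a one-off pod that lists topics via the bootstrap service, then exits.
# The image tag is an assumption; align it with your Strimzi/Kafka versions.
kubectl -n kafka run kafka-client --rm -it --restart=Never \
  --image=quay.io/strimzi/kafka:0.45.1-kafka-3.9.1 -- \
  bin/kafka-topics.sh \
  --bootstrap-server fh-k8s-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092 \
  --list
```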
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
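You can confirm that the namespace and the two service accounts provisioned by eksctl are in place:

```shell
# Both kpow-annual and kpow-hourly should be listed alongside 'default'.
kubectl get serviceaccounts -n factorhouse
```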
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
- kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.
- kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
manifests/kpow/config-files.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config-files
  namespace: factorhouse
data:
  hash-rbac.yml: |
    # RBAC policies defining user roles and permissions
    admin_roles:
      - "kafka-admins"
    # ... (content truncated for brevity)
  hash-jaas.conf: |
    # JAAS login module configuration
    kpow {
      org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
      file="/etc/kpow/jaas/hash-realm.properties";
    };
    # ... (content truncated for brevity)
  hash-realm.properties: |
    # User credentials (username: password, roles)
    # admin/admin
    admin: CRYPT:adpexzg3FUZAk,server-administrators,content-administrators,kafka-admins
    # user/password
    user: password,kafka-users
    # ... (content truncated for brevity)

manifests/kpow/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kpow-config
  namespace: factorhouse
data:
  # Environment Configuration
  BOOTSTRAP: "fh-k8s-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
  REPLICATION_FACTOR: "1"
  # AuthN + AuthZ
  JAVA_TOOL_OPTIONS: "-Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf"
  AUTH_PROVIDER_TYPE: "jetty"
  RBAC_CONFIGURATION_FILE: "/etc/kpow/rbac/hash-rbac.yml"

Apply these manifests to create the ConfigMaps in the factorhouse namespace.
kubectl apply -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouse

You can verify their creation by running:
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...

Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com

Next, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..

Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
helm install kpow-annual ./awsmp-chart/kpow-aws-annual/ \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-annual \
--values ./values/eks-annual.yaml

The Helm values for this deployment are in values/eks-annual.yaml. It mounts the configuration files from our ConfigMaps and sets resource limits.
# values/eks-annual.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Annual"
envFromConfigMap: "kpow-config"
volumeMounts:
  - name: kpow-config-volumes
    mountPath: /etc/kpow/rbac/hash-rbac.yml
    subPath: hash-rbac.yml
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-jaas.conf
    subPath: hash-jaas.conf
  - name: kpow-config-volumes
    mountPath: /etc/kpow/jaas/hash-realm.properties
    subPath: hash-realm.properties
volumes:
  - name: kpow-config-volumes
    configMap:
      name: "kpow-config-files"
resources:
  limits:
    cpu: 1
    memory: 0.5Gi
  requests:
    cpu: 1
    memory: 0.5Gi

Note: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
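Because the chart is installed with serviceAccount.create=false, license validation depends on the pre-created kpow-annual service account carrying the IAM role annotation that eksctl added. You can confirm the annotation is present (the printed role ARN is account-specific):

```shell
# The IRSA annotation links the service account to its IAM role.
kubectl get serviceaccount kpow-annual -n factorhouse \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```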
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...To access the UI, forward the service port to your local machine.
kubectl -n factorhouse port-forward service/kpow-annual-kpow-aws-annual 3000:3000

You can now access Kpow by navigating to http://localhost:3000 in your browser.
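With the port-forward running in a separate terminal, you can also confirm the endpoint responds before opening a browser; since Kpow may redirect unauthenticated requests, any 2xx or 3xx status code indicates the UI is being served:

```shell
# Probe the forwarded Kpow port; prints only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
```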

Deploy Kpow Hourly
Configure the Kpow Helm repository
The Helm chart for Kpow Hourly is available in the Factor House Helm repository. First, add the Helm repository.
helm repo add factorhouse https://charts.factorhouse.io

Next, update Helm repositories to ensure you install the latest version of Kpow.
helm repo update

Launch Kpow Hourly
Install Kpow using Helm, referencing the kpow-hourly service account which has the IAM policy for marketplace metering.
helm install kpow-hourly factorhouse/kpow-aws-hourly \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-hourly \
--values ./values/eks-hourly.yaml

The Helm values are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
# ... (volume configuration is the same as annual)
volumes:
# ...
resources:
# ...

Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
kubectl -n factorhouse port-forward service/kpow-hourly-kpow-aws-hourly 3001:3000

You can now access Kpow by navigating to http://localhost:3001 in your browser.

Delete resources
To avoid ongoing AWS charges, clean up all created resources in reverse order.
Delete Kpow and ConfigMaps
helm uninstall kpow-annual kpow-hourly -n factorhouse
kubectl delete -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouse

Delete the Kafka cluster and Strimzi operator
STRIMZI_VERSION="0.45.1"
kubectl delete -f manifests/kafka/kafka-cluster.yaml -n kafka
kubectl delete -f manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml -n kafka

Delete the EKS cluster
This command will remove the cluster and all associated resources.
eksctl delete cluster -f manifests/eks/cluster.eksctl.yaml

Conclusion
In this guide, we have deployed a complete working environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned an EKS cluster with correctly configured IAM roles for service accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating how Kubernetes operators simplify running complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.
Highlights
Heading 1
Heading 2
Heading 3
Heading 4
Heading 5
Heading 6
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Block quote
Ordered list
- Item 1
- Item 2
- Item 3
Unordered list
- Item A
- Item B
- Item C
Bold text
Emphasis
Superscript
Subscript
.webp)
Deploy Kpow on EKS via AWS Marketplace using Helm
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
Overview
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
Explore our live multi-cluster demo environment or grab a free Community license and dive into streaming tech on your laptop with Factor House Local.

Prerequisites
To follow along the guide, you need:
- CLI Tools:
- AWS Infrastructure:
- VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
- IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
- Kpow Subscription:
- A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
- The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
- Kpow Annual product:
- Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
- Kpow Hourly product:
- For the hourly product, access to the ECR image will be provided and deployment utilizes the public Factor House Helm repository for installation.
- Kpow Annual product:
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-* >, and <PUBLIC-SUBNET-ID-* > with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in theus-east-1region. If you intend to use a different region, you must update themetadata.regionfield and ensure the availability zone keys undervpc.subnets(e.g.,us-east-1a,us-east-1b) match the availability zones of the subnets in your chosen region.
Here is the content of the cluster.eksctl.yaml file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: fh-eks-cluster
region: us-east-1
vpc:
id: "<VPC-ID>"
subnets:
private:
us-east-1a:
id: "<PRIVATE-SUBNET-ID-1>"
us-east-1b:
id: "<PRIVATE-SUBNET-ID-2>"
public:
us-east-1a:
id: "<PUBLIC-SUBNET-ID-1>"
us-east-1b:
id: "<PUBLIC-SUBNET-ID-2>"
iam:
withOIDC: true
serviceAccounts:
- metadata:
name: kpow-annual
namespace: factorhouse
attachPolicyARNs:
- "arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy"
- metadata:
name: kpow-hourly
namespace: factorhouse
attachPolicyARNs:
- "arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage"
nodeGroups:
- name: ng-dev
instanceType: t3.medium
desiredCapacity: 4
minSize: 2
maxSize: 6
privateNetworking: trueThis configuration sets up the following:
- Cluster Metadata: A cluster named
fh-eks-clusterin theus-east-1region. - VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
- IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
- Service Accounts:
kpow-annual: Creates a service account for the Kpow Annual product. It attaches theAWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches theAWSMarketplaceMeteringRegisterUsagepolicy, which is required for reporting usage metrics to the AWS Marketplace.
- Node Group: Defines a managed node group named
ng-devwitht3.mediuminstances. The worker nodes will be placed in the private subnets (privateNetworking: true).
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yamlOnce the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yamlNow, apply the manifest to install the Strimzi operator in your EKS cluster.
kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafkaDeploy a Kafka cluster
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: fh-k8s-cluster
spec:
kafka:
version: 3.9.1
replicas: 1
listeners:
- name: plain
port: 9092
type: internal
tls: false
# ... (content truncated for brevity)Deploy the Kafka cluster with the following command:
kubectl create -f manifests/kafka/kafka-cluster.yaml -n kafkaVerify the deployment
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o nameThe output should look similar to this, showing the pods for Strimzi, Kafka, Zookeeper, and the associated services. The most important service for connecting applications is the Kafka bootstrap service.
# pod/fh-k8s-cluster-entity-operator-...
# pod/fh-k8s-cluster-kafka-0
# ...
# service/fh-k8s-cluster-kafka-bootstrap <-- Kafka bootstrap service
# ...Deploy Kpow
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
manifests/kpow/config-files.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kpow-config-files
namespace: factorhouse
data:
hash-rbac.yml: |
# RBAC policies defining user roles and permissions
admin_roles:
- "kafka-admins"
# ... (content truncated for brevity)
hash-jaas.conf: |
# JAAS login module configuration
kpow {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
file="/etc/kpow/jaas/hash-realm.properties";
};
# ... (content truncated for brevity)
hash-realm.properties: |
# User credentials (username: password, roles)
# admin/admin
admin: CRYPT:adpexzg3FUZAk,server-administrators,content-administrators,kafka-admins
# user/password
user: password,kafka-users
# ... (content truncated for brevity)manifests/kpow/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kpow-config
namespace: factorhouse
data:
# Environment Configuration
BOOTSTRAP: "fh-k8s-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
REPLICATION_FACTOR: "1"
# AuthN + AuthZ
JAVA_TOOL_OPTIONS: "-Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kpow/jaas/hash-jaas.conf"
AUTH_PROVIDER_TYPE: "jetty"
RBAC_CONFIGURATION_FILE: "/etc/kpow/rbac/hash-rbac.yml"Apply these manifests to create the ConfigMaps in the factorhouse namespace.
kubectl apply -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouseYou can verify their creation by running:
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.comNext, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
helm install kpow-annual ./awsmp-chart/kpow-aws-annual/ \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-annual \
--values ./values/eks-annual.yamlThe Helm values for this deployment are in values/eks-annual.yaml. It mounts the configuration files from our ConfigMaps and sets resource limits.
# values/eks-annual.yaml
env:
ENVIRONMENT_NAME: "Kafka from Kpow Annual"
envFromConfigMap: "kpow-config"
volumeMounts:
- name: kpow-config-volumes
mountPath: /etc/kpow/rbac/hash-rbac.yml
subPath: hash-rbac.yml
- name: kpow-config-volumes
mountPath: /etc/kpow/jaas/hash-jaas.conf
subPath: hash-jaas.conf
- name: kpow-config-volumes
mountPath: /etc/kpow/jaas/hash-realm.properties
subPath: hash-realm.properties
volumes:
- name: kpow-config-volumes
configMap:
name: "kpow-config-files"
resources:
limits:
cpu: 1
memory: 0.5Gi
requests:
cpu: 1
memory: 0.5GiNote: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...To access the UI, forward the service port to your local machine.
kubectl -n factorhouse port-forward service/kpow-annual-kpow-aws-annual 3000:3000You can now access Kpow by navigating to http://localhost:3000 in your browser.

Deploy Kpow Hourly
Configure the Kpow Helm repository
The Helm chart for Kpow Hourly is available in the Factor House Helm repository. First, add the Helm repository.
helm repo add factorhouse https://charts.factorhouse.ioNext, update Helm repositories to ensure you install the latest version of Kpow.
helm repo updateLaunch Kpow Hourly
Install Kpow using Helm, referencing the kpow-hourly service account which has the IAM policy for marketplace metering.
helm install kpow-hourly factorhouse/kpow-aws-hourly \
-n factorhouse \
--set serviceAccount.create=false \
--set serviceAccount.name=kpow-hourly \
--values ./values/eks-hourly.yamlThe Helm values are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
# ... (volume configuration is the same as annual)
volumes:
# ...
resources:
# ...Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...
To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
kubectl -n factorhouse port-forward service/kpow-hourly-kpow-aws-hourly 3001:3000
You can now access Kpow by navigating to http://localhost:3001 in your browser.

Delete resources
To avoid ongoing AWS charges, clean up all created resources in reverse order.
Delete Kpow and ConfigMaps
helm uninstall kpow-annual kpow-hourly -n factorhouse
kubectl delete -f manifests/kpow/config-files.yaml \
-f manifests/kpow/config.yaml -n factorhouse
Delete the Kafka cluster and Strimzi operator
STRIMZI_VERSION="0.45.1"
kubectl delete -f manifests/kafka/kafka-cluster.yaml -n kafka
kubectl delete -f manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml -n kafka
Delete the EKS cluster
This command will remove the cluster and all associated resources.
eksctl delete cluster -f manifests/eks/cluster.eksctl.yaml
Conclusion
In this guide, we have successfully deployed a complete, production-ready environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned a robust EKS cluster with correctly configured IAM roles for service accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating the power of Kubernetes operators in simplifying complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.

Release 94.6: Factor Platform, Ververica Integration, and kJQ Enhancements
The first Factor Platform release candidate is here, a major milestone toward a unified control plane for real-time data streaming technologies. This release also introduces Ververica Platform integration in Flex, plus support for Kafka Clients 4.1 / Confluent 8.0.0 and new kJQ operators for richer stream inspection.
Factor Platform release candidate: Early access to unified streaming control
For organisations operating streaming at scale, the challenge has never been about any one technology. It's about managing complexity across regions, tools, and teams while maintaining governance, performance, and cost control.
We've spent years building tools that bring clarity to Apache Kafka and Apache Flink. Now, we're taking everything we've learned and building something bigger: Factor Platform, a unified control plane for real-time data infrastructure.
Factor Platform delivers complete visibility and federated control across hundreds of clusters, multiple clouds, and distributed teams from a single interface. Engineers gain deep operational insight into jobs, topics, and lineage. Business and compliance teams benefit from native catalogs, FinOps intelligence, and audit-ready transparency.
The first release candidate is live. It's designed for early adopters exploring large-scale, persistent streaming environments, and it's ready to be shaped by the teams who use it.
Interested in early access? Contact sales@factorhouse.io

Unlocking native Flink management with Ververica Platform
Our collaboration with Ververica (the original creators of Apache Flink) enters a new phase with the introduction of Flex + Ververica Platform integration. This brings Flink’s enterprise management and observability capabilities directly into the Factor House ecosystem.
Flex users can now connect to Ververica Platform (Community or Enterprise v2) and instantly visualize session clusters, job deployments, and runtime performance. The current release provides a snapshot view of Ververica resources at startup, with live synchronization planned for future updates. It's a huge step toward true end-to-end streaming visibility—from data ingestion, to transformation, to delivery.
Configuration is straightforward: point to your Ververica REST API, authenticate via secure token, and your Flink environments appear right alongside your clusters.
This release represents just the beginning of our partnership with Ververica. Together, we’re exploring deeper integrations across the Flink ecosystem, including OpenShift and Amazon Managed Service for Apache Flink, to make enterprise-scale stream processing simpler and more powerful.
Read the full Ververica Platform integration guide →
Advancing Kafka support with Kafka Clients 4.1.0 and Confluent Schema SerDes 8.0.0
We’ve upgraded to Kafka Clients 4.1.0 / Confluent Schema SerDes 8.0.0, aligning Kpow with the latest Kafka ecosystem updates. Teams using custom Protobuf Serdes should review potential compatibility changes.
Data Inspect gets more powerful with kJQ enhancements
Data Inspect in Kpow has been upgraded with improvements to kJQ, our lightweight JSON query language for streaming data. The new release introduces map() and select() functions, expanding the expressive power of kJQ for working with nested and dynamic data. These additions make it possible to iterate over collections, filter elements based on complex conditions, and compose advanced data quality or anomaly detection filters directly in the browser. Users can now extract specific values from arrays, filter deeply nested structures, and chain logic with built-in functions like contains, test, and is-empty.
For example, you can now write queries like:
.value.correctingProperty.names | map(.localeLanguageCode) | contains("pt")
Or filter and validate nested collections:
.value.names | map(select(.languageCode == "pt-Pt")) | is-empty | not
These updates make Data Inspect far more powerful for real-time debugging, validation, and exploratory data analysis. Explore the full range of examples and interactive demos in the kJQ documentation.
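To make the pipeline semantics concrete, here is a plain-Python approximation of what these kJQ queries compute. The helper functions and the record shape are hypothetical illustrations, not Kpow APIs; only the kJQ expressions themselves come from the examples above.

```python
# Plain-Python emulation of the kJQ pipelines above.
# Helper names and the record shape are illustrative, not Kpow APIs.

def kjq_map(items, fn):
    # map(f): apply f to every element of a collection
    return [fn(item) for item in items]

def kjq_select(items, predicate):
    # map(select(p)): keep only the elements where p holds
    return [item for item in items if predicate(item)]

def contains(items, value):
    # contains(v): true when v appears in the collection
    return value in items

def is_empty(items):
    # is-empty: true when the collection is empty
    return len(items) == 0

# A hypothetical record value with nested collections:
value = {
    "correctingProperty": {"names": [{"localeLanguageCode": "pt"},
                                     {"localeLanguageCode": "en"}]},
    "names": [{"languageCode": "pt-Pt"}, {"languageCode": "en-GB"}],
}

# .value.correctingProperty.names | map(.localeLanguageCode) | contains("pt")
codes = kjq_map(value["correctingProperty"]["names"],
                lambda n: n["localeLanguageCode"])
print(contains(codes, "pt"))  # True

# .value.names | map(select(.languageCode == "pt-Pt")) | is-empty | not
pt_names = kjq_select(value["names"], lambda n: n["languageCode"] == "pt-Pt")
print(not is_empty(pt_names))  # True
```

In kJQ these stages run directly in the browser against streaming records; the sketch only mirrors the list-pipeline semantics.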
See map() and select() in action in the kJQ Playground →
Schema Registry performance improvements
We’ve greatly improved Schema Registry performance for large installations. The observation process now cuts down on the number of REST calls each schema observation makes by an order of magnitude. Kpow now defaults to SCHEMA_REGISTRY_OBSERVATION_VERSION=2, meaning all customers automatically benefit from these performance boosts.
Kpow Custom Serdes and Protobuf v4.31.1
This post explains an update in the version of the protobuf libraries used by Kpow, and a possible compatibility impact this update may have on user-defined Custom Serdes.
Kpow Custom Serdes and Protobuf v4.31.1
Note: The potential compatibility issues described in this post only impact users who have implemented Custom Serdes that contain generated protobuf classes.
Resolution: If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.
In the upcoming v94.6 release of Kpow, we're updating all Confluent Serdes dependencies to the latest major version 8.0.1.
In io.confluent/kafka-protobuf-serializer:8.0.1 the protobuf version is advanced from 3.25.5 to 4.31.1, and so the version of protobuf used by Kpow changes.
- Confluent protobuf upgrade PR: https://github.com/confluentinc/schema-registry/pull/3569
- Related Github issue: https://github.com/confluentinc/schema-registry/issues/3047
This is a major upgrade of the underlying protobuf libraries, and there are some breaking changes related to generated code.
Protobuf 3.26.6 introduces a breaking change that fails at runtime (deliberately) if the makeExtensionsImmutable method is called as part of generated protobuf code.
The decision to break at runtime was taken because earlier versions of protobuf were found to be vulnerable to the footmitten CVE.
- Protobuf footmitten CVE and breaking change announcement: https://protobuf.dev/news/2025-01-23/
- Apache protobuf discussion thread: https://lists.apache.org/thread/87osjw051xnx5l5v50dt3t81yfjxygwr
- Comment on a Schema Registry ticket: https://github.com/confluentinc/schema-registry/issues/3360
We found that when we advanced to the 8.0.1 version of the libraries, we encountered issues with some test classes generated by 3.x protobuf libraries.
Compilation issues:
Compiling 14 source files to /home/runner/work/core/core/target/kpow-enterprise/classes
/home/runner/work/core/core/modules/kpow/src-java-dev/factorhouse/serdes/MyRecordOuterClass.java:129: error: cannot find symbol
makeExtensionsImmutable();
^
symbol: method makeExtensionsImmutable()
location: class MyRecord
Runtime issues:
Bad type on operand stack
Exception Details:
Location:
io/confluent/kafka/schemaregistry/protobuf/ProtobufSchema.toMessage(Lcom/google/protobuf/DescriptorProtos$FileDescriptorProto;Lcom/google/protobuf/DescriptorProtos$DescriptorProto;)Lcom/squareup/wire/schema/internal/parser/MessageElement; : invokestatic
Reason:
Type 'com/google/protobuf/DescriptorProtos$MessageOptions' (current frame, stack[1]) is not assignable to 'com/google/protobuf/GeneratedMessage$ExtendableMessage'
Current Frame:
bci:
flags: { }
locals: { 'com/google/protobuf/DescriptorProtos$FileDescriptorProto', 'com/google/protobuf/DescriptorProtos$DescriptorProto', 'java/lang/String', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'com/google/common/collect/ImmutableList$Builder', 'java/util/LinkedHashMap', 'java/util/LinkedHashMap', 'java/util/List', 'com/google/common/collect/ImmutableList$Builder' }
stack: { 'com/google/common/collect/ImmutableList$Builder', 'com/google/protobuf/DescriptorProtos$MessageOptions' }
Bytecode:
0000000: 2bb6 0334 4db2 0072 1303 352c b903 3703
0000010: 00b8 0159 4eb8 0159 3a04 b801 593a 05b8
0000020: 0159 3a06 bb02 8959 b702 8b3a 07bb 0289
If you encounter these compatibility issues, resolve them by re-generating any generated protobuf classes with protoc v31.1.

Release 94.1: Streams Agent, Consumer Offset Management, and Helm Charts
This major version release from Factor House improves consumer offset management, Kafka Streams telemetry, extra data inspect capabilities and new Helm Charts!
This major version release from Factor House:
- Improves data inspect
- Improves consumer offset management
- Improves Kafka Streams agent integration
- Adds Flex and Community Helm Charts
- Resolves a number of small bugs, and,
- Bumps Kafka client dependencies to v3.9.0.
Brand new Helm Charts + Release simplification
This has been a highly requested feature for a while now: in-depth Helm Charts for all of our products!
Customers can now install Helm charts for our full product suite - Flex, Kpow and the Community Editions of both:
helm repo add factorhouse https://charts.factorhouse.io
helm repo update
helm install my-kpow-ce factorhouse/kpow-ce
To read more about our improvements to Helm + Docker see this blog post: Updates to container specifics (DockerHub and Helm Charts).
We've also streamlined our deliverables, introducing clearer release channels to make accessing our products easier than ever. This groundwork sets the stage for a big year of exciting releases!
Consolidate release artifacts
- Consistently deploy all artifacts (Maven, Clojars, AWS Marketplace, Helm, ArtifactHub and DockerHub) to the factorhouse organisation.
- See blog post: A final goodbye to OperatrIO for more details.
Simplify DockerHub repos
- Consolidate DockerHub repos: we now deploy to the factorhouse/kpow and factorhouse/flex repos respectively.
- Community Editions are still found at the factorhouse/kpow-ce and factorhouse/flex-ce repos.
- See blog post: Updates to container specifics (DockerHub and Helm Charts) for more details.
Communicate Java compatibility and evolution
- Bump our default Java version to JDK17 for Docker and Helm
- Java 11 and 8 JARs still available
- See blog post: Releasing Software at Factor House: Our Java Compatibility and Evolution Strategy for more details.
1.0.0 Kpow Streams Agent!
Our beloved open-source Kpow Streams Agent hits its 1.0.0 release milestone!
Along with core improvements to the agent, we have poured a lot of love into Kpow's Kafka Streams UI and crunched down on backend work required when processing streams metrics.
- Visit the GitHub README to find out more about the changes and to get started
- JavaDocs for using the agent are now available over at javadoc.io
- Kpow's Streams Agent can be found on Maven Central at io.factorhouse/kpow-streams-agent
Data Inspect improvements
This release is packed with quality improvements to our data inspection functionality, making it smoother, more reliable, and better than ever!
Stay tuned! We're bringing plenty more quality improvements to Kpow's data inspection functionality this year!
New modes
The data inspect form now contains additional Modes. New options include:
- Slice (default) - queries records beginning from a start time
- Bounded window - queries records between a start and end time
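In terms of record timestamps, the two modes reduce to simple time-window predicates. The following Python sketch is purely illustrative (the record shape and function names are assumptions, not Kpow internals):

```python
# Illustrative sketch of the two data inspect modes as timestamp predicates.
# Record shape and function names are assumptions, not Kpow internals.

def slice_query(records, start_ms):
    # Slice (default): records from a start time onwards
    return [r for r in records if r["timestamp"] >= start_ms]

def bounded_window_query(records, start_ms, end_ms):
    # Bounded window: records between a start and end time
    return [r for r in records if start_ms <= r["timestamp"] < end_ms]

# Five toy records at t = 0, 1000, 2000, 3000, 4000 ms
records = [{"offset": i, "timestamp": 1_000 * i} for i in range(5)]
print(len(slice_query(records, 2_000)))                  # 3 records at t >= 2000
print(len(bounded_window_query(records, 1_000, 3_000)))  # 2 records in [1000, 3000)
```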
Improved data inspect reliability
There has been an outstanding bug in Kafka relating to long-running consumers that could not recover after certain broker rolling-upgrade scenarios. This bug is captured in KAFKA-13467 and resolved in Kafka clients 3.8.0 and above.
Some customers have reported this exact issue when running on Confluent Cloud. We think Confluent periodically rolls the brokers in each cluster (probably for sound operational reasons) and updates DNS with new broker IPs rather than changing the bootstrap address.
This release now resolves this long-standing data inspect issue! Our data inspect consumer pool should be more resilient to broker upgrades.
We have also added the option to manually restart the consumer pool for any case there may be unexpected consumer death.
Configurable isolation level
Starting with 94.1, customers can now specify the isolation level of the query (defaulting to READ_UNCOMMITTED). When set to READ_COMMITTED, data inspect results will only return records from committed transactions. This is particularly useful for customers debugging issues who want to ignore data in uncommitted transactions.
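This maps onto the standard Kafka consumer isolation.level property and its two allowed values; the toy model below (the config dicts, record shape, and filter function are illustrative assumptions) shows the behavioural difference:

```python
# Toy model of the Kafka consumer isolation levels behind this feature.
# Only `isolation.level` and its two values come from the standard Kafka
# consumer configuration; everything else here is illustrative.

default_inspect_config = {"isolation.level": "read_uncommitted"}  # the default
committed_only_config = {"isolation.level": "read_committed"}     # committed txns only

def visible_records(records, isolation_level):
    # Each toy record carries a 'committed' flag; read_committed filters
    # out records belonging to uncommitted (or aborted) transactions.
    if isolation_level == "read_committed":
        return [r for r in records if r["committed"]]
    return list(records)

records = [{"offset": 0, "committed": True},
           {"offset": 1, "committed": False}]
print(len(visible_records(records, "read_committed")))    # 1
print(len(visible_records(records, "read_uncommitted")))  # 2
```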
Improved Offset Management
Kpow has long supported managing consumer group offsets, but this release gives the feature the attention it deserves:
- Reset offset by providing new offset value
- Reset offset by providing a precise timestamp
- Consistent action menu across different nodes of consumer group topology as well as in table views
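Resetting by timestamp follows the same semantics as the Kafka clients' offsets-for-times lookup: pick the earliest offset whose record timestamp is at or after the target. A toy sketch of that rule (record shape and function name are assumptions for illustration):

```python
# Toy sketch of reset-by-timestamp semantics: the earliest offset whose
# record timestamp is >= the requested timestamp.
# Record shape and function name are assumptions for illustration.

def offset_for_timestamp(records, target_ms):
    for r in sorted(records, key=lambda r: r["offset"]):
        if r["timestamp"] >= target_ms:
            return r["offset"]
    return None  # no record at or after the target time

records = [{"offset": 10, "timestamp": 1_000},
           {"offset": 11, "timestamp": 2_500},
           {"offset": 12, "timestamp": 4_000}]
print(offset_for_timestamp(records, 2_000))  # 11
print(offset_for_timestamp(records, 9_000))  # None
```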


Releasing Software at Factor House: Our Java Compatibility and Evolution Strategy
At Factor House, delivering reliable software is at the heart of everything we do. A key aspect of this commitment lies in our approach to managing Java compatibility. This blog post outlines our current release process and future plans for evolving Java support, including our approach to deprecating older versions in a way that respects the needs of diverse customer bases.
Releasing Software at Factor House: Our Java Compatibility and Evolution Strategy
At Factor House, delivering reliable software is at the heart of everything we do. A key aspect of this commitment lies in our approach to managing Java compatibility.
Our suite of products works seamlessly across a range of JVM versions—from Java 8 to Java 17 and beyond. We balance supporting large enterprises still demanding Java 8 releases while staying ahead with JVM advancements, such as catering to customers requiring Graviton builds for deployments targeting ARM.
This blog post outlines our current release process and future plans for evolving Java support, including our approach to deprecating older versions in a way that respects the needs of diverse customer bases.
Our Current Java Release Strategy
We release our software in two primary formats: as JAR files and through Docker containers. Here’s an overview of our compatibility and deployment practices:
JAR Releases
- Java 8: Supported for customers who rely on legacy environments.
- Java 11: A modern, stable release offering long-term support (LTS).
- Java 17: Our recommended LTS version for customers, ensuring compatibility with newer environments and features.
Docker Releases
- We use Amazon Corretto 17 as the base image for our Dockerfiles.
- Our Docker images include the Java 17 JAR by default.
Compatibility Matrix
Java JARs
| JAR Version | Supported Java Versions | Notes |
|---|---|---|
| Java 8 | Java 8 | Legacy support, phased out over time |
| Java 11 | Java 11, Java 17 | Suitable for many modern deployments |
| Java 17 | Java 17+ | Recommended for most customers |
Docker Releases
| Deployment Type | Base Image | JAR Version | Notes |
|---|---|---|---|
| Docker | Amazon Corretto 17 | Java 17 | Future-proof, stable LTS |
| Helm Charts | Amazon Corretto 17 | Java 17 | Future-proof, stable LTS |
Our Commitment to Compatibility
- Backward Compatibility: We understand that some customers operate in environments requiring older Java versions. That’s why we’ve maintained Java 8 and Java 11 compatibility in addition to Java 17.
- Future-Ready: We’re committed to adopting more recent LTS versions of Java as they become available, ensuring our software leverages the latest performance and security improvements.
- Transition Planning: While we currently support Java 8, we recognize its end-of-life status in many contexts. As part of our roadmap, we plan to phase out Java 8 support gradually, allowing customers ample time to transition to newer versions.
Moving Closer to Java's LTS Release Cycle
Moving forward, Factor House will align more closely with Java's LTS release cycle. For instance, LTS GA support for Java 25 commences in September 2025, and we plan to:
- Transition our JAR compilation targets and base Docker images to the new LTS versions as they are released.
- Phase out older versions in alignment with Java's premier support timelines.
This strategy will enable us to provide customers with timely access to the latest Java features and optimizations while maintaining a predictable and transparent deprecation schedule.
Looking Ahead
Docker Image Evolution
Over time, our base Docker image will evolve to reflect newer LTS Java versions. For instance, as future LTS releases like Java 21, Java 25, or beyond become widely adopted, we’ll:
- Transition to the new LTS version as our default base image.
- Update our JAR compilation targets to align with the latest versions.
Phasing Out Java 8
- We will work closely with customers still using Java 8 to support their migration efforts.
- A detailed timeline for deprecating Java 8 will be communicated well in advance to ensure a smooth transition.
Why This Matters
Adopting this strategy ensures that:
- Our software is secure, leveraging the latest Java features and updates.
- Customers have flexibility, whether they’re operating legacy systems or embracing modern environments.
- Factor House remains future-focused, delivering cutting-edge solutions without compromising reliability.
Conclusion
At Factor House, we’re committed to balancing innovation with stability. By supporting multiple Java versions and planning for future transitions, we ensure that our customers can deploy our software confidently, no matter their infrastructure. Stay tuned for updates as we continue to evolve our Java release strategy and roadmap.
Have questions or need guidance on transitioning to a newer Java version? Reach out to our support team—we’re here to help!

Updates to container specifics (DockerHub and Helm Charts)
Discover how our 94.1 release has streamlined DockerHub, Helm Charts, and AWS Marketplace deployments!
Updates to Container Specifics (DockerHub and Helm Charts)
As part of our 94.1 release we have seen some big, sweeping changes streamlining our deployment pipeline. Key improvements in this space include cleaning up of artifact names and solidifying our strategy around Java version compatibility and evolution. Along with these major improvements we have also poured some love into our base Dockerfiles for Kpow and Flex.
These changes ensure that our products are more ergonomic for our end-users to consume. Read on to learn more about our container improvements!
New AWS Marketplace ECR co-ordinates
For users purchasing Kpow for Apache Kafka on the AWS Marketplace, the location of our Hourly and Annual products has been updated:
For Kpow for Apache Kafka (Hourly) the new coordinates are:
- Docker container - 709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-hourly
For Kpow for Apache Kafka (Annual) the new coordinates are:
- Docker container - 709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-annual
- Helm chart - 709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-annual-chart
New DockerHub co-ordinates
We are excited to announce a significant improvement in our DockerHub management strategy, aimed at enhancing clarity and streamlining your experience with Factor House products.
Introducing Our New Docker Image Structure
Previously, we offered separate Docker images for different Kpow editions:
- factorhouse/kpow-ee (Enterprise edition)
- factorhouse/kpow-se (Standard edition)
We have now collapsed both kpow-se and kpow-ee into a single DockerHub image found at factorhouse/kpow.
By consolidating our Docker images, we aim to eliminate confusion and streamline the deployment process for all customers. A single image will cater to every license type, ensuring clarity and ease of use in your experience with Factor House products.
The community edition still remains available at factorhouse/kpow-ce and factorhouse/flex-ce.
New Helm Charts co-ordinates and Charts!
Welcome to Our Expanded Ecosystem
We have transitioned from the previous Charts repository kpow/kpow to a new, centralized location under the Factor House banner. This change aligns with our rebranding and underscores our status as a multi-product company dedicated to providing best-in-class tools and services.
Streamlined Helm Chart Management
To access our updated Helm Charts, please update and use the following repository:
helm repo add factorhouse https://charts.factorhouse.io
helm repo update
Key Highlights of Our New Offering
Flex Helm Charts
- We are excited to introduce Helm charts for Flex, our Apache Flink product!
- Installation usage:
helm install --namespace factorhouse --create-namespace flex factorhouse/flex \
--set env.LICENSE_ID="00000000-0000-0000-0000-000000000001" \
--set env.LICENSE_CODE="FLEX_CREDIT" \
--set env.LICENSEE="Factor House\, Inc." \
--set env.LICENSE_EXPIRY="2022-01-01" \
--set env.LICENSE_SIGNATURE="638......A51" \
--set env.FLINK_REST_URL="http://flink-dev.svc"
More detailed installation instructions can be found at our GitHub repository.
Community Edition (CE) Helm Charts
We are excited to introduce community Helm charts for our products! This has been a much requested feature from our growing community userbase:
Install Kpow Community Edition with ease:
helm install my-kpow-ce factorhouse/kpow-ce
Experience the power of Flex Community, now available as a Helm chart:
helm install my-flex-ce factorhouse/flex-ce
Enhanced Availability
- All Helm Charts are open source and hosted on GitHub at factorhouse/helm-charts.
- They are also listed on ArtifactHub, ensuring discoverability and ease of use.
- For AWS Marketplace users, our Amazon-specific charts can now be found under the Factor House organization.
Amazon Corretto 17 as the default base image
Starting with 94.1, the base image for our products is Amazon Corretto 17. For almost all customers this will be a completely transparent change.
As part of this clean up, we have dropped a few tags:
- alpine tag - we dropped the alpine tag so that we could focus solely on supporting amazonlinux as our base. We aim to support a single, stable long-term support distro as the base for our Dockerfiles. Customers can still create their own custom Dockerfile targeting alpine with our products.
- java17 tag - now the default base image, previously Java 11.
Changes to DockerFile specifics
New entrypoint location
We have changed the entrypoint from /opt/operatr to /opt/factorhouse.
Note: for some customers this might be a breaking change if you use our provided Dockerfile as a base image and reference our old ENTRYPOINT anywhere.
New default JAVA_OPTS
We have added extra flags to our JAVA_OPTS:
--add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED
--add-opens=java.xml/com.sun.org.apache.xerces.internal.jaxp=ALL-UNNAMED
--add-opens=java.xml/com.sun.org.apache.xerces.internal.util=ALL-UNNAMED
These are required to run our products with Java 17+. One of our external dependencies requires internal XML processing classes (from Xerces), and JDK 17+ enforces stricter module boundaries, blocking reflective access to internal APIs by default.
Note: if you set custom JAVA_OPTS when using our Dockerfiles, you will need to update your opts to include these additional flags.
A final goodbye to OperatrIO
2025 is a pivotal moment at Factor House (formerly Operatr.IO). We've announced our fundraise and have much more to announce about our roadmap this year. This is why we think that now is the perfect time to do a bit of spring cleaning and retire the io.operatr artifacts for good.
A final goodbye to io.operatr
2025 is a pivotal moment at Factor House (formerly Operatr.IO). We've announced our fundraise and have much more to announce about our roadmap this year. This is why we think that now is the perfect time to do a bit of spring cleaning and retire the io.operatr artifacts for good.
One of the hallmarks of Factor House has always been our unwavering commitment to backwards compatibility, ensuring that our customers can seamlessly transition between versions without disruptions to their configurations, deployments, or workflows. While this has been a source of pride for us, sometimes we take this mantra to the extreme. While our product has been named Kpow and our company Factor House for some years now, we were still publishing our Docker images to the old operatr/kpow DockerHub repository.
This blog post outlines our plan to retire the io.operatr artifacts and provides repository details on where to find your new Factor House goodies!
Documenting the Changes: A Transparent Approach
To make this transition smooth for everyone involved, we want to be as transparent as possible about the changes we’re making. Here’s a detailed breakdown of the updates across our key repositories:
DockerHub
As mentioned earlier, any new updates to Kpow will only be published to the factorhouse/kpow repo:
| Product | Previous image location | New image location | Notes |
|---|---|---|---|
| Kpow | operatr/operatr | factorhouse/kpow | Has always mirrored factorhouse/kpow. Starting with 94.1 we will stop mirroring to operatr/operatr. |
| Kpow | operatr/kpow | factorhouse/kpow | Has always mirrored factorhouse/kpow. Starting with 94.1 we will stop mirroring to operatr/kpow. |
To read more about our container changes please see this blog post.
Helm Charts
Our Helm Charts are now multi-product! New releases will be pushed to the https://charts.factorhouse.io repository or the factorhouse ArtifactHub repo.
| Product | Previous chart location | New chart location | Notes |
|---|---|---|---|
| Kpow | kpow/kpow | factorhouse/kpow | Visit our helm-charts repo for more details. |
To read more about our container changes please see this blog post.
Maven
We are updating all Maven projects to reflect the Factor House name and branding. This includes updating POM files and repository URLs to ensure compatibility with our latest releases.
That means all Factor House open source will be deployed to the io.factorhouse Maven central namespace.
| Library | Previous deployment | New deployment | Notes |
|---|---|---|---|
| kpow-streams-agent | io.operatr/kpow-streams-agent | io.factorhouse/kpow-streams-agent | As part of 94.1, we have moved the streams agent code to io.factorhouse. We have also pushed significant improvements to the library! |
Clojars
Our Clojure libraries will be deprecated under the io.operatr namespace and replaced with new packages under the updated namespace:
| Library | Previous deployment | New deployment | Notes |
|---|---|---|---|
| shroud | io.operatr/kpow-secure | io.factorhouse/shroud | Previously named kpow-secure. New name reflects its general cross-product utility. |
Looking Ahead: A Bright Future for Factor House
This is not just about retiring old artifacts — it’s about celebrating a new chapter in our journey. As we grow and evolve, we’re committed to maintaining the level of excellence that has made us a trusted partner for businesses around the world.
The decision to retire io.operatr isn’t a goodbye to our past but rather a hello to a future filled with endless possibilities. We’re excited to continue building innovative solutions under the Factor House banner, delivering the same reliability and forward-thinking approach our customers have come to expect.
As we move forward, we’ll be sharing more updates about our roadmap and new offerings. Stay tuned for an even brighter year ahead!

Our Commitment to Engineers
With our funding announcement and the upcoming launch of the Factor Platform, we know some of our existing customers might be wondering: What does this mean for Kpow and Flex? Will we be forced to upgrade? Will prices spike? Keep one thing in mind - at Factor House we're here for engineers.
Our Commitment to Engineers: No Forced Upgrades, No Breaking Changes
With our latest funding announcement and the upcoming launch of the Factor Platform, we know some of our existing customers might be wondering: What does this mean for Kpow and Flex? Will we be forced to upgrade? Will prices suddenly spike?
Let’s clear that up now: Kpow and Flex are here to stay. No forced upgrades to the platform, no breaking changes, no artificial roadblocks. Engineers trust us because we build tools that work for them, not against them - and that will never change. Period.

The Factor House Philosophy: Build, Don’t Break
Some companies in our space have taken a different approach: adding new products and forcing customers to migrate to them, or making sudden, dramatic price changes. That’s not how we operate. We believe software engineers deserve better.
If you’re using Kpow or Flex today, you’ll continue to have full access, support, and ongoing updates into the foreseeable future. We don’t make breaking changes. We don’t sunset products just because a new one exists. We’ll always act in the best interests of engineers while continuing to build enterprise solutions that support your evolving needs.
Why Build a Platform, Then?
If Kpow and Flex will still be supported, you might be wondering: Why introduce a platform at all? The answer is simple: while our individual tools solve specific challenges, Factor Platform is designed to solve the bigger picture.
Factor Platform isn’t a replacement - it’s a step up for those who need it. Here’s what it will offer:
- Centralized Management: Kpow, Flex, and all future tools in one place - streamlined and enterprise-scale. A single Web UI and API for data in motion at your organization.
- Control and Automation: Factor Platform is completely dynamically configurable via the UI and API, no more restarts when your RBAC configuration changes.
- Insights and Empowerment: Engineers exist in the space between Kafka, Flink, and other systems. That's where the data lives. That's where Factor Platform thrives.
What’s Next?
We love our customers. And if you're operating in the real-time space, good news: we love your customers too.
We think Kpow and Flex are the right tools for most engineering teams today - we're going to sell a lot more licenses.
If you’re happily using Kpow or Flex already, you can keep using them as always. If you’re looking for a way to scale, simplify, and centralize your real-time data tools, Factor Platform will be there when you need it.
This isn’t about locking anyone in - it’s about giving engineers more options, not fewer. That’s a philosophy we’ll always stand by.
Join the Conversation
We know engineers value transparency, and we want to keep the conversation open. If you have thoughts, questions, or feedback, we’d love to hear from you. Your insights shape the tools we build, and we’re committed to making sure they continue to serve your needs.
Factor Platform is on the horizon, and we’re excited to share more soon. If you’d like an early look, reach out - we’d love to show you what’s coming.
- Tell us what you need in a unified platform for streaming data and we'll let you know when Factor Platform is ready for early access.
- Read more about our $5M seed round and where we go from here.
From Bootstrap to Blackbird: The Future of Factor House
We are thrilled to announce that Factor House has closed a $5M seed round to accelerate the commercial release of our new product, the Factor Platform. Led by Blackbird Ventures, with OIF Ventures, Flying Fox Ventures, and LaunchVic’s Alice Anderson Fund as partners, this round brings our five-year bootstrapping journey to a happy conclusion and points to a bright future ahead!
Announcing our $5M Seed Round, led by Blackbird.
We are thrilled to announce that Factor House has closed a $5M seed round to accelerate the commercial release of our new product, the Factor Platform.
Led by Blackbird Ventures, with OIF Ventures, Flying Fox Ventures, LaunchVic’s Alice Anderson Fund, and Steve and Michelle Holmes as investment partners, this round brings our five-year journey as a bootstrapped startup to a happy conclusion and points to a bright future ahead!
Pull yourself up by your bootstraps
We founded Factor House with a simple yet ambitious goal: to empower engineers with the tools they need to build real-time systems with confidence.
Many startups chase funding to find product-market fit; we took a different path - bootstrapping, building, listening to engineers, iterating, and delivering products that have our users at heart. That approach allowed us to grow organically and cement our place as a trusted name in real-time data management.
We have been fortunate through five years of bootstrapping to meet fellow travellers who could share support, advice, or a shoulder to cry on. It was Ben Slater, Instaclustr’s then Chief Product Officer, who told us how hard it is to find your first customers, and then pointed us in the right direction. Just as importantly, the engineering teams at Block, Airwallex, and Pepperstone pushed us to refine early versions of Kpow, ensuring it met the needs of world-class teams operating at scale.
So why change tack and close a funding round? What we have learned from our users shows that the opportunity in front of us is huge, and we're determined to build a balanced business that can not only ship great products but also speak clearly and authoritatively about the future of real-time engineering.
Real-time data is business critical
From FinTech and eCommerce to logistics and cybersecurity, industries everywhere are waking up to the reality that real-time data isn’t a luxury - it’s a necessity. Customers expect instant transactions, predictive analytics, and seamless digital experiences. Businesses that fail to embrace real-time processing will inevitably fall behind those that move faster and make smarter decisions.
Factor House has been at the forefront of this shift, providing engineers with intuitive tools that make real-time data management effortless. Kpow, our flagship toolkit for Apache Kafka, has become an essential part of the stack for enterprises managing complex data flows. But as demand grows, so does the need to innovate.
What's Next for Factor House?
With this investment, we’re focused on three key areas:
Expanding Product Capabilities
We will invest in Kpow and Flex, our flagship products. Engineers always need more: deeper insights, intelligent automation, and ever more fine-grained control of underlying systems. We’re investing in our existing products to continue to bring clarity and confidence to engineers working with real-time data.
Growing our Global Reach
Our products are used by engineers in over a hundred countries. Growing our team and expanding our ability to communicate as well as ship products will ensure more companies can access enterprise-ready solutions that scale with their needs.
Building the Future with Factor Platform
Factor Platform combines features of each of our tools with extended functionality to provide clarity, control, and governance of real-time data at enterprise scale.

Our composable system architecture provides centralized management of every Kafka and Flink cluster in an organization from a single Web UI, and allows service integration via a secure OpenAPI 3.1 REST API.
We can't wait to open early access to existing customers and start iterating on their feedback.
A Future Without FUD, Where Engineers Lead the Way
Factor House has always been, and will always be, built for engineers first. For our existing customers, nothing changes (read more about our commitment to engineers).
While many enterprise software companies focus on selling to executives or satisfying aggressive growth targets at the expense of their customers, we stay true to the practitioners - those working directly with data daily - ensuring they have the best tools available.
As real-time data becomes the foundation of modern business, the need for intuitive, scalable, high-performance tooling will only grow. Factor House isn’t just responding to that trend; we’re helping shape the future of how real-time data is managed, understood, and leveraged.
The journey from bootstrap to industry leader has been a remarkable one, and lots of fun, but the most exciting chapters are still ahead.
Stay tuned; the best is yet to come!
Events & Webinars
Stay plugged in with the Factor House team and our community.

[MELBOURNE, AUS] Apache Kafka and Apache Flink Meetup, 27 November
Melbourne, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.

[SYDNEY, AUS] Apache Kafka and Apache Flink Meetup, 26 November
Sydney, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
Join the Factor Community
We’re building more than products - we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.