The new unified Factor House Community License works with both Kpow Community Edition and Flex Community Edition, so you only need one license to unlock both products. This makes it even simpler to explore modern data streaming tools, create proof-of-concepts, and evaluate our products.
What's changing
Previously, we issued separate community licenses for Kpow and Flex, with different tiers for individuals and organisations. Now there's a single Community License that unlocks both products.
What's new:
One license for both products
Three environments for everyone - whether you're an individual developer or part of a team, you get three non-production installations per product
Simplified management - access and renew your licenses through our new self-service portal at account.factorhouse.io
Our commitment to the engineering community
Since first launching Kpow CE at Current '22, thousands of engineers have used our community licenses to learn Kafka and Flink without jumping through enterprise procurement hoops. This unified license keeps that same philosophy: high-quality tools that are free for non-production use.
The Factor House Community License is free for individuals and organizations to use in non-production environments. Here's how to get started:
New users: Head to account.factorhouse.io to grab your free Community license. You'll receive instant access via magic link authentication.
Existing users: Your legacy Kpow and Flex Community licenses will continue to work and are now visible in the portal. When your license renews (after 12 months), consider switching to the unified model for easier management.
What's included
Both Kpow CE and Flex CE include most enterprise features, optimized for learning and testing: Kafka and Flink monitoring and management, fast multi-topic search, and Schema Registry and Kafka Connect support.
License duration: 12 months, renewable annually
Installations: Up to 3 per product (Kpow CE: 1 Kafka cluster + 1 Schema Registry + 1 Connect cluster per installation; Flex CE: 1 Flink cluster per installation)
Support: Self-service via Factor House Community Slack, documentation, and release notes
Deployment: Docker, Docker Compose or Kubernetes
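As a quick illustration, running Kpow CE with Docker looks roughly like the following. The image tag and environment variable names follow the Kpow installation docs, but treat them as assumptions and copy the exact values issued with your license in the portal.

# Minimal Kpow CE container pointed at a single Kafka cluster (illustrative values)
docker run -p 3000:3000 \
  -e BOOTSTRAP="your-broker:9092" \
  -e LICENSE_ID="<from account.factorhouse.io>" \
  -e LICENSE_CODE="<from account.factorhouse.io>" \
  -e LICENSEE="<from account.factorhouse.io>" \
  -e LICENSE_EXPIRY="<from account.factorhouse.io>" \
  -e LICENSE_SIGNATURE="<from account.factorhouse.io>" \
  factorhouse/kpow-ce:latest

The UI is then available at http://localhost:3000.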
Ready for production? Start a 30-day free trial of our Enterprise editions directly from the portal to unlock RBAC, Kafka Streams monitoring, custom SerDes, and dedicated support.
What about legacy licenses?
If you're currently using a Kpow Individual, Kpow Organization, or Flex Community license, nothing changes immediately. Your existing licenses will continue to work with their respective products and are now accessible in the portal. When your license expires at the end of its 12-month term, you can easily switch to the new unified license for simpler management.
Release 95.1: A unified experience across product, web, docs and licensing
95.1 delivers a cohesive experience across Factor House products, licensing, and brand. This release introduces our new license portal, refreshed company-wide branding, a unified Community License for Kpow and Flex, and a series of performance, accessibility, and schema-related improvements.
Upgrading to 95.1
If you are using Kpow with a Google Managed Service for Apache Kafka (Google MSAK) cluster, you will now need to use either kpow-java17-gcp-standalone.jar or the 95.1-temurin-ubi tag of the factorhouse/kpow Docker image.
New Factor House brand: unified look across web, product, and docs
We've refreshed the Factor House brand across our website, documentation, the new license portal, and products to reflect where we are today: a company trusted by engineers running some of the world's most demanding data pipelines. Following our seed funding earlier this year, we've been scaling the team and product offerings to match the quality and value we deliver to enterprise engineers. The new brand brings our external presence in line with what we've built. You'll see updated logos in Kpow and Flex, refreshed styling across docs and the license portal, and a completely redesigned website with clearer navigation and information architecture. Your workflows stay exactly the same, and the result is better consistency across all touchpoints, making it easier for new users to evaluate our tools and for existing users to find what they need.
New license portal: self-service access for all users
We've rolled out our new license portal at account.factorhouse.io to streamline license management for everyone. New users can instantly grab a Community or Trial license with just their email address, and existing users will see their migrated licenses when they log in. The portal lets you manage multiple licenses from one account, all through a clean, modern interface with magic link authentication, whether you're upgrading from Community to a Trial, renewing your annual Community License, or requesting a trial extension. For installation and configuration guidance, check our Kpow and Flex docs.
We've consolidated our Community licensing into a single unified license that works with both Kpow Community Edition and Flex Community Edition. Your Community license allows you to run Kpow and Flex in up to three non-production environments each, making it easier to learn, test, and build with Kafka and Flink. The new license streamlines management, providing a single key for both products and annual renewal via the license portal. It's perfect for exploring projects like Factor House Local or building your own data pipelines. Existing legacy licenses will continue to work and will also be accessible in the license portal.
This release brings a number of performance improvements to Kpow, Flex, and Factor Platform. The work to compute and materialize views and insights about your Kafka and Flink resources has been reduced by an order of magnitude, and for our top-end customers we have observed a 70% performance increase in Kpow's materialization.
Data Inspect enhancements
Confluent Data Rules support: Data inspect now supports Confluent Schema Registry Data Rules, including CEL, CEL_FIELD, and JSONata rule types. If you're using Data Contracts in Confluent Cloud, Data Inspect now accurately identifies rule failures and lets you filter them with kJQ.
Support for Avro Primitive Types: We’ve added support for Avro schemas that consist of a plain primitive type, including string, number, and boolean. Read the documentation.
Schema Registry & navigation improvements
General Schema Registry improvements (from 94.6): In 94.6, we introduced improvements to Schema Registry performance and updated the observation engine. This release continues that work, with additional refinements based on real-world usage.
Karapace compatibility fix: We identified and fixed a regression in the new observation engine that affected Karapace users.
Redpanda Schema Registry note: The new observation engine is not compatible with Redpanda’s Schema Registry. Customers using Redpanda should set `OBSERVATION_VERSION=1` until full support is available. Read the documentation.
Navigation improvements: Filters on the Schema Overview pages now persist when navigating into a subject and back.
Chart accessibility & UX improvements
This release brings a meaningful accessibility improvement to Kpow & Flex: Keyboard navigation for line charts. Users can now focus a line chart and use the left and right arrow keys to view data point tooltips. We plan to expand accessibility for charts to include bar charts and tree maps in the near future, bringing us closer to full WCAG 2.1 Level AA compliance as reported in our Voluntary Product Accessibility Template (VPAT).
We’ve also improved the UX of comparing adjacent line charts: Each series is now consistently coloured across different line charts on a page, making it easier to identify trends across a series, e.g., a particular topic’s producer write/s vs. consumer read/s.
These changes benefit everyone: developers using assistive technology, teams with accessibility requirements, and anyone who prefers keyboard navigation. Accessibility isn't an afterthought, it's a baseline expectation for enterprise-grade tooling, and we're committed to leading by example in the Kafka and Flink ecosystem.
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
Prerequisites
VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
Kpow Subscription:
A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
Kpow Annual product:
Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
Kpow Hourly product:
For the hourly product, access to the ECR image will be provided and deployment utilizes the public Factor House Helm repository for installation.
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-*>, and <PUBLIC-SUBNET-ID-*> with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in the us-east-1 region. If you intend to use a different region, you must update the metadata.region field and ensure the availability zone keys under vpc.subnets (e.g., us-east-1a, us-east-1b) match the availability zones of the subnets in your chosen region.
The cluster.eksctl.yaml file defines the following:
Cluster Metadata: A cluster named fh-eks-cluster in the us-east-1 region.
VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
Service Accounts:
kpow-annual: Creates a service account for the Kpow Annual product. It attaches the AWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.
kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches the AWSMarketplaceMeteringRegisterUsage policy, which is required for reporting usage metrics to the AWS Marketplace.
Node Group: Defines a managed node group named ng-dev with t3.medium instances. The worker nodes will be placed in the private subnets (privateNetworking: true).
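For reference, here is a sketch of what that configuration looks like, reconstructed from the description above. The placeholder IDs and the desired node count are illustrative; the file in the repository is the source of truth.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fh-eks-cluster
  region: us-east-1

vpc:
  id: "<VPC-ID>"
  subnets:
    private:
      us-east-1a: { id: "<PRIVATE-SUBNET-ID-1>" }
      us-east-1b: { id: "<PRIVATE-SUBNET-ID-2>" }
    public:
      us-east-1a: { id: "<PUBLIC-SUBNET-ID-1>" }
      us-east-1b: { id: "<PUBLIC-SUBNET-ID-2>" }

iam:
  withOIDC: true
  serviceAccounts:
    # Kpow Annual: validates its license via AWS License Manager
    - metadata:
        name: kpow-annual
        namespace: factorhouse
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy
    # Kpow Hourly: reports usage to AWS Marketplace metering
    - metadata:
        name: kpow-hourly
        namespace: factorhouse
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage

managedNodeGroups:
  - name: ng-dev
    instanceType: t3.medium
    desiredCapacity: 2   # illustrative; size to suit your workload
    privateNetworking: true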
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yaml
Once the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...
Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
Now, apply the manifest to install the Strimzi operator in your EKS cluster.
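Assuming the manifest path used above, the command looks like this:

kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafka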
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
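The resource is a single-node, ephemeral Strimzi Kafka definition along the lines of the sketch below; the cluster name and listener settings here are illustrative, and the file in the repository is authoritative.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-cluster   # illustrative name
  namespace: kafka
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}

Apply it to the kafka namespace:

kubectl apply -f manifests/kafka/kafka-cluster.yaml -n kafka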
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o name
The output will list the pods for Strimzi, Kafka, and ZooKeeper, along with the associated services. The most important service for connecting applications is the Kafka bootstrap service.
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.
kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
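Applying them uses the paths given above:

kubectl apply -f manifests/kpow/config-files.yaml -n factorhouse
kubectl apply -f manifests/kpow/config.yaml -n factorhouse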
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...
Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
Next, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..
Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
Note: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
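The install command looks something like the following. The values file name is an assumption (check the repository's values directory), and the release name matches the labels used in the verification step below.

helm install kpow-annual ./awsmp-chart/kpow-aws-annual \
  --namespace factorhouse \
  --values values/eks-annual.yaml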
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...
To access the UI, forward the service port to your local machine.
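Using the service name from the output above:

kubectl port-forward -n factorhouse service/kpow-annual-kpow-aws-annual 3000:3000

Then open http://localhost:3000 in your browser.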
Deploy Kpow Hourly
For the hourly product, the container image is pulled from ECR and the chart is installed from the public Factor House Helm repository. The Helm values are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
# ... (volume configuration is the same as annual)
volumes:
# ...
resources:
# ...
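Installation from the Helm repository then looks roughly like this. The repository URL is an assumption and the chart name is inferred from the verification output below; confirm both against the deployment instructions provided with your AWS Marketplace subscription.

# Add the public Factor House Helm repository (URL assumed)
helm repo add factorhouse https://charts.factorhouse.io
helm repo update

# Install Kpow Hourly with the values file above (which should reference the kpow-hourly service account)
helm install kpow-hourly factorhouse/kpow-aws-hourly \
  --namespace factorhouse \
  --values values/eks-hourly.yaml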
Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...
To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
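Using the service name from the output above, mapped to local port 3001:

kubectl port-forward -n factorhouse service/kpow-hourly-kpow-aws-hourly 3001:3000

Then open http://localhost:3001 in your browser.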
In this guide, we have deployed a complete environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned an EKS cluster with correctly configured IAM roles for service accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating the power of Kubernetes operators in simplifying complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.
Our Commitment to Engineers: No Forced Upgrades, No Breaking Changes
With our latest funding announcement and the upcoming launch of the Factor Platform, we know some of our existing customers might be wondering: What does this mean for Kpow and Flex? Will we be forced to upgrade? Will prices suddenly spike?
Let’s clear that up now: Kpow and Flex are here to stay. No forced upgrades to platform, no breaking changes, no artificial roadblocks. Engineers trust us because we build tools that work for them, not against them—and that will never change. Period.
The Factor House Philosophy: Build, Don’t Break
Some companies in our space have taken a different approach: adding new products and forcing customers to move to them, or making sudden, dramatic price changes. That's not how we operate. We believe software engineers deserve better.
If you’re using Kpow or Flex today, you’ll continue to have full access, support, and ongoing updates into the foreseeable future. We don’t make breaking changes. We don’t sunset products just because a new one exists. We’ll always act in the best interests of engineers while continuing to build enterprise solutions that support your evolving needs.
Why Build a Platform, Then?
If Kpow and Flex will still be supported, you might be wondering: Why introduce a platform at all? The answer is simple: while our individual tools solve specific challenges, Factor Platform is designed to solve the bigger picture.
Factor Platform isn’t a replacement - it’s a step up for those who need it. Here’s what it will offer:
Centralized Management: Kpow, Flex, and all future tools in one place - streamlined and enterprise-scale. A single Web UI and API for data in motion at your organization.
Control and Automation: Factor Platform is completely dynamically configurable via the UI and API - no more restarts when your RBAC configuration changes.
Insights and Empowerment: Engineers exist in the space between Kafka, Flink, and other systems. That's where the data lives. That's where Factor Platform thrives.
What’s Next?
We love our customers. Great news if you're operating in the real-time space: we love your customers too.
We think Kpow and Flex are the right tools for most engineering teams today - we're going to sell a lot more licenses.
If you’re happily using Kpow or Flex already, you can keep using them as always. If you’re looking for a way to scale, simplify, and centralize your real-time data tools, Factor Platform will be there when you need it.
This isn’t about locking anyone in - it’s about giving engineers more options, not fewer. That’s a philosophy we’ll always stand by.
Join the Conversation
We know engineers value transparency, and we want to keep the conversation open. If you have thoughts, questions, or feedback, we’d love to hear from you. Your insights shape the tools we build, and we’re committed to making sure they continue to serve your needs.
Factor Platform is on the horizon, and we’re excited to share more soon. If you’d like an early look, reach out - we’d love to show you what’s coming.
From Bootstrap to Blackbird: The Future of Factor House
We are thrilled to announce that Factor House has closed a $5M seed round to accelerate the commercial release of our new product, the Factor Platform. Led by Blackbird Ventures, with OIF Ventures, Flying Fox Ventures, LaunchVic's Alice Anderson Fund, and Steve and Michelle Holmes as investment partners, this round brings our five-year journey as a bootstrapped startup to a happy conclusion and points to a bright future ahead!
Pull yourself up by your bootstraps
We founded Factor House with a simple yet ambitious goal: to empower engineers with the tools they need to build real-time systems with confidence.
Many startups chase funding to find product-market fit; we took a different path - bootstrapping, building, listening to engineers, iterating, and delivering products that have our users at heart. That approach allowed us to grow organically and cement our place as a trusted name in real-time data management.
We have been fortunate through five years of bootstrapping to meet fellow travellers who could share support, advice, or a shoulder to cry on. It was Ben Slater, Instaclustr's then Chief Product Officer, who told us how hard it is to find your first customers, and then pointed us in the right direction. Just as importantly, the engineering teams at Block, Airwallex, and Pepperstone pushed us to refine early versions of Kpow, ensuring it met the needs of world-class teams operating at scale.
So why change tactic and close a funding round? Learnings from our users show that the opportunity in front of us is huge, and we're determined to build a balanced business that can not only ship great products, but speak clearly and authoritatively about the future of real-time engineering.
Real-time data is business critical
From FinTech and eCommerce to logistics and cybersecurity, industries everywhere are waking up to the reality that real-time data isn’t a luxury - it’s a necessity. Customers expect instant transactions, predictive analytics, and seamless digital experiences. Businesses that fail to embrace real-time processing will inevitably fall behind those that move faster and make smarter decisions.
Factor House has been at the forefront of this shift, providing engineers with intuitive tools that make real-time data management effortless. Kpow, our flagship toolkit for Apache Kafka, has become an essential part of the stack for enterprises managing complex data flows. But as demand grows, so does the need to innovate.
What's Next for Factor House?
With this investment, we’re focused on three key areas:
Expanding Product Capabilities
We will invest in Kpow and Flex, our flagship products. Engineers always need more: deeper insights, intelligent automation, and ever more fine-grained control of underlying systems. We're investing in our existing products to continue to bring clarity and confidence to engineers working with real-time data.
Growing our Global Reach
Our products are used by engineers in over a hundred countries. Growing our team and expanding our ability to communicate as well as ship products will ensure more companies can access enterprise-ready solutions that scale with their needs.
Building the Future with Factor Platform
Factor Platform combines features of each of our tools with extended functionality to provide clarity, control, and governance of real-time data at enterprise scale.
Our composable system architecture provides centralized management of every Kafka and Flink cluster in an organization from a single Web UI, and allows service integration via a secure OpenAPI 3.1 REST API.
We can't wait to open early access to existing customers and start iterating on their feedback.
A Future Without FUD, Where Engineers Lead the Way
While many enterprise software companies focus on selling to executives or satisfying aggressive growth targets at the expense of their customers, we stay true to the practitioners - those working directly with data daily - ensuring they have the best tools available.
As real-time data becomes the foundation of modern business, the need for intuitive, scalable, high-performance tooling will only grow. Factor House isn’t just responding to that trend; we’re helping shape the future of how real-time data is managed, understood, and leveraged.
The journey from bootstrap to industry leader has been a remarkable one, and lots of fun, but the most exciting chapters are still ahead.
Factor House Product VPAT
We first published a VPAT in the release notes of Kpow for Apache Kafka v92.4, with a VPAT available to download in every release of Kpow since. Today, we are pleased to announce that we are extending that commitment to all future Factor House product releases - including Flex for Apache Flink and the Factor Platform.
For organisations evaluating products built by Factor House, a VPAT provides crucial information on whether a product can be used by people with accessibility needs and what we need to change so that it is.
Publishing a VPAT for each of our products demonstrates our commitment to ensuring that our tools can be effectively used by all developers. It also solidifies our position as a leader in developer tools, offering our customers the confidence that Kpow, Flex, and Factor Platform meet the highest standards of accessibility.
Our commitment to web accessibility has driven some concrete improvements to both product and process at Factor House:
Screen Reader Compatibility
Factor House products are fully compatible with screen readers, allowing users with visual impairments to navigate and interact with our software effectively.
Keyboard Shortcuts
We implement ARIA Authoring Practices Guide (APG) patterns to deliver accessible elements. All Factor House UI widgets (such as buttons, menu buttons, and dropdowns) implement the ARIA spec for keyboard interactions, roles, states, and properties.
Our commitment to accessibility was validated through an external audit conducted by AccessibilityOZ, a leading expert in accessibility assessments. Their thorough evaluation helped us identify areas for improvement and guided our efforts to enhance Kpow’s accessibility features.
Published VPAT
We publish our compliance with accessibility guidelines in a VPAT included with every product release. This document provides a detailed analysis of how our products meet various accessibility standards, giving our customers transparency and confidence in our products' accessibility capabilities.
The Journey to VPAT Certification
The process of publishing a VPAT was rigorous and required significant enhancements to our products. There were four major steps that we had to undertake:
Our team had to become experts in WAI-ARIA standards to get each product to the point where we could submit to a VPAT audit.
We had to retrofit a large, complex, production-grade Web UI to achieve compliance with WCAG accessibility guidelines. It was hard, it was worth it, and if we are honest with ourselves it should have been something we had included from the start.
We adopted frontend libraries built with accessibility in mind, such as HeadlessUI and Echarts. Having such core components of our product ship with great accessibility standards built into their libraries certainly helps our team, and is a testament to the broader developer community.
We engaged an external consultant and auditor, AccessibilityOZ, who iterated with us to create the best, most accessible, outcome.
What Comes Next?
A commitment to publish a VPAT for each product release is a significant milestone, but it is by no means the end of the journey.
Accessibility is a constantly evolving field, and we are committed to staying at the forefront of best practices and emerging standards. We will continue to refine and improve our products, ensuring that they remain accessible to all users.
At Factor House, empowerment is a core value that drives our company and product development. The successful completion of our VPAT certification is a testament to our dedication to this value, and we are determined to continue in our role as a leader in developer tooling.
We would like to extend our sincere thanks to our customers, partners, and team members who have supported us on this journey. Together, we are making technology more inclusive and empowering every developer to achieve their full potential with Kpow.
How do skateboards and green suns drive web accessibility at Factor House? Learn why building accessible products is important to us, and how we've changed to ensure that accessibility is embedded in our development process.
The Importance of Accessibility at Factor House: A CEO's Perspective
When I was eleven there was an activity at school to draw a picture for your grandparents; the picture would then be laminated for you to give them as a gift. I was in my cool skateboarding phase, so I drew myself skateboarding on a half-pipe - and then I colored the sun in green. My teacher came over and asked about my choice of colors. Some of the other kids had a curious peek, because they could see that I had done something wrong - even if I couldn't. That was the moment I learned that I am colorblind.
Color blindness hasn't significantly impacted my career beyond curtailing early thoughts of being a pilot. These days I use color blind modes in tools like GitHub to help with my work, and the biggest practical problem that protanopia gives me is that I can't tell when a banana is ripe until it starts getting spots. I eat a lot of green bananas.
Color blindness has, however, given me a better awareness of how easily products can become inaccessible. When a customer asked us if we could undertake to provide a Voluntary Product Accessibility Template (VPAT) for Kpow for Apache Kafka, we jumped at the chance to understand where our products were lacking and how to improve them.
Achieving WCAG 2.1 AA Compliance
A quick note before we congratulate ourselves too much - while the request for a VPAT was new to me, Web Content Accessibility Guidelines (WCAG) were not. I first heard of WCAG while working on a user portal for an insurance company in England way back in 2002. Web UI have changed a lot since those days, but the expectation that quality work results in accessible UI has remained the same.
Providing a VPAT requires a commitment to achieve and document established accessibility standards like WCAG and Section 508 of the Rehabilitation Act. Working towards this commitment helps organisations assess and communicate the accessibility features and compliance of their products, particularly for users who rely on assistive technologies or have accessibility needs. We understood that accessibility isn't just a product feature - it's a necessity for creating inclusive products that everyone can use effectively.
We partnered with an independent consultancy, AccessibilityOZ, to ensure we were doing this right. AccessibilityOZ involves people who use various assistive technologies and have different accessibility needs in the testing of your website and product. Their comprehensive approach included multiple audits as we completed our work, and was exactly what we needed to not just meet the accessibility standards but to exceed them.
The process of obtaining our VPAT was a significant undertaking; it required us to assess every aspect of Kpow and make necessary changes to ensure compliance with accessibility guidelines. We had to dig deep into our codebase, working from the ground up to close over 100 tickets raised by AccessibilityOZ. It was a challenging process taking over 12 months, but it was also incredibly rewarding. We are better developers for it.
Working on accessibility not only improved our products for users with accessibility needs but has also enhanced the overall quality of Kpow for everyone. The improvements we made, such as better navigation and more readable layouts, have made Kpow a faster, more user-friendly tool.
There is no trade-off between accessibility and functionality; in fact, the two go hand in hand. The process of undertaking VPAT has made our products better.
Accessibility is now embedded in our development process at Factor House. We've implemented new tools like Storybook to detect accessibility issues, ensuring that every new feature we release is accessible from the start. We've also upskilled our team, making sure that all of our developers understand the importance of accessibility and are capable of delivering products that meet these standards.
As we continue to grow and evolve, accessibility will remain a core focus for us. We believe that providing accessible tools is not only the right thing to do but also essential for attracting and retaining the best customers. In today's market, organisations have a duty of care to their employees, and providing accessible tools is a fundamental part of that responsibility.
My advice to other tech leaders and developers is simple: prioritise accessibility. It's not just about ticking a box; it's about improving your product for every user. At Factor House, we've seen firsthand how focusing on accessibility has made our products better for everyone, and I'm proud of the work we've done.
For us at Factor House, accessibility is not a one-time task but an ongoing commitment to excellence in product development.
Querying Kafka topics is a critical task for engineers working on data streaming applications, but it can often be a complex and time-consuming process.
Whether you're a developer prototyping a new feature or an infrastructure engineer ensuring the stability of a production environment, having a powerful querying tool at your disposal can make all the difference. Enter Kpow's data inspect feature—designed to simplify and optimize Kafka topic queries, making it an essential tool for professionals working with Apache Kafka.
Overview
Apache Kafka and its ecosystem have emerged as the cornerstone of modern data-in-motion solutions. Our customers leverage a variety of technologies, including Kafka Streams, Apache Flink, Kafka Connect, and custom services using Java, Go, or Python, to build their data streaming applications.
Regardless of the technology stack, engineers need reliable tools to examine the foundational Kafka topics in their streaming applications. This is where Kpow's data inspect feature comes into play. Data inspect offers ad hoc, bounded queries across Kafka topics, proving valuable in both development and production scenarios. Here’s how it can be particularly useful:
Key use cases in development
Validating data structures: Verifying and validating the shape of data (both key and value) during the prototyping phase.
Monitoring message flow: Ensuring that messages are flowing to the topic as expected and that topic message distribution is well balanced across all partitions.
Debugging and troubleshooting: Identifying and resolving issues in the development phase. For example, validating that your topic's compaction policy is being applied or that segments are being deleted as expected.
Critical applications in production
Identifying poison messages: Quickly identifying and addressing messages that cause downstream issues, such as broken consumer groups.
Reconciliation and Analytics: Querying for specific events for reconciliation or analytic purposes.
Monitoring and Alerting: Keeping track of Kafka topics for anomalies or unusual activity.
Compliance and Auditing: Ensuring compliance with data governance standards and auditing access to sensitive data.
Capacity Planning: Planning and scaling infrastructure based on the volume and velocity of data flowing through topics.
This article will dive into the technical details of Kpow's data inspect query engine and how you can maximise your own querying in Kafka. Whether you're a developer looking to validate data during development or part of the infrastructure team tasked with ensuring the stability and performance of your production Kafka clusters, data inspect offers a powerful set of tools to help you get the most out of your Kafka deployments.
Introduction to Data Inspect
Kpow’s data inspect gives teams the ability to perform bounded queries across one or more Kafka topics. A bounded query retrieves a specific range or subset of data from a Kafka topic, informed by user input through the data inspect form. Users can specify:
A Date Range: An ISO 8601 date range specifying the start and end bounds of the query.
An Offset Range: The start offset from where you'd like the query to begin (especially useful when searching against a partition or key).
Kpow’s data inspect form simplifies the querying process by offering common query options as defaults. For instance, to view the most recent messages in a topic, Kpow's default query window is set to 'Recent' (e.g., the last 15 minutes). Users can also specify custom date times or timestamps for more fine-grained queries.
Additionally, the data inspect form allows input of topic SerDes and any filters to apply against the result set, which will be explained below.
The Query Plan
Once all inputs are provided, Kpow constructs a query plan similar to that of a SQL engine. This plan optimizes the execution of the query and efficiently parallelizes queries across a pool of consumer groups. It’s this query engine that powers Kpow’s blazingly fast multi-topic search.
The query engine ensures an even distribution of records from all topic partitions when querying. An even distribution is crucial for understanding a topic's performance because it ensures that the analysis is based on a representative sample of the data. If certain partitions are overrepresented, the analysis may be skewed, leading to inaccurate insights.
The cursors table, part of the data inspect result metadata, displays the comprehensive progress of the query, detailing the start and end offsets for each topic partition, the number of records scanned, and the remaining offsets to query.
We understand your data
Kpow supports a wide array of commonly used data formats (known as SerDes). These formats include:
JSON
AVRO
JSON Schema
Protobuf
Clojure formats such as EDN or Transit/JSON
XML, YAML and raw strings
Kpow integrates with both Confluent's Schema Registry and AWS Glue. Our documentation has guides on how you can configure Kpow's Schema Registry integration.
If we don't support a data format you use (for example you use Protobuf with your own encryption-at-rest) you can import your own custom SerDes to use with Kpow. Visit our documentation to learn more about custom SerDes.
jq filters for Kafka
No matter which message format you use, filtering messages in Kpow works transparently across every deserializer.
kJQ is the filtering engine we have built into Kpow. It's a subset of the jq programming language built specifically for Kafka workloads, and is embedded within Kpow's data inspect.
jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.
Instead of creating yet another bespoke querying language that our customers would have to learn, we chose jq, one of the most concise, powerful, and immediately familiar querying languages available.
An example of a kJQ query:
.key.currency == "GBP" and
.value.tx.price | to-double < 16.50 and
.value.tx.pan | endswith("8649")
If you are unfamiliar with jq, or want to learn more we generally recommend the following resources:
jq playground: an online interactive playground for jq filters.
Kpow's kREPL: Kpow has a built-in REPL. It is our programmatic interface into Kpow's data inspect functionality. Within the kREPL you can experiment with kJQ queries - much like the jq playground.
While the kREPL is out of scope for this blog post, stay tuned for future articles where we’ll take a deep dive into how you can use kJQ to construct sophisticated filters and data transformations right within Kpow.
Enterprise security built-in
Filtering data is only part of the equation. To make ad-hoc queries against production data safe, Kpow provides enterprise-grade security features:
Role-Based Access Control (RBAC)
Kpow's declarative RBAC system is defined in a YAML file, where you can assign policies to user roles authenticated from an external identity provider (IdP). This allows you to permit or deny access to Kafka topics based on user roles. For example, you could define policies so that:
Any user assigned to the dev role will have access to any topic starting with tx_trade_* for only the confluent-cloud1 cluster. All other topics will be implicitly denied.
Any user assigned to the admin role will have access to all topics for all clusters managed by Kpow.
All other users are implicitly denied access to data inspect functionality.
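As a rough sketch (the field names here approximate Kpow's documented schema and should be checked against the RBAC documentation), such a policy file could look like:

# Illustrative only - consult the Kpow RBAC docs for the exact schema
policies:
  - role: "dev"
    effect: "Allow"
    actions: [ "TOPIC_INSPECT" ]
    resource: [ "cluster", "confluent-cloud1", "topic", "tx_trade_*" ]
  - role: "admin"
    effect: "Allow"
    actions: [ "TOPIC_INSPECT" ]
    resource: [ "topic", "*" ]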
Data Masking
In environments where compliance with PII requirements is mandatory, data masking is essential. Kpow’s data masking feature allows you to define policies specifying which fields in a message should be redacted in the key, value, or headers of a record. These policies apply to nested data structures or arrays within messages. For instance, a policy might:
Show only the last 4 characters of a field (ShowLast4)
Show only the email host (ShowEmailHost)
Return a SHA hash of the contents (SHAHash)
Fully redact the contents (Full)
Kpow provides a data masking sandbox where users can validate policies against test data, ensuring that redaction methods work as expected before deploying them.
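As a rough illustration, a masking policy could take a shape like the following. The field names are hypothetical; only the redaction types (ShowLast4, ShowEmailHost, SHAHash, Full) come from the description above, so consult the Kpow data masking docs for the real schema.

# Illustrative only - field names are hypothetical
data-policies:
  - name: "Mask card numbers"
    redaction: "ShowLast4"
    fields: [ [ "value", "tx", "pan" ] ]
  - name: "Redact customer emails"
    redaction: "ShowEmailHost"
    fields: [ [ "value", "customer", "email" ] ]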
Data Governance
Maintaining a comprehensive audit log is crucial for ensuring data governance and regulatory compliance. Kpow's audit log records all queries performed against topics, providing a detailed trail of who accessed the data, what topics were accessed, and when the query occurred. This information is vital for monitoring and enforcing data security policies, detecting unauthorized access, and demonstrating compliance with regulations such as GDPR, HIPAA, or PCI DSS. Within Kpow’s admin UI, navigate to the "Audit Log" page and then to the "Queries" tab to view all queries performed using Kpow.
Getting started today
Kpow's data inspect feature revolutionizes the way professionals work with Apache Kafka, offering a comprehensive toolkit for querying Kafka topics with ease and efficiency. Whether you're validating data structures, monitoring message flow, or troubleshooting issues, Kpow provides the tools you need to streamline your workflow and optimize your Kafka-based applications.
Ready to take your Kafka querying to the next level? Sign up for Kpow's free community edition today and start exploring the power of Kpow's data inspect feature. Experience firsthand why Kpow is the number one toolkit for Apache Kafka and unlock new possibilities for managing and optimizing your Kafka clusters.
This minor version release from Factor House improves protobuf rendering, sharpens light-mode, simplifies community edition setup, resolves a number of small bugs, and bumps Kafka client dependencies to v3.7.0.
We observed consumer issues being resolved in a client installation of Kpow once the Kafka client version was bumped above 3.6.2. We will keep an eye on this area for the time being and produce a more detailed blog post in the future.
Improve Protobuf rendering
Data inspect now shows default properties for Protobuf messages for both AWS Glue and Confluent Schema Registry.
In the screenshot below, occupied has a value of false, which is the default value for a boolean. It is now displayed in the results and can also be used in kJQ filters.
We have improved the light mode in Flex and Kpow: elements have more contrast and visible borders.
This is an area of continuous improvement for us, as we take accessibility very seriously. If you want to see further improvements in this space, please give us feedback.
Import properties files in Kpow Community Setup wizard
Users can now import a .properties file in the Kpow Community Setup wizard to set the connection details for Apache Kafka by clicking the Import properties file button.
Release 93.4: Protobuf, Light Mode, and Community
[MELBOURNE, AUS] Apache Kafka and Apache Flink Meetup, 27 November
Melbourne, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
[SYDNEY, AUS] Apache Kafka and Apache Flink Meetup, 26 November
Sydney, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
We’re building more than products, we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.