The new unified Factor House Community License works with both Kpow Community Edition and Flex Community Edition, so you only need one license to unlock both products. This makes it even simpler to explore modern data streaming tools, create proof-of-concepts, and evaluate our products.
What's changing
Previously, we issued separate community licenses for Kpow and Flex, with different tiers for individuals and organisations. Now there's a single Community License that unlocks both products.
What's new:
One license for both products
Three environments for everyone - whether you're an individual developer or part of a team, you get three non-production installations per product
Simplified management - access and renew your licenses through our new self-service portal at account.factorhouse.io
Our commitment to the engineering community
Since first launching Kpow CE at Current '22, thousands of engineers have used our community licenses to learn Kafka and Flink without jumping through enterprise procurement hoops. This unified license keeps that same philosophy: high-quality tools that are free for non-production use.
The Factor House Community License is free for individuals and organizations to use in non-production environments. Here's how to get started:
New users: Head to account.factorhouse.io to grab your free Community license. You'll receive instant access via magic link authentication.
Existing users: Your legacy Kpow and Flex Community licenses will continue to work and are now visible in the portal. When your license renews (after 12 months), consider switching to the unified model for easier management.
What's included
Both Kpow CE and Flex CE include most enterprise features, optimized for learning and testing: Kafka and Flink monitoring and management, fast multi-topic search, and Schema Registry and Kafka Connect support.
License duration: 12 months, renewable annually
Installations: Up to 3 per product (Kpow CE: 1 Kafka cluster + 1 Schema Registry + 1 Connect cluster per installation; Flex CE: 1 Flink cluster per installation)
Support: Self-service via Factor House Community Slack, documentation, and release notes
Deployment: Docker, Docker Compose or Kubernetes
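As an illustration, a minimal Docker launch of Kpow CE might look like the following sketch. The BOOTSTRAP address is an example value, and the env file is assumed to hold the license variables issued with your Community license.

## Run Kpow CE against a reachable Kafka cluster (illustrative values)
docker run -p 3000:3000 \
  -e BOOTSTRAP="my-broker:9092" \
  --env-file ./kpow.license.env \
  factorhouse/kpow-ce:latest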
Ready for production? Start a 30-day free trial of our Enterprise editions directly from the portal to unlock RBAC, Kafka Streams monitoring, custom SerDes, and dedicated support.
What about legacy licenses?
If you're currently using a Kpow Individual, Kpow Organization, or Flex Community license, nothing changes immediately. Your existing licenses will continue to work with their respective products and are now accessible in the portal. When your license expires at the end of its 12-month term, you can easily switch to the new unified license for simpler management.
Release 95.1: A unified experience across product, web, docs and licensing
95.1 delivers a cohesive experience across Factor House products, licensing, and brand. This release introduces our new license portal, refreshed company-wide branding, a unified Community License for Kpow and Flex, and a series of performance, accessibility, and schema-related improvements.
Upgrading to 95.1
If you are using Kpow with a Google Managed Service for Apache Kafka (Google MSAK) cluster, you will now need to use either kpow-java17-gcp-standalone.jar or the 95.1-temurin-ubi tag of the factorhouse/kpow Docker image.
New Factor House brand: unified look across web, product, and docs
We've refreshed the Factor House brand across our website, documentation, the new license portal, and products to reflect where we are today: a company trusted by engineers running some of the world's most demanding data pipelines. Following our seed funding earlier this year, we've been scaling the team and product offerings to match the quality and value we deliver to enterprise engineers. The new brand brings our external presence in line with what we've built. You'll see updated logos in Kpow and Flex, refreshed styling across docs and the license portal, and a completely redesigned website with clearer navigation and information architecture. Your workflows stay exactly the same, and the result is better consistency across all touchpoints, making it easier for new users to evaluate our tools and for existing users to find what they need.
New license portal: self-service access for all users
We've rolled out our new license portal at account.factorhouse.io to streamline license management for everyone. New users can instantly grab a Community or Trial license with just their email address, and existing users will see their migrated licenses when they log in. The portal lets you manage multiple licenses from one account, whether that's upgrading from Community to a Trial, renewing your annual Community License, or requesting a trial extension, all through a clean, modern interface with magic link authentication. For installation and configuration guidance, check our Kpow and Flex docs.
We've consolidated our Community licensing into a single unified license that works with both Kpow Community Edition and Flex Community Edition. Your Community license allows you to run Kpow and Flex in up to three non-production environments each, making it easier to learn, test, and build with Kafka and Flink. The new license streamlines management, providing a single key for both products and annual renewal via the license portal. It's perfect for exploring projects like Factor House Local or building your own data pipelines. Existing legacy licenses will continue to work and are also accessible in the license portal.
This release brings a number of performance improvements to Kpow, Flex, and Factor Platform. The time taken to compute and materialize views and insights about your Kafka and Flink resources has decreased by an order of magnitude, and for our top-end customers we have observed a 70% performance increase in Kpow's materialization.
Data Inspect enhancements
Confluent Data Rules support: Data Inspect now supports Confluent Schema Registry Data Rules, including CEL, CEL_FIELD, and JSONata rule types. If you're using Data Contracts in Confluent Cloud, Data Inspect now accurately identifies rule failures and lets you filter them with kJQ.
Support for Avro primitive types: We've added support for Avro schemas that consist of a plain primitive type, including string, numeric, and boolean types.
Schema Registry & navigation improvements
General Schema Registry improvements (from 94.6): In 94.6, we introduced improvements to Schema Registry performance and updated the observation engine. This release continues that work, with additional refinements based on real-world usage.
Karapace compatibility fix: We identified and fixed a regression in the new observation engine that affected Karapace users.
Redpanda Schema Registry note: The new observation engine is not compatible with Redpanda’s Schema Registry. Customers using Redpanda should set `OBSERVATION_VERSION=1` until full support is available.
Navigation improvements: Filters on the Schema Overview pages now persist when navigating into a subject and back.
Chart accessibility & UX improvements
This release brings a meaningful accessibility improvement to Kpow & Flex: Keyboard navigation for line charts. Users can now focus a line chart and use the left and right arrow keys to view data point tooltips. We plan to expand accessibility for charts to include bar charts and tree maps in the near future, bringing us closer to full WCAG 2.1 Level AA compliance as reported in our Voluntary Product Accessibility Template (VPAT).
We’ve also improved the UX of comparing adjacent line charts: Each series is now consistently coloured across different line charts on a page, making it easier to identify trends across a series, e.g., a particular topic’s producer write/s vs. consumer read/s.
These changes benefit everyone: developers using assistive technology, teams with accessibility requirements, and anyone who prefers keyboard navigation. Accessibility isn't an afterthought; it's a baseline expectation for enterprise-grade tooling, and we're committed to leading by example in the Kafka and Flink ecosystem.
Streamline your Kpow deployment on Amazon EKS with our guide, fully integrated with the AWS Marketplace. We use eksctl to automate IAM Roles for Service Accounts (IRSA), providing a secure integration for Kpow's licensing and metering. This allows your instance to handle license validation via AWS License Manager and report usage for hourly subscriptions, enabling a production-ready deployment with minimal configuration.
This guide provides a comprehensive walkthrough for deploying Kpow, a powerful toolkit for Apache Kafka, onto an Amazon EKS (Elastic Kubernetes Service) cluster. We will cover the entire process from start to finish, including provisioning the necessary AWS infrastructure, deploying a Kafka cluster using the Strimzi operator, and finally, installing Kpow using a subscription from the AWS Marketplace.
The guide demonstrates how to set up both Kpow Annual and Kpow Hourly products, highlighting the specific integration points with AWS services like IAM for service accounts, ECR for container images, and the AWS License Manager for the annual subscription. By the end of this tutorial, you will have a fully functional environment running Kpow on EKS, ready to monitor and manage your Kafka cluster.
The source code and configuration files used in this guide can be found in the features/eks-deployment folder of this GitHub repository.
About Factor House
Factor House is a leader in real-time data tooling, empowering engineers with innovative solutions for Apache Kafka® and Apache Flink®.
Our flagship product, Kpow for Apache Kafka, is the market-leading enterprise solution for Kafka management and monitoring.
VPC: A Virtual Private Cloud (VPC) that has both public and private subnets is required.
IAM Permissions: A user with the necessary IAM permissions to create an EKS cluster with a service account.
Kpow Subscription:
A subscription to a Kpow product through the AWS Marketplace is required. After subscribing, you will receive access to the necessary components and deployment instructions.
The specifics of accessing the container images and Helm chart depend on the chosen Kpow product:
Kpow Annual product:
Subscribing to the annual product provides access to the ECR (Elastic Container Registry) image and the corresponding Helm chart.
Kpow Hourly product:
For the hourly product, access to the ECR image will be provided and deployment utilizes the public Factor House Helm repository for installation.
Deploy an EKS cluster
We will use eksctl to provision an Amazon EKS cluster. The configuration for the cluster is defined in the manifests/eks/cluster.eksctl.yaml file within the repository.
Before creating the cluster, you must open this file and replace the placeholder values for <VPC-ID>, <PRIVATE-SUBNET-ID-*>, and <PUBLIC-SUBNET-ID-*> with your actual VPC and subnet IDs.
⚠️ The provided configuration assumes the EKS cluster will be deployed in the us-east-1 region. If you intend to use a different region, you must update the metadata.region field and ensure the availability zone keys under vpc.subnets (e.g., us-east-1a, us-east-1b) match the availability zones of the subnets in your chosen region.
The cluster.eksctl.yaml file defines the following:
Cluster Metadata: A cluster named fh-eks-cluster in the us-east-1 region.
VPC: Specifies an existing VPC and its public/private subnets where the cluster resources will be deployed.
IAM with OIDC: Enables the IAM OIDC provider, which allows Kubernetes service accounts to be associated with IAM roles. This is crucial for granting AWS permissions to your pods.
Service Accounts:
kpow-annual: Creates a service account for the Kpow Annual product. It attaches the AWSLicenseManagerConsumptionPolicy, allowing Kpow to validate its license with the AWS License Manager service.
kpow-hourly: Creates a service account for the Kpow Hourly product. It attaches the AWSMarketplaceMeteringRegisterUsage policy, which is required for reporting usage metrics to the AWS Marketplace.
Node Group: Defines a managed node group named ng-dev with t3.medium instances. The worker nodes will be placed in the private subnets (privateNetworking: true).
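Abbreviated, that configuration looks something like the following sketch. The subnet IDs are placeholders, the node count is illustrative, and you should refer to manifests/eks/cluster.eksctl.yaml in the repository for the exact content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fh-eks-cluster
  region: us-east-1

vpc:
  id: <VPC-ID>
  subnets:
    private:
      us-east-1a: { id: <PRIVATE-SUBNET-ID-1> }
      us-east-1b: { id: <PRIVATE-SUBNET-ID-2> }
    public:
      us-east-1a: { id: <PUBLIC-SUBNET-ID-1> }
      us-east-1b: { id: <PUBLIC-SUBNET-ID-2> }

iam:
  withOIDC: true
  serviceAccounts:
    # service account for Kpow Annual: validates licenses with AWS License Manager
    - metadata:
        name: kpow-annual
        namespace: factorhouse
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy
    # service account for Kpow Hourly: reports usage to AWS Marketplace metering
    - metadata:
        name: kpow-hourly
        namespace: factorhouse
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage

managedNodeGroups:
  - name: ng-dev
    instanceType: t3.medium
    desiredCapacity: 2
    privateNetworking: true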
Once you have updated the YAML file with your networking details, run the following command to create the cluster. This process can take 15-20 minutes to complete.
eksctl create cluster -f cluster.eksctl.yaml
Once the cluster is created, eksctl automatically updates your kubeconfig file (usually located at ~/.kube/config) with the new cluster's connection details. This allows you to start interacting with your cluster immediately using kubectl.
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ip-192-168-...-21.ec2.internal Ready <none> 2m15s v1.32.9-eks-113cf36
# ...
Launch a Kafka cluster
With the EKS cluster running, we will now launch an Apache Kafka cluster into it. We will use the Strimzi Kafka operator, which simplifies the process of running Kafka on Kubernetes.
Install the Strimzi operator
First, create a dedicated namespace for the Kafka cluster.
kubectl create namespace kafka
Next, download the Strimzi operator installation YAML. The repository already contains the file manifests/kafka/strimzi-cluster-operator-0.45.1.yaml, but the following commands show how it was downloaded and modified for this guide.
## Define the Strimzi version and download URL
STRIMZI_VERSION="0.45.1"
DOWNLOAD_URL=https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VERSION/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
## Download the operator manifest
curl -L -o manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml ${DOWNLOAD_URL}
## Modify the manifest to install the operator in the 'kafka' namespace
sed -i 's/namespace: .*/namespace: kafka/' manifests/kafka/strimzi-cluster-operator-$STRIMZI_VERSION.yaml
Now, apply the manifest to install the Strimzi operator in your EKS cluster.
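kubectl apply -f manifests/kafka/strimzi-cluster-operator-0.45.1.yaml -n kafka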
The configuration for our Kafka cluster is defined in manifests/kafka/kafka-cluster.yaml. It describes a simple, single-node cluster suitable for development, using ephemeral storage, meaning data will be lost if the pods restart.
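Apply the manifest to create the Kafka cluster:

kubectl apply -f manifests/kafka/kafka-cluster.yaml -n kafka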
After a few minutes, all the necessary pods and services for Kafka will be running. You can verify this by listing all resources in the kafka namespace.
kubectl get all -n kafka -o name
The output should look similar to this, showing the pods for Strimzi, Kafka, Zookeeper, and the associated services. The most important service for connecting applications is the Kafka bootstrap service.
Now that the EKS and Kafka clusters are running, we can deploy Kpow. This guide covers the deployment of both Kpow Annual and Kpow Hourly products. Both deployments will use a common set of configurations for connecting to Kafka and setting up authentication/authorization.
First, ensure you have a namespace for Kpow. The eksctl command we ran earlier already created the service accounts in the factorhouse namespace, so we will use that. If you hadn't created it, you would run kubectl create namespace factorhouse.
Create ConfigMaps
We will use two Kubernetes ConfigMaps to manage Kpow's configuration. This approach separates the core configuration from the Helm deployment values.
kpow-config-files: This ConfigMap holds file-based configurations, including RBAC policies, JAAS configuration, and user properties for authentication.
kpow-config: This ConfigMap provides environment variables to the Kpow container, such as the Kafka bootstrap address and settings to enable our authentication provider.
The contents of these files can be found in the repository at manifests/kpow/config-files.yaml and manifests/kpow/config.yaml.
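Apply both manifests to create the ConfigMaps:

kubectl apply -f manifests/kpow/config-files.yaml -n factorhouse
kubectl apply -f manifests/kpow/config.yaml -n factorhouse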
kubectl get configmap -n factorhouse
# NAME DATA AGE
# kpow-config 5 ...
# kpow-config-files 3 ...
Deploy Kpow Annual
Download the Helm chart
The Helm chart for Kpow Annual is in a private Amazon ECR repository. First, authenticate your Helm client.
# Enable Helm's experimental support for OCI registries
export HELM_EXPERIMENTAL_OCI=1
# Log in to the AWS Marketplace ECR registry
aws ecr get-login-password \
--region us-east-1 | helm registry login \
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
Next, pull and extract the chart.
# Create a directory, pull the chart, and extract it
mkdir -p awsmp-chart && cd awsmp-chart
# Pull the latest version of the Helm chart from ECR (add --version <x.x.x> to specify a version)
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/factor-house/kpow-aws-annual
tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete
cd ..
Launch Kpow Annual
Now, install Kpow using Helm. We will reference the service account kpow-annual that was created during the EKS cluster setup, which has the required IAM policy for license management.
Note: The CPU and memory values are intentionally set low for this guide. For production environments, check the official documentation for recommended capacity.
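A sketch of the install command, assuming the chart was extracted to awsmp-chart/kpow-aws-annual. The values file name (values/eks-annual.yaml) and the serviceAccount value keys follow common Helm chart conventions and may differ in the actual chart:

helm install kpow-annual ./awsmp-chart/kpow-aws-annual \
  --namespace factorhouse \
  --set serviceAccount.create=false \
  --set serviceAccount.name=kpow-annual \
  -f values/eks-annual.yaml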
Verify and access Kpow Annual
Check that the Kpow pod is running successfully.
kubectl get all -l app.kubernetes.io/instance=kpow-annual -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-annual-kpow-aws-annual-c6bc849fb-zw5ww 0/1 Running 0 46s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-annual-kpow-aws-annual ClusterIP 10.100.220.114 <none> 3000/TCP 47s
# ...
To access the UI, forward the service port to your local machine.
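For example:

kubectl port-forward -n factorhouse svc/kpow-annual-kpow-aws-annual 3000:3000

Then open http://localhost:3000 in your browser.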
Deploy Kpow Hourly
The Helm values for the hourly product are defined in values/eks-hourly.yaml.
# values/eks-hourly.yaml
env:
  ENVIRONMENT_NAME: "Kafka from Kpow Hourly"
envFromConfigMap: "kpow-config"
volumeMounts:
  # ... (volume configuration is the same as annual)
volumes:
  # ...
resources:
  # ...
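The hourly product installs from the public Factor House Helm repository. A sketch follows, with the repository URL and chart name as assumptions to confirm against your AWS Marketplace fulfilment instructions:

helm repo add factorhouse https://charts.factorhouse.io
helm repo update
helm install kpow-hourly factorhouse/kpow-aws-hourly \
  --namespace factorhouse \
  --set serviceAccount.create=false \
  --set serviceAccount.name=kpow-hourly \
  -f values/eks-hourly.yaml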
Verify and access Kpow Hourly
Check that the Kpow pod is running.
kubectl get all -l app.kubernetes.io/instance=kpow-hourly -n factorhouse
# NAME READY STATUS RESTARTS AGE
# pod/kpow-hourly-kpow-aws-hourly-68869b6cb9-x9prf 0/1 Running 0 83s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kpow-hourly-kpow-aws-hourly ClusterIP 10.100.221.36 <none> 3000/TCP 85s
# ...
To access the UI, forward the service port to a different local port (e.g., 3001) to avoid conflicts.
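For example:

kubectl port-forward -n factorhouse svc/kpow-hourly-kpow-aws-hourly 3001:3000

Then open http://localhost:3001 in your browser.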
In this guide, we have deployed a complete environment for monitoring Apache Kafka on AWS. By leveraging eksctl, we provisioned an EKS cluster with correctly configured IAM Roles for Service Accounts, a critical step for secure integration with AWS services. We then deployed a Kafka cluster using the Strimzi operator, demonstrating the power of Kubernetes operators in simplifying complex stateful applications.
Finally, we walked through the deployment of both Kpow Annual and Kpow Hourly from the AWS Marketplace. This showcased the flexibility of Kpow's subscription models and their seamless integration with AWS for licensing and metering. You are now equipped with the knowledge to set up, configure, and manage Kpow on EKS, unlocking powerful insights and operational control over your Kafka ecosystem.
This article teaches you how to configure Kpow to restrict visibility of Kafka resources with Multi-Tenancy.
Kpow provides sophisticated Role-Based Access Control to allow, deny, or stage user actions for any Kafka resource, down to a group or topic level. However, for some of our users, controlling the actions a user can take wasn't quite enough.
"I have hundreds of topics and groups, showing users all of them is confusing. Can I restrict visibility of resources with RBAC?"
The best part of working on Kpow is understanding the needs of engineering teams who use Apache Kafka. On the face of it, using RBAC to restrict visibility as well as control of resources is reasonable, but when we considered the broader idea we realised this was a bigger problem.
Introducing Multi-Tenancy
Kpow Multi-Tenancy allows you to assign user roles to one or more tenants.
Each tenant explicitly includes or excludes resources such as Kafka Clusters, Groups, Topics, Schema Registries and Connect Clusters.
A user role may be assigned multiple tenants, and a user with multiple tenants has the ability to easily switch between them.
When operating within a tenant a user can only see resources included by that tenant or create resources that would be valid within that tenant.
Importantly, users will see a fully consistent synthetic cluster-view of their aggregated resources. The overall user experience is simply of a restricted set of Kafka resources as if they were truly the only resources in the system.
Now our user with hundreds of groups and topics can configure views for different business units and provide a simplified Kafka experience to their users.
Tenants In Action
Let's start at the end. Below you can see the Broker UI of two different tenants operating in the one Kpow instance:
Global tenant is configured to contain all resources
Transaction tenant is configured to contain only topics starting with tx_*
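As a purely illustrative sketch, tenant definitions of that shape might be expressed as the following YAML. The exact schema lives in the Kpow multi-tenancy docs, and the field names here are hypothetical:

tenants:
  # sees every resource attached to Kpow
  - name: "Global"
    include:
      - resource: [ "*" ]
  # sees only topics matching the tx_* prefix
  - name: "Transaction"
    include:
      - resource: [ "topic", "tx_*" ]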
Global Tenant UI
We can see 233 topics in the global tenant.
Transaction Tenant UI
The transaction tenant only shows 200 topics, and they are much more uniform.
A user can switch between these two tenants if they have roles with each tenant assigned. Kpow continues to observe and control all attached Kafka resources, but provides a consistent view of synthetic clusters constructed of only the groups and topics included in each tenant. Aggregated metrics like write/s and total disk space can be seen in either view with different figures.
Uses of Multi-Tenancy
The primary intended use of Multi-Tenancy is for you to provide restricted views of Kafka resources to users from different teams in your organization.
However, with the growth of Managed Kafka Services, you may also want to configure basic tenants that exclude topics and groups of no regular interest.
Kpow stores all information regarding your Kafka resources in internal topics within your cluster, including an audit log of user actions. Kpow is also constructed of two Kafka Streams applications that run in unison to build the telemetry presented back to you.
A common user request has been to hide these internal topics and groups in the general UI, as they're not of interest to most end users. Previously this required complicated in-place exclusion in the front end, and aggregated metrics were hard to achieve.
If you have no tenants configured, Kpow automatically provides two: a Global tenant that shows all attached Kafka resources, and a Kpow Hidden tenant that hides Kpow's own resources and the consumer offsets topic.
You may want to provide tenants for specific business units, or you might just want to exclude internal topics from your cloud or managed service provider, or both!
Restrict the set of Kafka resources that are accessible to a user role from all the resources available to Kpow. A user role may be assigned multiple tenants.
When operating within a tenant, a user can only see resources included by that tenant; they will also see a fully consistent synthetic cluster-view of their aggregated resources.
The overall user experience is simply of a restricted set of Kafka resources as if they were truly the only resources in the system.
Kpow Streaming Search allows you to automatically continue queries until:
The number of results returned matches 'Result Limit' (default 100)
The number of scanned records exceeds 'Scan Boundary'
The query reaches the end of the topic
In function, Streaming Search is precisely the same as Data Inspect. In our benchmarking tests we were able to search 1M+ messages from multiple topics in under a minute.
Confluent Metrics
Kpow now offers Confluent Cloud metrics integration, allowing you to visualise disk usage metrics and active client connections.
Release 81: Multi-Tenancy, Streaming Search, and Confluent Metrics
Also included in this release is multi-topic inspect by regex or consumer group, new 'freshness' metrics for topics and consumer groups, configurable columns for all tabular data, and a number of minor bug fixes and improvements.
Kafka Streams UI
Kpow provides a top-level UI for Kafka Streams applications.
Configure the open-source Kpow Streams Agent and quickly see summary metrics on stream activity, reads, and lag. Use the Kpow Workflows UI to dive into topology visualisation.
Topic Search by Regex or Group
Kpow's powerful Data Inspect functionality allows you to inspect multiple topics at once, and now you can specify topics by regex, or choose to inspect the same topics being consumed by a consumer group.
Freshness Metrics
See how long it has been since a topic was produced to or a consumer group read a message - right down to a topic partition or group assignment level.
Three new metrics are exposed via Prometheus for alerting purposes:
topic_production_inactive_mins
topic_consumption_inactive_mins
group_consumption_inactive_mins
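For example, a Prometheus alerting rule built on one of these metrics might look like the following sketch. The topic label name is an assumption; check the metric's actual labels on your Prometheus endpoint:

groups:
  - name: kpow-freshness
    rules:
      - alert: TopicProductionInactive
        # fires when a topic has received no new records for over an hour
        expr: topic_production_inactive_mins > 60
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Topic {{ $labels.topic }} has had no new records for over an hour"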
Configurable Columns
As we introduce even more power to the Kpow UI, some of the tabular data has become a little crowded - so now you can choose the columns to display for each table, and see a simple description of what each column represents.
Release 80: Kafka Streams, Topic Regex Search, Freshness Metrics
Temporary policies allow Admins to assign access control policies for a fixed duration. This blog post introduces temporary policies with an all-too-common real-world scenario.
This article introduces Kpow for Apache Kafka®'s new Temporary Policies feature.
Introducing Temporary Policies
Introduced to the Kpow Kafka Management and Monitoring toolkit in v79 is the ability to Stage Mutations, create Temporary Role-Based Access Control Policies (temporary policies), and a suite of new admin features giving Admin Users greater control over Kpow.
This blog post introduces temporary policies through the lens of a common real-world scenario.
Temporary policies allow Admins to assign access control policies for a fixed duration. A common use case would be providing a user TOPIC_INSPECT access to read data from a topic for an hour while resolving an issue in a Production environment.
Temporary Policies Use Case
You wake up one morning to a dreaded sight: a poison message has taken down one of your services.
Your team decides the simplest solution is to skip the message by incrementing your consumer group's offset for the topic.
Now here's the problem. Access to production is limited, and for such a simple action (incrementing the offset), a team member generally must jump through the hoops of configuring the VPN, connecting to the jumpbox, and making sure they execute the right combination of bash commands against the Kafka cluster.
Often these operations are unnecessarily time-consuming, brittle, and frustrating in a time-critical moment when you need to restore a production service. Furthermore, the jumpbox generally has full access to the Kafka cluster, and there is no audit log recording the actions taken.
In combination with Kpow's existing Role-Based Access Controls and powerful mutation actions, Temporary Policies improve this experience by giving teams the tools they need to easily effect change in a secured environment, like production, when things go wrong.
Configuring Role-Based Access Control
In this example, two roles are coming from our Identity provider: devs and owners.
We will assign anyone with the owners role admin access, and give them GROUP_EDIT access to the production cluster.
The devs role will be implicitly denied from undertaking any action against the cluster, but is authorized for read-only access to view the production cluster in Kpow.
Our example RBAC yaml file might look something like:
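# shape follows Kpow's documented RBAC configuration; the cluster pattern is illustrative
admin_roles:
  - "owners"

policies:
  # owners may edit consumer groups on the production cluster
  - resource: [ "cluster", "*" ]
    effect: "Allow"
    actions: [ "GROUP_EDIT" ]
    role: "owners"

# devs have no Allow policies, so mutating actions are implicitly denied;
# they retain Kpow's default read-only visibility of the cluster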
This configuration prevents regular developers from making changes against the production cluster.
The Poison Pill
Today is the day when your team has to fix the consumer group on the production cluster.
Everyone has been briefed on the plan, and it has been decided that the team lead will temporarily grant the devs role Allow access for GROUP_EDIT. This will enable one of the developers on the team to make the required change to the production cluster.
This has been done through the Temporary Policies section of Kpow's settings UI:
Once a temporary policy has been created, team members can be notified via Slack with the Kpow Slack integration.
Incrementing the offset
A team member has been tasked with the job of incrementing the offset of the consumer group for the problematic topic.
The developer looks at the application logs and notices that it is partition 3 of topic tx_trade1 that contains the poison message.
The erroring consumer group is named trade_b2.
The developer then opens Kpow, navigates to the "Workflows" tab, and selects the consumer group.
From within the consumer group view, the dev clicks on the partition and selects "Skip Offset".
This action will schedule the mutation, and once someone on the team scales down the trade_b2 service, the offset will be incremented.
Post-Mortem
Kpow also provides valuable information and insights for teams to use after a production incident when you are completing your incident post-mortem.
Kpow has an Audit Log for Data Governance, and all the actions undertaken to resolve a production incident are persisted in Kpow's audit log topic. This means you can use the Audit Log to see the recorded history of all actions taken to restore the production service.
Inspecting the audit log message reveals the offset that was skipped.
You can use Kpow's data inspect functionality to view the poison message to help investigate why that message took down the consumer group.
You can find further information on setting up, viewing and managing temporary policies here.
Further reading/references
Explore our documentation to learn more about the Kpow features mentioned in this article:
Manage, Monitor and Learn Apache Kafka with Kpow by Factor House.
We know how easy Apache Kafka® can be with the right tools. We built Kpow to make the developer experience with Kafka simple and enjoyable, and to save businesses time and money while growing their Kafka expertise. A single Docker container or JAR file that installs in minutes, Kpow's unique Kafka UI gives you instant visibility of your clusters and immediate access to your data.
Kpow is compatible with Apache Kafka 1.0+, Red Hat AMQ Streams, Amazon MSK, Instaclustr, Aiven, Vectorized, Azure Event Hubs, Confluent Platform, and Confluent Cloud.
Start with a free 30-day trial and solve your Kafka issues within minutes.
The Kpow Streams Agent integrates your Kafka Streams topologies with Kpow, offering near-realtime monitoring and visualisation of your streaming compute:
We intend for this software to be used by the wider JVM ecosystem (e.g., Java, Kotlin, Scala), so we would like our library to be available on Maven Central.
There weren't many resources online documenting anyone's experience deploying Clojure-centric software to Maven Central, and the few resources I came across were out of date with the current requirements (as of June 2021).
This blog post documents the steps needed to make your Clojure code available to a wider audience.
Create a Sonatype account
The entire process for claiming your own namespace on Maven Central starts with creating a JIRA ticket. This seemed a bit archaic to us, coming from Clojars, NPM, and Crates.io backgrounds. With those modern package managers, the integration between language, ecosystem, and registry is a lot more streamlined, which makes publishing software much easier.
We weren't sure if this would be an automated process or if we would have to wait for a human to manually approve our request. We were relieved that it was indeed somewhat automated, with a bot automatically approving each step.
Sign up to JIRA
The first step is to create an account on the Sonatype JIRA.
The credentials you provide here will also be the same credentials you use to deploy, so keep that in mind as you proceed.
Create a new ticket
Once you have signed up for the JIRA, create a new ticket and choose the issue type "New Project".
This is where you claim your group.id on Maven Central. This must be related to a website domain you own. For us, we claimed io.operatr as our company's domain is operatr.io. You will need to prove ownership of your domain in the next section.
Add a TXT entry to your domain
Once you have submitted your JIRA ticket, a bot should automatically reply to your issue within a few minutes asking for verification of your domain. The simplest way to verify your domain is to create a TXT entry.
For example, if you use Cloudflare to manage your DNS records you could follow these steps to add a TXT entry to your domain.
You will need to create a TXT entry containing the JIRA issue ID of your ticket (for example, OSSRH-70400).
Once you have added the TXT entry to your domain, the bot should automatically reply and confirm that your group.id has been prepared.
Deploy Requirements
You will now be granted the ability to deploy snapshot and release artifacts to s01.oss.sonatype.org for the group.id you have just registered.
All artifacts you deploy here are staged and can only be promoted to Maven Central if they meet the requirements.
This section documents how to configure your project.clj to meet the Sonatype requirements.
GPG Keys
First, we want to create a GPG key to sign our release. Lein's GPG Guide is a good starting point for how to do that.
Once you have created your GPG key you will need to upload your public key to a keyserver, such as https://keyserver.ubuntu.com/
In order to do this, you can export your GPG public key with the following command:
gpg --armor --export $MY_EMAIL
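You can paste the exported key into the keyserver's submission form, or push it directly from the command line (substitute your own key ID):

gpg --keyserver keyserver.ubuntu.com --send-keys $MY_KEY_ID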
Credentials
Next, you will need to update your ~/.lein/credentials.clj file to include your Sonatype credentials:
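Leiningen reads this file as a map from a repository URL pattern to credentials. A minimal example with placeholder values follows; note that Leiningen recommends storing this file GPG-encrypted as credentials.clj.gpg:

{#"https://s01\.oss\.sonatype\.org/.*"
 {:username "your-sonatype-username"
  :password "your-sonatype-password"}}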
It is a requirement to include both a -sources.jar and a -javadoc.jar as part of your deployment.
These requirements are tailored more towards Java codebases than Clojure:
If, for some reason (for example, license issue or it's a Scala project), you can not provide -sources.jar or -javadoc.jar, please make fake -sources.jar or -javadoc.jar with simple README inside to pass the checking
You can create -sources.jar and -javadoc.jar jars by using a :classifiers key in lein:
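A sketch of what that can look like in project.clj. The profile keys here are one plausible arrangement, not the only one: the sources jar repackages your src directory, and the javadoc jar packages a javadoc directory containing the simple README mentioned in the workaround above:

:classifiers {"sources" {:source-paths ^:replace ["src"]
                         :resource-paths ^:replace []}
              "javadoc" {:source-paths ^:replace []
                         :resource-paths ^:replace ["javadoc"]}}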
If you have followed all of the steps from the previous section, you should have a lein project that meets all Sonatype requirements and is ready to be deployed!
You can do this via a regular:
lein deploy
Once you have deployed your artifacts, the next step is to log in to the Nexus Repository Manager at https://s01.oss.sonatype.org/. Again, your JIRA credentials from before are used to log in.
Once inside, navigate to "Staging Repositories" - you should see an entry labeled XXX-1000.
Click on this item and verify that the contents you wish to deploy to Central are present.
If everything looks good, click the "Close" button. This will trigger the requirements check.
If the requirements check passes, you will be able to press the "Release" button. Once you press this button, your release will shortly be synced with Maven Central!
Kpow v79 introduces Kpow Admin roles with the ability to Stage Mutations and create Temporary RBAC Policies, all wrapped up in a new Settings UI.
Note: If you are currently using Kpow with RBAC, your users are all considered non-admin and will have slightly less visibility of Kpow until you specify admin roles.
Kpow Admin Roles
Admins have greater visibility and control of Kpow than normal users.
Non-Admin users can see their own access policies, configure their UI preferences, and view a log of the last 7 days of their account activity.
Admin Users can approve or deny staged mutations, create and remove temporary policies, and have full visibility of all existing system features like the Audit Log.
Admin Users can assign temporary access permissions to a role.
A common use-case would be providing a user TOPIC_INSPECT access to read data from a topic for an hour while resolving an issue in a Production environment.
[MELBOURNE, AUS] Apache Kafka and Apache Flink Meetup, 27 November
Melbourne, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
[SYDNEY, AUS] Apache Kafka and Apache Flink Meetup, 26 November
Sydney, we’re making it a double feature. Workshop by day, meetup by night - same location, each with valuable content for data and software engineers, or those working with Data Streaming technologies. Build the backbone your apps deserve, then roll straight into the evening meetup.
We’re building more than products, we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.