
Releases
Empowering engineers with everything they need to build, monitor, and scale real-time data pipelines with confidence.

Release 94.6: Factor Platform, Ververica Integration, and kJQ Enhancements
The first Factor Platform release candidate is here, a major milestone toward a unified control plane for real-time data streaming technologies. This release also introduces Ververica Platform integration in Flex, plus support for Kafka Clients 4.1.0 / Confluent Schema SerDes 8.0.0 and new kJQ operators for richer stream inspection.
Factor Platform release candidate: Early access to unified streaming control
For organisations operating streaming at scale, the challenge has never been about any one technology. It's about managing complexity across regions, tools, and teams while maintaining governance, performance, and cost control.
We've spent years building tools that bring clarity to Apache Kafka and Apache Flink. Now, we're taking everything we've learned and building something bigger: Factor Platform, a unified control plane for real-time data infrastructure.
Factor Platform delivers complete visibility and federated control across hundreds of clusters, multiple clouds, and distributed teams from a single interface. Engineers gain deep operational insight into jobs, topics, and lineage. Business and compliance teams benefit from native catalogs, FinOps intelligence, and audit-ready transparency.
The first release candidate is live. It's designed for early adopters exploring large-scale, persistent streaming environments, and it's ready to be shaped by the teams who use it.
Interested in early access? Contact [email protected]

Unlocking native Flink management with Ververica Platform
Our collaboration with Ververica, the original creators of Apache Flink, enters a new phase with the introduction of the Flex + Ververica Platform integration. This brings Flink's enterprise management and observability capabilities directly into the Factor House ecosystem.
Flex users can now connect to Ververica Platform (Community or Enterprise v2) and instantly visualize session clusters, job deployments, and runtime performance. The current release provides a snapshot view of Ververica resources at startup, with live synchronization planned for future updates. It's a huge step toward true end-to-end streaming visibility—from data ingestion, to transformation, to delivery.
Configuration is straightforward: point to your Ververica REST API, authenticate via secure token, and your Flink environments appear right alongside your clusters.
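As a rough sketch of what that configuration involves (the variable names here are hypothetical; the exact keys are in the integration guide linked below):
VERVERICA_URL=https://ververica.example.com/api
VERVERICA_API_TOKEN=$VERVERICA_TOKEN # secure token, supplied via environment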
This release represents just the beginning of our partnership with Ververica. Together, we’re exploring deeper integrations across the Flink ecosystem, including OpenShift and Amazon Managed Service for Apache Flink, to make enterprise-scale stream processing simpler and more powerful.
Read the full Ververica Platform integration guide →
Advancing Kafka support with Kafka Clients 4.1.0 and Confluent Schema SerDes 8.0.0
We’ve upgraded to Kafka Clients 4.1.0 / Confluent Schema SerDes 8.0.0, aligning Kpow with the latest Kafka ecosystem updates. Teams using custom Protobuf SerDes should review potential compatibility changes.
Data Inspect gets more powerful with kJQ enhancements
Data Inspect in Kpow has been upgraded with improvements to kJQ, our lightweight JSON query language for streaming data. The new release introduces map() and select() functions, expanding the expressive power of kJQ for working with nested and dynamic data. These additions make it possible to iterate over collections, filter elements based on complex conditions, and compose advanced data quality or anomaly detection filters directly in the browser. Users can now extract specific values from arrays, filter deeply nested structures, and chain logic with built-in functions like contains, test, and is-empty.
For example, you can now write queries like:
.value.correctingProperty.names | map(.localeLanguageCode) | contains("pt")
Or filter and validate nested collections:
.value.names | map(select(.languageCode == "pt-Pt")) | is-empty | not
These updates make Data Inspect far more powerful for real-time debugging, validation, and exploratory data analysis. Explore the full range of examples and interactive demos in the kJQ documentation.
See map() and select() in action in the kJQ Playground →
Schema Registry performance improvements
We’ve greatly improved Schema Registry performance for large installations. The observation process now cuts down on the number of REST calls each schema observation makes by an order of magnitude. Kpow now defaults to SCHEMA_REGISTRY_OBSERVATION_VERSION=2, meaning all customers automatically benefit from these performance boosts.

Release 94.5: New Factor House docs, enhanced data inspection and URP & KRaft improvements
This release introduces a new unified documentation hub - Factor House Docs. It also introduces major data inspection enhancements, including comma-separated kJQ Projection expressions, in-browser search, and over 15 new kJQ transforms and functions. Further improvements include more reliable cluster monitoring with improved Under-Replicated Partition (URP) detection, KRaft improvements and fixes, the flexibility to configure custom SerDes per cluster, and a resolution for a key consumer group offset reset issue.
Introducing our new docs site
Announcement post: Introduction to Factor House docs v2.0.
All Factor House product documentation has been migrated to a new, unified site. This new hub, Factor House Docs, provides a single, streamlined resource for all users.

Key improvements you'll find on the new site include:
- Unified product content: All documentation is now in one place with a simplified structure, consolidating what was previously separate community and enterprise docs.
- Clear feature availability: COMMUNITY, TEAM, and ENTERPRISE badges have been added to clearly indicate which features are available in each edition.
- Improved organization: Content is now grouped into more relevant sections, making it easier to find the information you need.
- Powerful search, instant answers: Instantly find any configuration, example, or guide with our new Algolia-powered, site-wide search.
- Hands-on playground: A new section featuring interactive labs and projects to help you explore product capabilities.
- Ready for the future: The documentation for the new Factor Platform will be added and expanded upon release, ensuring this hub remains the most up-to-date resource for all product information.
The documentation has been updated and is live at https://docs.factorhouse.io.
Data inspect enhancements
Feature post: Data inspect enhancements in Kpow 94.5.
Kpow 94.5 builds upon the foundation of previous releases to deliver a more powerful and user-friendly data inspection experience.
kJQ Projection expressions & search
- Comma-separated kJQ Projection expressions: We've added support for comma-separated projection expressions (e.g., .value.base, .value.rates). This allows you to extract multiple fields from Kafka records in a single query, providing targeted data views without cluttering your output. It works for both key and value sub-paths (see the example after this list).
- In-browser search (Ctrl + F): You can now use in-browser search (Ctrl + F) with kJQ filters to quickly find records by JSON path or value without re-running queries. The results component is now fully keyboard-friendly and follows the Listbox pattern, making it easier for everyone to navigate. Screen reader users can understand the list structure, and keyboard users can move through and select items smoothly and predictably.
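As an illustration (the record shape is hypothetical), given a record value of { "base": "EUR", "rates": { "USD": 1.08 }, "source": "ecb" }, the projection:
.value.base, .value.rates
returns just the base and rates fields, omitting source from the output.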

Schema & deserialization insights
Data Inspect now provides detailed schema metadata for each message, including schema IDs and deserializer types. It also identifies misaligned schemas and poison messages, offering the following deserialization options:
- Drop record (default): Ignores erroneous records and displays only well-formatted records.
- Retain record: Includes both well-formatted and erroneous records. Instead of displaying the raw, poisonous value for problematic records, the system flags them with the message 'Deserialization exception'.
- Poison only: Displays only erroneous records, where the value is recorded as 'Deserialization exception'.

Sorting by Attribute
Selecting the 'Pretty printed (sorted)' display option sorts the attributes of the key or value alphabetically by name, improving readability and consistency during inspection.

High-performance streaming
Data Inspect can stream over 500,000 records smoothly without UI lag, enabling efficient analysis of large datasets.
kJQ improvements
Expanded kJQ capabilities with new transforms including parse-json, floor, ceil, upper-case, lower-case, trim, ltrim, rtrim, reverse, sort, unique, first, last, keys, values, is-empty, and flatten.
Also added new functions: within, split, and join, enabling richer data manipulation directly within kJQ queries.
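For illustration, here are two queries composing the new transforms with the usual kJQ pipe syntax (the record fields are hypothetical):
.value.tags | unique | sort | first
.value.name | trim | lower-case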
For more details on these new features, please refer to the updated kJQ manual. Also, be sure to visit the new interactive examples page on our new Factor House docs site—it's a great way to quickly verify your kJQ queries.
Consumer group management
Empty group member assignments
Previously, EMPTY consumer groups showed no offset information in the reset offset UI, preventing customers from resetting their offsets. This was a critical issue when a poison message caused an entire consumer group to go offline. The fix now fetches offsets directly from the AdminClient instead of relying on the snapshot, ensuring offsets can be reset in these scenarios.

Cluster & platform enhancements
Improved Under-Replicated Partition (URP) Detection
Feature post: Enhanced URP detection in Kpow 94.5.
We've enhanced our calculation for under-replicated partitions to provide more accurate health monitoring for your Kafka clusters. The system now correctly detects partitions with fewer in-sync replicas than the configured replication factor, even when brokers are offline and not reported by the AdminClient.
You can find URP details on the Brokers and Topics pages. The summary statistics will display the total number of under-replicated partitions. If this count is greater than zero, a new table will appear with details on all applicable topics.
Brokers

Topics

To further strengthen monitoring and alerting, new Prometheus metrics have been introduced to track under-replicated partitions. These metrics integrate seamlessly with your existing observability stack and provide more granular insights:
- broker_urp: The number of under-replicated topic partitions on this broker.
- topic_urp: The number of under-replicated partitions for this specific topic.
- topic_urp_total: The total number of under-replicated partitions across all topics in the Kafka cluster.
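As a sketch of how these metrics might feed alerting (the threshold, duration, and labels are illustrative, not a recommendation), a standard Prometheus rule could fire when any partition stays under-replicated for more than five minutes:
groups:
  - name: kafka-urp
    rules:
      - alert: KafkaUnderReplicatedPartitions
        expr: topic_urp_total > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "{{ $value }} under-replicated partitions in the Kafka cluster"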
KRaft improvements and fixes
The following improvements and bug fixes have been made to KRaft:
- Polished KRaft tables with improved sorting and corrected display of the End Offset (now showing timestamps instead of offsets).
- Fixed an issue where the controller broker was incorrectly displayed in broker details.
- Introduced new KRaft-specific Prometheus metrics to strengthen observability:
- kraft_high_watermark: The high watermark of the metadata log in the KRaft cluster, indicating the highest committed offset.
- kraft_leader_epoch: The current leader epoch in the KRaft cluster, incremented each time a new leader is elected.
- kraft_leader_id: The broker ID of the current leader in the KRaft cluster responsible for handling metadata operations.
- kraft_observer_count: The number of observer replicas in the KRaft cluster. Observers replicate the metadata log but do not participate in leader election.
- kraft_replicas_count: The total number of replicas (voters + observers) in the KRaft cluster responsible for maintaining the metadata log.
- kraft_voter_count: The number of voting replicas in the KRaft cluster. Voters participate in leader election and maintain the metadata log.
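For example, since kraft_leader_epoch increments on each election, a simple PromQL expression (illustrative) can surface recent KRaft leadership churn:
changes(kraft_leader_epoch[15m]) > 0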
Custom SerDes per cluster
Feature post: Per-cluster custom serdes configuration guide.
Kpow now supports configuring custom SerDes on a per-cluster basis, providing the flexibility to handle different data formats and compatibility requirements across your Kafka environments. This approach ensures smoother integration with diverse data pipelines, reduces serialization errors, and improves overall system reliability.
Here is an example of a per-cluster custom SerDes configuration:
serdes:
  - name: "PROTO 1"
    format: "json"
    isKey: true
    config:
      bootstrap: "some-value"
      limit: 22
      display: another-value
      abc: $SOME_ENV
  - name: "PROTO 2"
    cluster: "Trade Book (Staging)" # THIS IS A NEW KEY
    format: "json"
    isKey: false
    config:
      bootstrap: "some-value"
      limit: "100"
      display: another-value
      abc: $ANOTHER_ENV
Bug fixes
This release addresses several key issues improving UI stability, integrations, and navigation consistency across the platform.
- Resolved an edge case in the temporary policy UI display.
- Fixed the Microsoft Teams webhook integration.
- Fixed a buggy textarea inside the Data Masking Playground.
- Fixed a regression affecting the ordering of ksqlDB, Schema Registry, and Connect resources in the navigation dropdown.
- Fixed the ordering of Kafka resources. Examples:
  - CONNECT_RESOURCE_IDS=QA1,DEV1,DEV2 → shown in order: QA1, DEV1, DEV2
  - SCHEMA_REGISTRY_RESOURCE_IDS=QA1,DEV1,DEV2 → shown in order: QA1, DEV1, DEV2
  - KSQLDB_RESOURCE_IDS=QA1,DEV1,DEV2 → shown in order: QA1, DEV1, DEV2
Updated Help menu
Introduced a redesigned Help menu featuring an improved What's New section for quick access to the latest product updates. The menu also now includes direct links to join our Slack community, making it easier to connect with other users, share feedback, and get support right from within the product.


Release 94.4: Auto SerDes improvements
This minor hotfix release from Factor House resolves a bug when using Auto SerDes without Data policies, and adds support for UTF-8 String Auto SerDes inference.
Auto SerDes improvements
94.4 is a small hotfix release following up from last week's 94.3 release.
Kpow's Auto SerDes feature works alongside our data policies feature. Data policies allow you to configure declarative redaction policies against your data. When data policies are configured, any SerDes marked as non-redactable (e.g., String) will be excluded from the list of deserializers Kpow will try to use to infer the topic's data.
94.3 had a bug with this implementation where Auto SerDes detection was failing unless you had configured Kpow with a data policies file. 94.4 fixes this bug.
We have also improved Auto SerDes inference based on customer feedback: Kpow now attempts String inference at the lowest priority, and verifies that the inferred data is a valid UTF-8 encoded string before accepting it.
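Kpow's internal check isn't published, but conceptually, strict UTF-8 validation in Java looks something like this minimal sketch: a decoder configured to report (rather than replace) malformed input rejects any byte sequence that isn't well-formed UTF-8.
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public final class Utf8Check {
    // Returns true only for well-formed UTF-8; malformed input raises
    // CharacterCodingException instead of being replaced with U+FFFD.
    public static boolean isValidUtf8(byte[] bytes) {
        try {
            StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT)
                .decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }
}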

Release 94.3: BYO AI, Topic data inference, and Data inspect improvements
This minor release from Factor House introduces BYO AI model support, topic data inference, and major enhancements to data inspect—such as AI-powered filtering, new UI elements, and AVRO date formatting. It also adds integration with GCP Managed Kafka Schema Registry, improved webhook support, updated policy handling, and several UI and error-handling improvements.
BYO AI
Kpow now offers optional integrations with popular AI models, from Ollama to enterprise solutions like Azure OpenAI. These features are entirely opt-in: unless you configure an AI provider, Kpow will not expose any AI functionality.
As of release 94.3, Kpow supports integrations with:
- Azure OpenAI
- AWS Bedrock
- OpenAI
- Anthropic
- Ollama
You can configure one or more AI models, and set a default model in your user preferences for use with Kpow’s AI-powered features.
These models power all of Kpow's AI features. Read the documentation for more details.

kJQ filter generation
Transform natural language queries into powerful kJQ filters with AI-assisted query generation. This feature empowers users of all technical backgrounds to extract insights from Kafka topics without requiring deep JQ programming knowledge.
How it works
Simply describe what you're looking for in plain English, and the AI model generates a syntactically correct kJQ filter tailored to your data. The system leverages:
- Natural language processing: Convert conversational prompts like "show me all orders over $100 from the last hour" into precise kJQ expressions.
- Schema-aware generation: When topic schemas are available, the AI optionally incorporates field names, data types, and structure to create more accurate filters.
- Validation integration: Generated filters are automatically validated against Kpow's kJQ engine to ensure syntactic correctness before execution.
Usage
Navigate to any topic's Data Inspect view and select the AI Filter option. Enter your query in natural language, and Kpow will generate the corresponding kJQ filter. You can then execute, modify, or save the generated filter for future use.
The AI filter generator works best when provided with specific, actionable descriptions of the data you want to find. Include field names, value ranges, example data and logical operators in your natural language query for optimal results.
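For example (the field names here are hypothetical), a prompt such as "show me all orders over $100" might generate a filter like:
.value.order.total > 100
You can then refine the generated expression by hand before running it.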

Auto deserializers
Within Kpow's Data Inspect UI, you can specify Auto as the key or value deserializer and Kpow will attempt to infer the data format and deserialize the records it consumes.
Auto SerDes provides immediate data inspection capabilities without requiring prior knowledge of topic serialization formats. This feature is particularly valuable when:
- Exploring unfamiliar topics for the first time
- Working with multiple topics that may contain mixed data formats
- Debugging serialization issues across different environments
- Onboarding new team members who need quick topic insights
The Auto SerDes option appears alongside manually configurable deserializers like JSON, Avro, String, and custom SerDes in the Data Inspect interface.
When selected, Kpow analyzes each topic and applies the most appropriate deserializer automatically.
Topic inference observation
To persist and query inferred topic information—such as key deserializer, value deserializer, and schema registry ID—in Kpow’s UI, enable the Topic SerDes Observation job by setting:
INFER_TOPIC_SERDES=true

kJQ language improvements
In response to our customers' evolving filtering needs, we've significantly improved the kJQ language to make Kafka record filtering more powerful and flexible. Check out the updated kJQ filters documentation for full details.
Below are some highlights of the improvements:
Chained alternatives
Selects the first non-null email address and checks if it ends with ".com":
.value.primary_email // .value.secondary_email // .value.contact_email | endswith(".com")
String/Array slices
Matches where the first 3 characters of transaction_id equal TXN:
.value.transaction_id[0:3] == "TXN"
For example, { "transaction_id": "TXN12345" } matches, while { "transaction_id": "ORD12345" } does not.
UUID type support
kJQ supports UUID types out of the box, whether they come from the UUID deserializer, AVRO logical types, or Transit/JSON and EDN deserializers with richer data types.
To compare against literal UUID strings, prefix them with #uuid to coerce into a UUID:
.key == #uuid "fc1ba6a8-6d77-46a0-b9cf-277b6d355fa6"
Data inspect improvements
UI overhaul
We've enhanced the data inspect UI with smaller toolbar buttons and reorganized dropdowns to accommodate the new features detailed below.

Event log
All queries display the 'Event log' when anomalies occur during search, such as data policy applications, deserialization errors, or record processing exceptions. Each event includes a timestamp and a severity level (INFO, WARN, or ERROR).

Send to kREPL
Customers can now send data inspect queries directly to the kREPL, our programmatic Kafka interface designed for power users. The kREPL enables notebook-style data transformations—such as aggregations and groupings—on consumed data.
You can access this feature through the 'Send' dropdown in the result metadata toolbar. Learn more about the kREPL by visiting our documentation.

GCP managed Kafka schema registry integration
Kpow now supports the Google Schema Registry as a variant of the Confluent-compatible schema registry. Here is an example configuration:
ENVIRONMENT_NAME=GCP Kafka Cluster
BOOTSTRAP=bootstrap.<cluster-id>.<gcp-region>.managedkafka.<gcp-project-id>.cloud.goog:9092
SECURITY_PROTOCOL=SASL_SSL
SASL_MECHANISM=OAUTHBEARER
SASL_LOGIN_CALLBACK_HANDLER_CLASS=com.google.cloud.hosted.kafka.auth.GcpLoginCallbackHandler
SASL_JAAS_CONFIG=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
SCHEMA_REGISTRY_NAME=GCP Schema Registry
SCHEMA_REGISTRY_URL=https://managedkafka.googleapis.com/v1/projects/<gcp-project-id>/locations/<gcp-region>/schemaRegistries/<registry-id>
SCHEMA_REGISTRY_BEARER_AUTH_CUSTOM_PROVIDER_CLASS=com.google.cloud.hosted.kafka.auth.GcpBearerAuthCredentialProvider
SCHEMA_REGISTRY_BEARER_AUTH_CREDENTIALS_SOURCE=CUSTOM
Additional webhooks
In addition to Slack, we now support both Microsoft Teams and generic HTTP webhooks. When configured, Kpow can send Data governance (Audit log) records to any of these endpoints; all you need to do is configure a webhook.
Temporary policy improvements
Administrators can now set a custom expiration date and time when creating temporary policies. Selecting Custom in the duration dropdown activates a mandatory datetime picker that only allows future dates.
Configuration options include the TEMPORARY_POLICY_MAX_MS environment variable to control the maximum policy duration (default: 1 hour). Setting this to -1 removes the duration limit.
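For example, to allow temporary policies of up to 24 hours (24 × 60 × 60 × 1000 ms):
TEMPORARY_POLICY_MAX_MS=86400000
Or to remove the duration limit entirely:
TEMPORARY_POLICY_MAX_MS=-1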
Discover more about these updates in our documentation.
Helm charts
Artifact Hub updates
- Added values schema support for improved chart validation
- Charts are now signed for enhanced security and trust
- Publisher identity is verified to ensure authenticity
- Charts are marked as official for trusted, curated content
Container security improvements
The container security context has been tightened by disabling privilege escalation (AllowPrivilegeEscalation=false), running the container as a non-root user (UID 1001), disallowing privileged mode, and dropping all Linux capabilities.
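In Kubernetes terms, these settings correspond to a container securityContext along these lines (a sketch; see the chart's values for the exact placement):
securityContext:
  allowPrivilegeEscalation: false
  privileged: false
  runAsNonRoot: true
  runAsUser: 1001
  capabilities:
    drop:
      - ALL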

Release 94.2: Google MSK, Data Inspect, and A+ Docker Health
This minor release from Factor House introduces support for Google Cloud Managed Service for Apache Kafka, along with new feature improvements such as data inspect display options, AVRO Date Logical Type formatting, and flat CSV export, and fixes a bug in consumer offset reset!
Read on for details of:
- Support for Google Cloud Managed Service for Kafka
- Versatile data inspect display options
- Improved support for AVRO Date Logical Types
- New flat CSV export format for data inspect
- Even faster frontend with React and Tailwind migrations
- New open-source Clojure libraries!
- “A”-rated Docker health score
- Bug fixes with consumer offset reset
👏 Special thanks to our users who provided feedback and contributed to this release!
Google Cloud MSK Support
Google Cloud Managed Service for Apache Kafka offers a fully managed Apache Kafka solution, simplifying deployment and operations for real-time data pipelines.
Kpow now offers full support to monitor and manage your Google Cloud Kafka clusters. Learn how to Set Up Kpow with Google Cloud Managed Service for Apache Kafka.
Versatile Data Inspect Display Options
Data inspect is Kpow's most used feature, so it made perfect sense to enhance it with the following display options:
- Order by:
  - Timestamp
  - Offset
- Collapse data greater than [x] kB
- Key and Value display as Pretty printed or Raw
- Timestamp format:
  - UNIX
  - UTC Datetime
  - Local Datetime
- Record size display as Pretty printed or Int
- Set visibility for fields: Topic, Partition, Offset, Headers, Timestamp, Age, Key size (bytes), Value size (bytes)
Display options are persistent in local cache for multi-session use.
Field visibility carries over to data export as well. Fields marked as not visible will be excluded in data export.
To use these options, click Display in the menu bar atop the search results to open the Display options menu. For help, see the updated docs: Data inspect

Improved Support for AVRO Date Logical Types
Previously, Date Logical Types in AVRO schemas would display only as integer values, which are not human-readable and limit filtering in data inspect by requiring an integer input rather than more expressive date-time representations.
In the 94.2 release, AVRO Date Logical Types can now be formatted to and from date-time Strings. Date manipulation functions have been built into kJQ as well, to enhance your data inspect filtering (see updated docs: Date Filtering with 'from-date').
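As an illustration only (the exact function signature is covered in the linked docs), a filter along these lines compares a date-typed field against a date-time String:
.value.timestamp >= from-date("2025-05-14")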
A sample AVRO schema using this feature is:
{
"type": "record",
"name": "liquidity-update",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "timestamp",
"type": {
"type": "int",
"Logical Type": "date"
}
},
{
"name": "pool",
"type": "string"
},
{
"name": "nodes",
"type": "string"
}
]
}
Flat CSV Export Format
An option has been added to data inspect for flat CSV export. This has been a requested feature that will enable better human-readability and processing of JSON-serialized records. Rather than the key/value being an escaped JSON object:
key:
{
"id": "c8b3256f-be66-436a-a575-007588d7a9a3"
}
value:
{
"id": "c8b3256f-be66-436a-a575-007588d7a9a3",
"timestamp": "2025-05-14",
"pool": "CSX-JBN",
"nodes": "7-2-10-1"
}
It equates to the following in flat CSV format:
key, value.id, value.timestamp, value.pool, value.nodes
{"id" "c8b3256f-be66-436a-a575-007588d7a9a3"}, c8b3256f-be66-436a-a575-007588d7a9a3, 2025-05-14, CSX-JBN, 7-2-10-1The exported output from a number of such records is therefore:

Notice that only the value fields are exploded into column format (not the key), and that they are alphabetically ordered for easier navigation.
React and Tailwind Migrations + New Open Source Libraries
The Kpow UI is known for being oh-so-fast, and it just got snappier with our migration to React 19 and Tailwind 4.0.
With this migration we preserve our gold standard of web accessibility (WCAG 2.1 AA Compliant), while also ensuring that our products can scale under high-demand applications, exhibiting even better efficiency than prior versions (with roughly a quarter of the render commits of the former Reagent-based version, under the same conditions).
To achieve this, we built two new open source ClojureScript libraries that will serve the wider Clojure community. The purpose of these libraries is to preserve the spirit of Reagent and re-frame, but modernize their foundations to align with today's React.
Our new libraries for the Clojure community are:
- HSX: a Hiccup-to-React compiler that lets us write components the way we always have, but produces pure React function components under the hood.
- RFX: a re-frame-inspired subscription and event system built entirely on React hooks and context.
HSX and RFX are more than just drop-in replacements; they're the result of over a decade's experience working in ClojureScript UIs. As a result, our products run faster, our code is easier to reason about, and our UIs scale even more efficiently.
We invite you to try HSX and RFX, and to learn more about their development journey: Beyond Reagent: Migrating to React 19 with HSX and RFX
“A”-Rated Docker Health Score
Due to excellent work from our dev team, our Kpow container has received an “A” Docker Health Score. We're proud to ship software that is secure, well maintained, and efficient, giving you confidence that the tools managing your critical data-streaming pipelines meet stringent quality standards.

Consumer Offset Reset
This release fixes a regression in 94.1 where resetting a consumer group's offsets could fail with an unexpected error.
Consumer offset management plays a vital role in controlling consumer group behavior. For an updated in-depth instructional, see: Consumer Offset Management in Kpow

Release 94.1: Streams Agent, Consumer Offset Management, and Helm Charts
This major version release from Factor House improves consumer offset management, Kafka Streams telemetry, and data inspect capabilities, and introduces new Helm Charts!
Specifically, this release:
- Improves data inspect
- Improves consumer offset management
- Improves Kafka Streams agent integration
- Adds Flex and Community Helm Charts
- Resolves a number of small bugs, and
- Bumps Kafka client dependencies to v3.9.0.
Brand new Helm Charts + Release simplification
This has been a highly requested feature for a while now: in-depth Helm Charts for all of our products!
Customers can now install Helm charts for our full product suite - Flex, Kpow and the Community Editions of both:
helm repo add factorhouse https://charts.factorhouse.io
helm repo update
helm install my-kpow-ce factorhouse/kpow-ce
To read more about our improvements to Helm + Docker, see this blog post: Updates to container specifics (DockerHub and Helm Charts).
We've also streamlined our deliverables, introducing clearer release channels to make accessing our products easier than ever. This groundwork sets the stage for a big year of exciting releases!
Consolidate release artifacts
- Consistently deploy all artifacts (Maven, Clojars, AWS Marketplace, Helm, ArtifactHub and DockerHub) to the factorhouse organisation.
- See blog post: A final goodbye to OperatrIO for more details.
Simplify DockerHub repos
- Consolidate DockerHub repos: we now deploy to the factorhouse/kpow and factorhouse/flex repos respectively.
- Community Editions are still found at the factorhouse/kpow-ce and factorhouse/flex-ce repos.
- See blog post: Updates to container specifics (DockerHub and Helm Charts) for more details.
Communicate Java compatibility and evolution
- Bump our default Java version to JDK17 for Docker and Helm
- Java 11 and 8 JARs still available
- See blog post: Releasing Software at Factor House: Our Java Compatibility and Evolution Strategy for more details.
1.0.0 Kpow Streams Agent!
Our beloved open-source Kpow Streams Agent hits its 1.0.0 release milestone!
Along with core improvements to the agent, we have poured a lot of love into Kpow's Kafka Streams UI and crunched down on backend work required when processing streams metrics.
- Visit the GitHub README to find out more about the changes and to get started
- JavaDocs for using the agent are now available over at javadoc.io
- Kpow's Streams Agent can be found on Maven Central at io.factorhouse/kpow-streams-agent
Data Inspect improvements
This release is packed with quality improvements to our data inspection functionality, making it smoother, more reliable, and better than ever!
Stay tuned! We're bringing plenty more quality improvements to Kpow's data inspection functionality this year!
New modes
The data inspect form now contains additional Modes. New options include:
- Slice (default) - queries records beginning from a start time
- Bounded window - queries records between a start and end time
Improved data inspect reliability
There has been an outstanding bug in Kafka relating to long-running consumers that could not recover after certain broker rolling-upgrade scenarios. This bug is captured in KAFKA-13467 and resolved in Kafka clients 3.8.0 and above.
Some customers have reported this exact issue when running on Confluent Cloud. We think Confluent periodically roll their brokers in each cluster (probably for reasonable ops reasons) and update their DNS with new broker IPs rather than changing the bootstrap.
This release now resolves this long-standing data inspect issue! Our data inspect consumer pool should be more resilient to broker upgrades.
We have also added the option to manually restart the consumer pool, in case of unexpected consumer death.
Configurable isolation level
Starting with 94.1, customers can now specify the isolation level of the query (defaulting to READ_UNCOMMITTED). When set to READ_COMMITTED, data inspect results will only return records from committed transactions. This can be particularly useful for customers debugging issues who want to ignore data in uncommitted transactions.
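Under the hood, this corresponds to the standard Kafka consumer property, shown here in properties-file form:
isolation.level=read_committed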
Improved Offset Management
Kpow has long supported managing consumer group offsets, but this release gives the feature the attention it deserves:
- Reset offset by providing new offset value
- Reset offset by providing a precise timestamp
- Consistent action menu across different nodes of consumer group topology as well as in table views

Join the Factor Community
We’re building more than products; we’re building a community. Whether you're getting started or pushing the limits of what's possible with Kafka and Flink, we invite you to connect, share, and learn with others.