Kpow v87 features support for AVRO Decimal Logical Type fields in both Data Inspect and Data Produce, improved Data Import UX for CSV message upload, and support for RHOSAK.
AVRO Decimal Logical Type Support
The AVRO specification allows for several logical types, including the decimal type, which represents an arbitrary-precision number with an optional scale.
On the JVM, decimal types are represented as a BigDecimal, with the wire format being a BigInteger's bytes stuffed into a ByteBuffer. In early versions of Kpow this meant users saw strange characters in the output for decimal fields, as our inspect engine just naively converted the ByteBuffer into a string (sorry about that!). Data Produce of decimals wasn't supported at all.
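To make that concrete, here is a minimal sketch using Apache Avro's Java library (illustrative only, not Kpow's internal code): a bytes schema annotated as decimal(10, 2), and the round trip between a BigDecimal and its ByteBuffer wire format.

```java
import java.math.BigDecimal;
import java.nio.ByteBuffer;
import org.apache.avro.Conversions;
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

public class DecimalWireFormat {
    public static void main(String[] args) {
        // A bytes field annotated with the decimal(precision=10, scale=2) logical type.
        Schema schema = LogicalTypes.decimal(10, 2)
                .addToSchema(Schema.create(Schema.Type.BYTES));

        Conversions.DecimalConversion conversion = new Conversions.DecimalConversion();

        // The BigDecimal's unscaled BigInteger value is what lands in the ByteBuffer.
        BigDecimal amount = new BigDecimal("1234.56");
        ByteBuffer wire = conversion.toBytes(amount, schema, schema.getLogicalType());

        // Reading the raw buffer as a string gives the "strange characters" described
        // above; converting it back with the logical type restores the original value.
        BigDecimal roundTripped = conversion.fromBytes(wire, schema, schema.getLogicalType());
        System.out.println(roundTripped); // 1234.56
    }
}
```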
In time we added a logical-type conversion that turned these ByteBuffers into floats in the Data Inspect results, but that's not a great solution due to float precision and interpretation issues.
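A quick illustration of the problem, using nothing more than the JVM's own BigDecimal: the closest double to 0.1 is not exactly 0.1, and the error surfaces as soon as you look closely.

```java
import java.math.BigDecimal;

class FloatPrecision {
    public static void main(String[] args) {
        // Constructing from the double carries the binary rounding error along.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // Constructing from the string preserves the intended decimal value.
        System.out.println(new BigDecimal("0.1"));
        // 0.1
    }
}
```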
Now, in v87 of Kpow, decimal logical type fields are represented in our UI as string fields. This means there is no room for float-precision issues between the many hands that ferry fields from browser to JVM to Kafka and back again.
Where a schema declares a decimal logical type field, our UI checks both scale and precision when producing messages. Both the Avro and Avro (Strict) serdes now support consuming and producing messages with decimal fields.
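As a rough sketch of the kind of check this implies (hypothetical code, not Kpow's actual implementation), validating a string value against a declared decimal(precision, scale) might look like this:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class DecimalCheck {
    // Reject values that need rounding to fit the declared scale,
    // or that carry more significant digits than the declared precision.
    static BigDecimal parse(String text, int precision, int scale) {
        BigDecimal value = new BigDecimal(text)
                .setScale(scale, RoundingMode.UNNECESSARY); // throws if rounding needed
        if (value.precision() > precision) {
            throw new IllegalArgumentException(
                    text + " does not fit decimal(" + precision + ", " + scale + ")");
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parse("1234.56", 10, 2));  // ok -> 1234.56
        System.out.println(parse("1234.567", 10, 2)); // throws ArithmeticException
    }
}
```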
Kpow v86 features improved Kpow snapshot compute performance and an updated Schema UI that shows AWS Glue schema status.
Compute Performance
Kpow is built to monitor and manage all of your Kafka resources, and many of our users run multi-cluster, multi-connect, multi-schema installations of Kpow where there is quite a lot to monitor and manage! One part of our continuous delivery is keeping an eye on the CPU and heap usage of our demo environment (2x MSK, 1x Connect, 2x Schema, 500 topics, 50 groups). We also work closely with a number of users with larger clusters to learn how we can improve Kpow's internal observation and computation engine.
Thanks to the users who helped us identify compute improvements, we have seen a significant reduction in Kpow heap consumption with this release.
Kpow v85 features improved Data Masking support and a new ConsumerOffsets serdes in Data Inspect.
Data Masking Granularity
Kpow allows you to mask PII or sensitive data in Data Inspect results. Now you can specify Data Policies with finer granularity: at the key, value, or headers level.
For example, a Data Policy might apply ShowLast4 redaction to every topic, masking credit-card fields in the value of any message while leaving headers and keys unredacted.
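For illustration only, here is a hypothetical sketch of the effect ShowLast4 redaction has on a credit-card value; in Kpow this is configured as a Data Policy, not written as application code.

```java
class ShowLast4 {
    // Mask everything except the last four characters of the field.
    static String redact(String value) {
        int keep = Math.min(4, value.length());
        String masked = "*".repeat(value.length() - keep);
        return masked + value.substring(value.length() - keep);
    }

    public static void main(String[] args) {
        System.out.println(redact("4111111111111111")); // ************1111
    }
}
```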
A simplified form now remembers previously selected serdes, supports multiple result sets at once, and offers the option to switch to classic mode.
Upgraded kJQ filters support multiple predicates joined by logical operators, with optional parentheses for explicit precedence. The kJQ entry field also gains context highlighting, auto-completion, a filter memory (press the up-arrow key to see previous filters), and quick query execution (hit shift-enter from within the kJQ entry field).
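As an illustration only (this example assumes kJQ's jq-like style; consult the kJQ documentation for the exact syntax), a multi-predicate filter with explicit precedence might look like:

```
.value.amount > 100 and (.value.currency == "USD" or .value.currency == "EUR")
```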
Audit Log
Data Inspect queries are now recorded in the Kpow Audit Log.
This gives Kpow admins fine-grained visibility into user data access.
Github Teams
Kpow now supports Github Teams as user roles in RBAC policies; see the Github SSO documentation.