Developer Experience Engineer

Jaehyeon Kim

Jaehyeon Kim leads Developer Experience at Factor House, where he creates technical content, drives platform engineering initiatives, and speaks at meetups and conferences. He has deep expertise in real-time systems and modern data platforms, having worked extensively with Apache Kafka, Apache Flink, and Apache Spark across a decade of data and cloud engineering roles.

Expertise

Jae's expertise sits at the intersection of real-time data engineering and developer education. He has designed and built production systems across stream processing, data lakehouse architectures, data lineage, and platform observability, with a particular focus on making complex distributed systems resilient and maintainable at scale. At Factor House, he translates that hands-on experience into technical content and conference talks that help engineers get more out of Apache Kafka and Apache Flink.

Experience

Jae currently leads Developer Experience at Factor House. Before that, he was a Senior Data Engineer at Simple Machines, working on a data harmonisation project for Insurance Australia Group. Prior to that, he spent two and a half years as a Data and Cloud Consultant at Cevo Australia, delivering end-to-end data lake, data warehouse, and real-time transformation projects for clients across construction, financial services, and energy. His earlier career spans senior engineering and data science roles at CloudWave, Network 10, CoreLogic Australia, and Xaxis Digital.

Certifications

CKAD: Certified Kubernetes Application Developer

Build and Deploy a Generative AI Solution Using a RAG Framework

Education

Master of Actuarial Studies (Extension), University of New South Wales

Master of Economics, Seoul National University

Bachelor of Economics, Kyung Hee University

Latest articles

Article
March 26, 2026
Beyond JMX: Supercharging Grafana Dashboards with High-Fidelity Metrics

Move beyond raw JMX noise and unlock business-relevant observability for your Kafka environment. This guide explores how to feed high-fidelity, pre-calculated metrics, such as consumer group lag in seconds, directly from Kpow into your Grafana dashboards for proactive capacity planning and incident response.

Article
February 17, 2026
Rapid Kafka Diagnostics: A Unified Workflow for Root Cause Analysis

Fragmented tooling creates a context gap that hinders effective Kafka monitoring and troubleshooting, forcing engineers to manually piece together logs and metrics. This guide demonstrates how to close that gap using Kpow's unified workflow to identify a stall, inspect the data, and resolve the incident in a single interface.

Article
February 2, 2026
Kafka Observability with Kpow: Driving Operational Excellence

Apache Kafka is the central nervous system of the modern enterprise, yet operating it at scale often leads to reactive maintenance cycles. This article identifies three critical gaps, in context, data quality, and governance, and introduces a comprehensive strategy for transforming reactive troubleshooting into proactive operational excellence with Kpow.