
What IBM's acquisition of Confluent means for Kafka users

On March 17, 2026, IBM completed its $11 billion acquisition of Confluent. The deal was announced in December and closed quickly. Since then, the question for anyone running Kafka in production has been straightforward: what changes, and what should we do about it?
Early Signals
The early signs are not reassuring. Hundreds of former Confluent employees have appeared on LinkedIn with #opentowork since the acquisition closed. Unofficial reports suggest the number may be as high as 800. There's been no official statement from IBM or Confluent on the scale of the cuts.
For engineering and platform teams, the concern isn't the layoffs themselves but what they imply about product velocity. Confluent's value has always been driven by deep engineering investment: in the Kafka core, in the connector ecosystem, in Schema Registry, and in the cloud platform. Fewer engineers means slower bug fixes, slower feature development, and less community engagement on KIPs (Kafka Improvement Proposals). If you depend on Confluent's managed connectors or cloud-native features, the pace at which those improve is directly relevant to your operational risk.
What History Tells Us
Two precedents are worth studying, though neither is a perfect analogy.
The first is Oracle's acquisition of Sun Microsystems in 2010, which brought MySQL into Oracle's hands. Over time the open-source release cadence slowed, and most Linux distributions switched their default database to MariaDB. Oracle never killed MySQL, but it deprioritised it. The key difference: Oracle's core business (Oracle Database) competed directly with MySQL, giving Oracle an incentive to let it stagnate. IBM has no equivalent competing product, so the incentive structure is different.
The more instructive precedent is Red Hat's decision, under IBM ownership, to replace CentOS Linux with CentOS Stream in late 2020. CentOS had been a free, stable, binary-compatible rebuild of RHEL. Thousands of organisations ran production workloads on it precisely because it tracked a stable RHEL release. CentOS Stream changed the model to a rolling preview of the next RHEL version. The product still existed, but the contract it had with its users (stability equivalent to RHEL) was broken. Organisations that had standardised on CentOS faced a forced migration they hadn't planned for.
The pattern in both cases is the same: the open-source project survived, but the terms of engagement changed in ways that imposed real costs on users who had built deep operational dependencies.
Where the Risk Actually Sits
Apache Kafka, the open-source project, is likely safe. Multiple companies are invested in its development: AWS, Redpanda, Aiven, and others all have commercial interests in the Kafka protocol continuing to thrive. IBM has publicly stated that its acquisition rationale depends on Kafka's openness and adoption.
The risk is in the commercial and proprietary layer. Specifically:
Confluent-managed connectors. If your data pipelines use Confluent's fully managed or licensed source and sink connectors, you have a hard dependency. There's no direct equivalent on other platforms. The open-source Kafka Connect framework is portable, but the managed hosting and operational wrapper around it is not.
Schema Registry. Confluent Schema Registry is the de facto standard for schema management in Kafka environments, but it is a Confluent product, not part of Apache Kafka. Alternative implementations exist (Apicurio Registry from Red Hat and AWS Glue Schema Registry being the most mature), but migration involves changing client configurations, compatibility testing, and potentially reworking CI/CD pipelines that integrate with the registry API; a client-side sketch of where that dependency lives follows below.
Confluent Cloud platform dependencies. Features like cluster linking, Stream Governance, the Confluent Terraform provider, and the Confluent CLI tooling don't have 1:1 replacements in the broader ecosystem. If your operational workflows are built around these, a migration is materially harder than "just point at a different broker."
Pricing and licensing. This is the quietest risk but potentially the most impactful. IBM has a well-established pattern of monetising enterprise customers through licensing changes. Confluent's current pricing may not survive contact with IBM's enterprise sales model.
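To make the Schema Registry point concrete, here is a minimal client-side sketch. It assumes a Java producer using Confluent's Avro serializer; the broker endpoint, registry URL, and credentials are placeholders, and the commented-out Apicurio URL assumes its Confluent-compatible REST endpoint (the exact path varies by Apicurio version). It is a sketch of where the dependency sits, not a recommendation for any particular target.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class RegistryDependencySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // The registry dependency lives here: the serializer class and the
        // registry it talks to, not in Apache Kafka itself.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");

        // Today: Confluent Schema Registry (cloud or self-managed), with its auth scheme.
        props.put("schema.registry.url", "https://schema-registry.example.com"); // placeholder
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", "<api-key>:<api-secret>");

        // A migration target such as Apicurio Registry exposes a Confluent-compatible
        // REST API, so the change is often the URL plus compatibility testing rather
        // than a new serializer. The path shown is an assumption; it varies by version.
        // props.put("schema.registry.url", "https://apicurio.example.com/apis/ccompat/v7");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // Produce Avro records as usual. Subject naming strategy and any CI/CD
            // integrations against the registry API still need separate review.
        }
    }
}
```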
What To Do Now
You don't need to migrate anything today. But you should know your exposure. A few concrete questions worth putting to your platform team:
- Which Confluent-specific features are we actually using, versus standard Kafka APIs?
- If we had to move to MSK, Redpanda, or Aiven, what would break? What would need to be re-implemented?
- How much of our CI/CD and data pipeline tooling is tied to Confluent-specific features, versus standard APIs like Schema Registry or Kafka Admin that other providers also support?
- Are our monitoring and operational workflows coupled to Confluent Control Center?
That last point is sometimes overlooked. If your engineers' daily Kafka workflows (topic inspection, consumer group management, ACL configuration, data inspection) run through Confluent tooling, that's both a technical dependency and a familiarity dependency. It's one of the things that makes a provider migration feel harder than it technically is.
This is an area where decoupling is straightforward. Vendor-agnostic Kafka management tools exist specifically to work across providers; Factor House's Kpow, for example, works with Confluent Cloud, AWS MSK, Redpanda, Aiven, and self-managed Kafka. Whichever tool you choose, the underlying point is the same: observability and operational workflows built on a tool independent of your Kafka provider are one less thing to migrate if that provider changes.
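The same decoupling argument applies at the API level: most day-to-day inspection tasks map onto the standard Admin API that any Kafka-compatible provider exposes. A minimal sketch, with placeholder connection details (a real cluster would also need the security.protocol and SASL settings your provider requires):

```java
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

// The same Admin API calls work against Confluent Cloud, MSK, Redpanda, Aiven,
// or a self-managed cluster; only the connection properties change.
public class ClusterInventory {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println("Topics:");
            admin.listTopics().names().get().forEach(t -> System.out.println("  " + t));

            System.out.println("Consumer groups:");
            admin.listConsumerGroups().all().get()
                 .forEach(g -> System.out.println("  " + g.groupId()));
        }
    }
}
```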
What To Watch
A few signals that will indicate how this acquisition is playing out over the coming months:
- Kafka open-source contribution cadence. Track commit activity from Confluent-affiliated contributors on the Apache Kafka repo; one rough way to measure this is sketched after this list. A sustained decline would suggest engineering resources are being redirected.
- KIP activity. Kafka Improvement Proposals are the mechanism for protocol-level changes. If Confluent-driven KIPs slow down or shift toward IBM enterprise use cases, that tells you something about roadmap priorities.
- Confluent Cloud pricing changes. Any movement toward consumption-based pricing with enterprise minimums, or changes to the free tier, would signal IBM's monetisation strategy taking hold.
- Connector ecosystem investment. Watch whether the managed connector catalogue continues to expand, or whether investment shifts toward IBM's own integration tooling (App Connect, MQ).
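On the first of those signals, here is one rough way to watch it yourself: count commits per author-email domain in a local clone of apache/kafka. This is only a proxy (many contributors commit from personal addresses, so Confluent-affiliated work won't all show up under one domain), and the repo path and start date below are assumptions.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.TreeMap;

// Rough heuristic: commits per author-email domain in a local clone of
// apache/kafka since a given date. Assumes `git` is on the PATH.
public class CommitCadence {
    public static void main(String[] args) throws Exception {
        String repoPath = args.length > 0 ? args[0] : "./kafka"; // local clone (assumption)
        String since = "2025-01-01";                              // window start (pick your own)

        Process p = new ProcessBuilder(
                "git", "-C", repoPath, "log", "--since=" + since, "--format=%ae")
                .redirectErrorStream(true)
                .start();

        Map<String, Integer> byDomain = new TreeMap<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String email;
            while ((email = r.readLine()) != null) {
                int at = email.indexOf('@');
                String domain = at >= 0 ? email.substring(at + 1).toLowerCase() : "unknown";
                byDomain.merge(domain, 1, Integer::sum);
            }
        }
        p.waitFor();

        byDomain.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(15)
                .forEach(e -> System.out.printf("%-30s %d%n", e.getKey(), e.getValue()));
    }
}
```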
None of this is cause for panic. But the cost of understanding your dependencies now is low, and the cost of discovering them under pressure is high.