I'm not sure what you mean. Message persistence is a fundamental feature of Kafka that almost everyone using it relies on; it's not some esoteric feature that no one uses. We're each coming from our own network bias here, but in my experience a lot of people are really unhappy with the operational toil associated with running Kafka at scale in production.
Yes, it’s fundamental, but it’s generally not that significant to users of Kafka.
As a developer, Kafka is a place to publish and subscribe to data with reliability and performance.
As a developer, the fact that messages are persistent is little more than a nice extra: it means I can replay them if I ever need to.
Things like consumer groups and offsets are features of the API, but they aren’t complex. Every similar tool, whether it be RabbitMQ or IBM MQ, has its own API abstractions and features. Likewise, I need to learn about failover semantics, but that’s true of any API dependency.
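To make that concrete, here’s roughly what the developer-facing surface looks like with the stock Java client. This is a rough sketch, not production code, and the broker address, group id and topic name are made up: you join a consumer group, Kafka tracks your offsets, and “replay” is just rewinding them.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ReplayConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processor");         // hypothetical consumer group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // No committed offset for this group -> start from the oldest retained message.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));  // hypothetical topic

                // "Replay" is just rewinding the group's offsets: poll once so the
                // coordinator assigns partitions, then seek them back to the start.
                consumer.poll(Duration.ofSeconds(1));
                consumer.seekToBeginning(consumer.assignment());

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }

In practice you’d probably reset offsets with the kafka-consumer-groups tooling rather than seeking in code, but the point stands: the producer side is the mirror image of this, and that’s about the extent of what a typical application developer has to learn.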
It seems that you and the other posters here have reached a consensus that it’s hard to operate. Rather than saying that Kafka is dead or a polarising technology, a better line of argument is that it’s simply hard or expensive to operate at scale. (I personally think that’s par for the course with a technology like this, but that’s an aside.)
You have to remember that for every person operating Kafka, there will be, on average, tens or hundreds of developers using it. And the vast, vast majority of those developers will not find it particularly polarising. Instead, they’ll find it the de facto choice.